Voices in AI – A Conversation with Matt Grob

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Matt Grob. He is the Executive Vice President of Technology at Qualcomm Technologies, Inc. Grob joined Qualcomm back in 1991 as an engineer. He also served as Qualcomm’s Chief Technology Officer from 2011 to 2017. He holds a Master of Science in Electrical Engineering from Stanford, and a Bachelor of Science in Electrical Engineering from Bradley University. He holds more than seventy patents. Welcome to the show, Matt.

Matt Grob: Thanks, Byron, it’s great to be here.

So what does artificial intelligence kind of mean to you? What is it, kind of, at a high level? 

Well, it’s the capability that we give to machines to sense and think and act, but it’s more than just writing a program that can go one way or another based on some decision process. Really, artificial intelligence is what we think of when a machine can improve its performance without being reprogrammed, based on gaining more experience or being able to access more data. If it can get better, if it can improve its performance, then we think of that as machine learning or artificial intelligence.
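A minimal sketch of that distinction, in Python (an illustrative example, not anything Qualcomm-specific): the program below is never reprogrammed, yet its performance improves as it accesses more data.

```python
# Illustrative only: the "model" is a running estimate of an unknown quantity.
# The code never changes, but its error shrinks as it gains experience.
import random

def fit(samples):
    """'Train' by averaging all of the data seen so far."""
    return sum(samples) / len(samples)

random.seed(0)
true_value = 42.0
data = [true_value + random.gauss(0, 5) for _ in range(1000)]

for n in (10, 100, 1000):
    estimate = fit(data[:n])
    # More data, same program: the estimate typically gets closer to the truth.
    print(f"after {n:4d} samples: estimate = {estimate:6.2f}, "
          f"error = {abs(estimate - true_value):.2f}")
```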

It learns from its environment, so every instantiation of it heads off on its own path, off to live its own AI life, is that the basic idea?

Yeah, for a long time we’ve been able to program computers to do what we want. Let’s say, you make a machine that drives your car or does cruise control, and then we observe it, and we go back in and we improve the program and make it a little better. That’s not necessarily what we’re talking about here. We’re talking about the capability of a machine to improve its performance in some measurable way without being reprogrammed, necessarily. Rather it trains or learns from being able to access more data, more experience, or maybe talking to other machines that have learned more things, and therefore improves its ability to reason, improves its ability to make decisions or drive errors down or things like that. It’s those aspects that separate machine learning, and these new fields that everyone is very excited about, from just traditional programming.

When you first started all of that, you said the computer “thinks.” Were you using that word casually or does the computer actually think?

Well, that’s a subject of a lot of debate. I need to point out, my experience, my background, is actually in signal processing and communications theory and modem design, and a number of those aspects relate to machine learning and AI, but, I don’t actually consider myself a deep expert in those fields. But there’s a lot of discussion. I know a number of the really deep experts, and there is a lot of discussion on what “think” actually means, and whether a machine is simply performing a cold computation, or whether it actually possesses true imagination or true creativity, those sorts of elements.

Now in many cases, the kind of machine that might recognize a cat from a dog—and it might be performing a certain algorithm, a neural network that’s implemented with processing elements and storage taps and so forth—is not really thinking like a living thing would do. But nonetheless it’s considering inputs, it’s making decisions, it’s using previous history and previous training. So, in many ways, it is like a thinking process, but it may not have the full, true creativity or emotional response that a living brain might have.

You know it’s really interesting because it’s not just a linguistic question at its core, because either the computer is thinking, or it’s simulating something that thinks. And I think the reason those are different is because they speak to what the limits are, ultimately, of what we can build.

Alan Turing way back in his essay was talking about, “Can a machine think?” He asked the question sixty-five years ago, and he said that the machine may do it a different way but you still have to call it “thinking.” So, with the caveat that you’re not at the vanguard of this technology, do you personally call the ball on that one way or the other, in terms of machine thought?

Yeah, I believe, and I think the prevailing view is, though not everyone agrees, that many of the machines that we have today, the agents that run in our phones, and in the cloud, and can recognize language and conditions are not really, yet, akin to a living brain. They’re very, very useful. They are getting more and more capable. They’re able to go faster, and move more data, and all those things, and many metrics are improving, but they still fall short.

And there’s an open question as to just how far you can take that type of architecture. How close can you get? It may get to the point where, in some constrained ways, it could pass a Turing Test, and if you only had a limited input and output you couldn’t tell the difference between the machine and a person on the other end of the line there, but we’re still a long way away. There are some pretty respected folks who believe that you won’t be able to get the creativity and imagination and those things by simply assembling large numbers of AND gates and processing elements; that you really need to go to a more fundamental description that involves quantum gravity and other effects, and most of the machines we have today don’t do that. So, while we have a rich roadmap ahead of us, with a lot of incredible applications, it’s still going to be a while before we really create a real brain.

Wow, so there’s a lot going on in there. One thing I just heard was, and correct me if I’m saying this wrong, that you don’t believe we can necessarily build an artificial general intelligence using, like, a Von Neumann architecture, like a desktop computer. And that what we’re building on that trajectory can get better and better and better, but it won’t ever have that spark, and that what we’re going to need is the next generation of quantum computer, or just a fundamentally different architecture, and maybe those can emulate a human brain’s functionality, not necessarily how it does it but what it can do. Is that fair? Is that what you’re saying?

Yeah, that is fair, and I think there are some folks who believe that is the case. Now, it’s not universally accepted. I’m kind of citing some viewpoints from folks like physicist Roger Penrose, and there’s a group around him—Penrose Institute, now being formed—that are exploring these things and they will make some very interesting points about the model that you use. If you take a brain and you try to model a neuron, you can do so, in an efficient way with a couple lines of mathematics, and you can replicate that in silicon with gates and processors, and you can put hundreds of thousands, or millions, or billions of them together and, sure, you can create a function that learns, and can recognize images, and control motors, and do things and it’s good. But whether or not it can actually have true creativity, many will argue that a model has to include effects of quantum gravity, and without that we won’t really have these “real brains.”

You read in the press about both the fears and the possible benefits of these kinds of machines, that may not happen until we reach the point where we’re really going beyond, as you said, Von Neumann, or even other structures just based on gates. Until we get beyond that, those fears or those positive effects, either one, may not occur.

Let’s talk about Penrose for a minute. His basic thesis, and you probably know this better than I do, is that Gödel’s incompleteness theorem says that the system we’re building can’t actually duplicate what a human brain can do.

Or said another way, he says there are certain mathematical problems that are not able to be solved with an algorithm. They can’t be solved algorithmically, but a human can solve them. And he uses that to say, therefore, a human brain is not a computational device that just runs algorithms, that it’s doing something more; and he, of course, thinks it’s quantum tunneling and all of that. So, do you think that’s what’s going on in the brain? Do you think the brain is fundamentally non-computational?

Well, again, I have to be a little reserved with my answer to that because it’s not an area that I feel I have a great deep background in. I’ve met Roger, and other folks around him, and some of the folks on the other side of this debate, too, and we’ve had a lot of discussions. We’ve worked on computational neuroscience at Qualcomm for ten years; not thirty years, but ten years, for sure. We started making artificial brains that were based on the spiking neuron technique, which is a very biologically inspired technique. And again, they are processing machines and they can do many things, but they can’t quite do what a real brain can do.

An example that was given to me was the proof of Fermat’s Last Theorem. If you’re familiar with Fermat’s Last Theorem, it was written down I think maybe two hundred years ago or more, and the creator, Fermat, a mathematician, wrote in the margin of his notebook that he had a proof for it, but then he never got to prove it. I think he died before he could. And it wasn’t until about twenty-some years ago when Andrew Wiles finally proved it. It’s claimed that the insight and creativity required to do that work would not be possible by simply assembling a sufficient number of AND gates and training them on previous geometry and math constructs, and then giving it this one and having the proof come out. It’s just not possible. There had to be some extra magic there, which Roger, and others, would argue requires quantum effects. And if you believe that—and I obviously find it very reasonable and I respect these folks, but I don’t claim that my own background informs me enough on that one—it seems very reasonable; it mirrors the experience we had here for a decade when we were building these kinds of machines.

I think we’ve got a way to go before some of these sci-fi type scenarios play out. Not that they won’t happen, but it’s not going to be right around the corner. But what is right around the corner is a lot of greatly improved capabilities as these techniques basically fundamentally replace traditional signal processing for many fields. We’re using it for image and sound, of course, but now we’re starting to use it in cameras, in modems and controllers, in complex management of complex systems, all kinds of functions. It’s really exciting what’s going on, but we still have a way to go before we get, you know, the ultimate.

Back to the theorem you just referenced, and I could be wrong about this, but I recall that he actually claimed to have a surprisingly simple proof of the theorem, which now some people say was just wrong, that there isn’t a simple proof for it. But because everybody believed there was a proof for it, we eventually solved it.

Do you know the story about a guy named Dantzig back in the ’30s? He was a graduate student in statistics, and his professor had written two famous unsolved problems on the chalkboard and said, “These are famous unsolved problems.” Well, Dantzig comes in late to class, and he sees them and just assumes they’re the homework. He writes them down, and takes them home, and, you can guess, he solves them both. He remarked later that they seemed a little harder than normal. So, he turned them in, and it was about two weeks before the professor looked at them and realized what they were. And it’s just fascinating to think that that guy has the same brain I have, I mean his is far better and all that, but when you think about all those capabilities that are somewhere probably in there.

Those are wonderful stories. I love them. There’s one about Gauss when he was six years old, or eight years old, and the teacher punished the class, told everyone to add up the numbers from one to one hundred. And he did it in an instant because he realized that 100 + 0 is 100, and 99 + 1 is 100, and 98 + 2 is 100; fifty pairs like that, plus the unpaired 50 in the middle, gives 5,050. The question is, “Is a machine based on neural nets, and coefficients, and logistic regression, and SVM and those techniques, capable of that kind of insight?” Likely it is not. And there is some special magic required for that to actually happen.
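In general form, Gauss’s pairing trick is the closed form for the sum of the first n integers:

```latex
\sum_{k=1}^{n} k = \frac{n(n+1)}{2},
\qquad
\sum_{k=1}^{100} k = \frac{100 \cdot 101}{2} = 5050.
```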

I will only ask you one more question on that topic and then let’s dial it back in more to the immediate future. You said, “special magic.” And again, I have to ask you, like I asked you about “think”: are you using magic colloquially, or is it just physics that we don’t understand yet?

I would argue it’s probably the latter. With the term “magic,” there’s a famous Arthur C. Clarke quote that, “Any sufficiently advanced technology is indistinguishable from magic.” I think, in this case, the structure of a real brain and how it actually works, we might think of it as magic until we understand more than we do now. But it seems like you have to go into a deeper level, and a simple function assembled from logic gates is not enough.

In the more present day, how would you describe where we are with the science? Because it seems we’re at a place where you’re still pleasantly surprised when something works. It’s like, “Wow, it’s kind of cool, that worked.” And as much as there are these milestone events like AlphaGo, or Watson, or the one that beat the poker players recently, how quickly do you think advances really are coming? Or is it the hope for those advances that’s really got everyone revved up?

I think the advances are coming very rapidly, because there’s an exponential nature. You’ve got machines that have processing power which is increasing in an exponential manner, and whether it continues to do so is another question, but right now it is. You’ve got memory, which is increasing in an exponential manner. And then you’ve also got scale, which is the number of these devices that exist and your ability to connect to them. And I’d really like to get into that a little bit, too, the ability of a user to tap into a huge amount of resource. So, you’ve got all of those combined with algorithmic improvements, and, especially right now, there’s such a tremendous interest in the industry to work on these things, so lots of very talented graduates are pouring into the field. The product of all those effects is causing very, very rapid improvement. Even though in some cases the fundamental algorithm might be based on an idea from the ’70s or ’80s, we’re able to refine that algorithm, and we’re able to couple that with far more processing power at a much lower cost than ever before. And as a result, we’re getting incredible capabilities.

I was fortunate enough to have a dinner with the head of a Google Translate project recently, and he told me—an incredibly nice guy—that that program is now one of the largest AI projects in the world, and has a billion users. So, a billion users can walk around with their device and basically speak any language and listen to any language or read it, and that’s a tremendous accomplishment. That’s really a powerful thing, and a very good thing. And so, yeah, those things are happening right now. We’re in an era of rapid, rapid improvement in those capabilities.

What do you think is going to be the next watershed event? We’re going to have these incremental advances, and there’s going to be more self-driving cars and all of these things. But these moments that capture the popular imagination, like when the best Go player in the world loses, what do you think will be another one of those for the future?

When you talk about AlphaGo and Watson playing Jeopardy and those things, those are significant events, but they’re machines that someone wheels in, and they are big machines, and they hook them up and they run, but you don’t really have them available in the mobile environment. We’re on the verge now of having that kind of computing power, not just available to one person doing a game show, or the Go champion in a special setting, but available to everyone at a reasonable cost, wherever they are, at any time. Also, the learning experience of one person can benefit the rest. And so, that, I think, is the next step. It’s when you can use that capability, which is already growing as I described, and make it available in a mobile environment, ubiquitously, at reasonable cost, then you’re going to have incredible things.

Autonomous vehicles are an example, because that’s a mobile thing. It needs a lot of processing power, and it needs processing power local to it, on the device, but it also needs to access tremendous capability in the network, and it needs to do so at high reliability and at low latency, and there are some interesting details there, so vehicles are a very good example. Vehicles are also something that we need to improve dramatically, from a safety standpoint, versus where we are today. They’re critical to the economies of cities and nations, so there’s a lot of scale. So, yeah, that’s a good crucible for this.

But there are many others. Medical devices, huge applications there. And again, you want, in many cases, a very powerful capability in the cloud or in the network, but also at the device, there are many cases where you’d want to be able to do some processing right there, that can make the device more powerful or more economical, and that’s a mobile use case. So, I think there will be applications there; there can be applications in education, entertainment, certainly games, management of resources like power and electricity and heating and cooling and all that. It’s really a wide swath but the combination of connectivity with this capability together is really going to do it.

Let’s talk about the immediate future. As you know, with regard to these technologies, there’s kind of three different narratives about their effect on employment. One is that they’re going to take every single job, everybody from a poet on down; that doesn’t sound like something that would resonate with you because of the conversation we just had. Another is that this technology is going to replace a lot of low-skilled workers, that there’s going to be fewer, quote, “low-skilled jobs,” whatever those are, and that you’re going to have this permanent underclass of unemployed people competing essentially with machines for work. And then there’s another narrative that says, “No, what’s going to happen is the same thing that happened with electricity, with motors, with everything else. People take that technology, they use it to increase their own productivity, and they go on to raise their income that way. And you’re not going to have essentially any disruption, just like you didn’t have any disruption when we went from animal power to machine power.” Which of those narratives do you identify with, or is there a different way you would say it?

Okay, I’m glad you asked this because this is a hugely important question and I do want to make some comments. I’ve had the benefit of participating in the World Economic Forum, and I’ve talked to Brynjolfsson and McAfee, the authors of The Second Machine Age, and the whole theme of the forum a year ago was Klaus Schwab’s book The Fourth Industrial Revolution and the rise of cyber-physical systems and what impact they will have. I think we know some things from history and the question is, is the future going to repeat that or not? We know that there’s the so-called Luddite fallacy which says that, “When these machines come they’re going to displace all the jobs.” And we know that a thousand years ago, ninety-nine percent of the population was involved in food production, and today, I don’t know, don’t quote me on this, but it’s like 0.5 percent or something like that. Because we had massive productivity gains, we didn’t need to have that many people working on food production, and they found the ability to do other things. It’s definitely true that increases in unemployment did not keep pace with increases in productivity. Productivity went up orders of magnitude; unemployment did not go up, quote, “on the orders of magnitude,” and that’s been the history for a thousand years. And even more recently, if you look at the government statistics on productivity, they are not increasing. Actually, some people are alarmed that they’re not increasing faster than they are; they don’t really reflect a spike that would suggest some of these negative scenarios.

Now, having said that, it is true that we are at a place now where machines, even with the processing that they use today, based on neural networks and SVMs and things like that, are able to replace a lot of the existing manual or repetitive type tasks. I think society as a whole is going to benefit tremendously, and there are going to be some groups that we’ll have to take some care about. There have been discussions of universal basic incomes, which I think is a good idea. Bill Gates recently had an article about some tax ideas for machines. It’s a good idea, of course, but very hard to implement, because you have to define what a robot is. You know, take something like a car or a wheel: a wheel is a labor-saving device, do you tax it? I don’t know.

So, to get back to your question, I think it is true that there will be some groups that are in the short term displaced, but many of the things that people do, like caring for each other and teaching each other, are not going away on any horizon; those kinds of jobs are in ever-increasing demand. So, there’ll be a migration, not necessarily a wholesale replacement. And we do have to take care with the transient effect of that, and maybe a universal type of wage might be part of an answer. I don’t claim to have the answer completely. I mean it’s obviously a really hard problem that the world is grappling with. But I do feel, fundamentally, that the overall effect of all of this is going to be net positive. We’re going to make more efficient use of our resources, we’re going to provide services and capabilities that have never been possible before that everyone can have, and it’s going to be a net positive.

That’s an optimistic view, but it’s a very measured optimistic view. Let me play devil’s advocate from that side to say, why do you think there’ll be any disruption? What does that case look like? 

Because, if you think about it, in 1995 if somebody said, “Hey, you know what, if we take a bunch of computers and we connect them all via TCP/IP, and we build a protocol, maybe HTTP, to communicate, and maybe a markup language like HTML, you know what’s going to happen? Two billion people will connect and it’s going to create trillions and trillions and trillions of dollars of wealth. It’s going to create Google and eBay and Amazon and Baidu. It’s going to transform every aspect of society, and create an enormous number of jobs. And Etsy will come along, and people will be able to work from home. And all these thousands of things that flow out of it.” You never would have made those connections, right? You never would have said, “Oh, that logically flows from snapping a bunch of computers together.”

So, if we really are in a technological boom that’s going to dwarf that, really won’t the problem be an immense shortage of people? There’s going to be all of these opportunities, and relatively few people to fill them. So, why the measured optimism for somebody who just waxed so poetic about what a big deal these technologies are?

Okay, that’s a great question. I mean, that was super. You asked whether there will be any disruption at all. I completely believe that we really have not a job shortage, but a skills shortage; that is the issue. And so, the burden then goes to the educational system, and to the fabric of society, to place a value on good education and stick to it long enough that you can come up to speed in the modern sense, and be able to contribute beyond what the machines do. That is going to be a shortage, and anyone who has those skills is going to be in a good situation. But you can have disruption even in that environment.

You can have an environment where you have a skills shortage not a job shortage, and there’s disruption because the skills shortage gets worse and there’s a lot of individuals whose previous skills are no longer useful and they need to change. And that’s the tough thing. How do you retrain, in a transient case, when these advancements come very quickly? How do you manage that? What is fair? How does society distribute its wealth? I mean the mechanisms are going to change.

Right now, it’s starting to become true that simply the manner in which you consume stuff, if that data is available, has value in itself, and maybe people should be compensated for it. Today, they are not, for the most part; they give it up when they sign in to these major cloud player services, and so those kinds of things will have to change. I’ll give you an anecdote.

Recently I went to Korea, and I met some startups there, and one of the things that happens, especially in non-curated app stores, is people develop games, and they put in their effort and time and they develop a game, and they put it on there and people download it for ninety-nine cents or whatever, and they get some money. But there are some bad actors that will see a new game, quickly download it, un-assemble the code back to the source, change a few little things, and republish that same game, so that it looks and feels just like the original but the ninety-nine cents goes to a different place. They basically steal the work. So, this is a bad thing, and in response, there are startups now that make tools that protect software by making it difficult to un-assemble. There are multiple startups that do what I just described, and I’m sitting here listening to them and I’m realizing, “Wow, that job—in fact, that industry—didn’t even exist.” That is a new creation of the fact that there are un-curated app stores and mobile devices and games, and it’s an example of the kind of new thing that’s created, that didn’t exist before.

I believe that that process is alive and well, and we’re going to continue to see more of it, and there’s going to continue to be a skills shortage more than a job shortage, and so that’s why I have a fundamentally positive view. But it is going to be challenging to meet the demands of that skills shortage. Society has to place the right value on that type of education and we all have to work together to make that happen.

You have two different threads going on there. One is this idea that we have a skills shortage, and we need to rethink education. And another one that you touched on is the way that money flows, and can people be compensated for their data, and so forth. I’d like to talk about the first one, and again, I’d like to challenge the measured amount of your optimism. 

I’ll start off by saying I agree with you that, at the beginning of the Industrial Revolution, there was a vigorous debate in the United States about the value of post-literacy education. Think about that: is post-literacy education worth anything? Because in an agrarian society, maybe it wasn’t for most people. Once you learned to read, that was what you needed. And then people said, “No, no, the jobs of the future are going to need more education. We should invest in that now.” And the United States became the first country in the world to guarantee that every single person could graduate from high school. And you can make a really good case, that I completely believe, that that was a major source of our economic ascendancy in the twentieth century. And, therefore, you can extend the argument by saying, “Maybe we need grades thirteen and fourteen now, and they’re vocational, and we need to do that again.” I’m with you entirely, but we don’t have that right now. And so, what’s going to happen?

Here is where I would question the measured amount of your optimism, which is… People often say to me, “Look, this technology creates all these new jobs at the high end, like graphic designers and geneticists and programmers, and it destroys jobs at the low end. Are those people down at the low end going to become programmers?” And, of course, the answer is not, “Yes.” The answer is, and here’s my question, all that matters is, “Can everybody do a job just a little harder than the one they’re currently doing?” And if the answer to that is, “Yes,” then what happens is the college biology professor becomes a geneticist, the high school biology teacher becomes a college teacher, the substitute teacher gets backfilled into the biology one, and all the way down, so that everybody gets just a little step up. Everybody just has to push themselves a little more, and the whole system phase shifts up, and everybody gets a raise and everybody gets a promotion. That’s really what happened in the Industrial Revolution, so why is it that you don’t think that that is going to be as smooth as I have just painted it?

Well, I think what you described does happen and is happening. If you look at any engineer in a high-tech company—and again, I’m speaking from my own experience here—and you compare their output right now to a year or two before, they’ve all done what you describe, which is to do a little bit more, and to do something that’s a little bit harder. And we’ve all been able to do that because the fundamental processes involved improve. The tools, the fabric available to you to design things, the shared experience of the teams around you that you tap into—all those things improved. So, everyone is actually doing a job that’s a little bit harder than they did before, at least if you’re a designer.

You also cited some other examples, a teacher at one level going to the next level. That’s kind of a queue, and there are only so many spots at so many levels based on the demographics of the population. So not everyone can move in that direction, but they can all—at a given grade level—endeavor to teach more. Like, our kids, the math they do now is unbelievable. They are as much as a year or so ahead of when I was in high school, and I thought that we were doing pretty good stuff then, but now it’s even more.

I am optimistic that those things are going to happen, but you do have a labor force of certain types of jobs, where people are maybe doing them for ten, twenty, thirty years, and all of a sudden that is displaced. It’s hard to ask someone who’s done a repetitive task for much of their career to suddenly do something more sophisticated and different. That is the problem that we as a society have to address. We have to still value those individuals, and find a way—like a universal wage or something like that—so they can still have a good experience. Because if you don’t, then you really could have a dangerous situation. So, again, I feel overall positive, but I think there’s some pockets that are going to require some difficult thinking, and we’ve got to grapple with it.

Alright. I agree with your overall premise, but I will point out that that’s exactly what everybody said about the farmers—that you can’t take these people that have farmed for twenty or thirty years, and all of a sudden expect them to be able to work in a factory. The rhythm of the day is different, they have a supervisor, there’s bells that ring, they have to do different jobs, all of this stuff; and yet, that’s exactly what happened. 

I think there’s a tendency to sell human ability short. That being said, technological advance, interestingly, distributes its financial gains in a very unequal measure, and there is something in there that I do agree we need to think about.

Let’s talk about Qualcomm. You are the EVP of technology. You were the CTO. You’ve got seventy patents, like I said in your intro. What is Qualcomm’s role in this world? How are you working to build the better tomorrow? 

Okay, great. We provide connections between people, and increasingly between their worlds and between devices. Let me be specific about what I mean by that. When the company started—by the way, I’ve been at Qualcomm since ‘91, company started in ‘85-‘86 timeframe—one of the first things we did early on was we improved the performance and capacity of cellular networks by a huge amount. And that allowed operators like Verizon, AT&T, and Sprint—although they had different names back then—to offer, initially, voice services to large numbers of people at reasonably low cost. And the devices, thanks to the work of Qualcomm and others, got smaller, had longer battery life, and so forth. As time went on, it was originally connecting people with voice and text, and then it became faster and more capable so you could do pictures and videos, and then you could connect with social networks and web pages and streaming, and you could share large amounts of information.

We’re in an era now where I don’t just send a text message and say, “Oh, I’m skiing down this slope, isn’t this cool.” I can have a 360°, real-time, high-quality, low-latency sharing of my entire experience with another user, or users, somewhere else, and they can be there with me. And there’s all kinds of interesting consumer, industrial, medical, and commercial applications for that.

We’re working on that and we’re a leading developer of the connectivity technology, and also what you do with it on the endpoints—the processors, the camera systems, the user interfaces, the security frameworks that go with it; and now, increasingly, the machine learning and AI capabilities. We’re applying it, of course, to smartphones, but also to automobiles, medical devices, robotics, to industrial cases, and so on.

We’re very excited about the pending arrival of what we call 5G, which is the next generation of cellular technology, and it’s going to show up in the 2019-2020 timeframe. It’s going to be in the field maybe ten, fifteen years just like the previous generations were, and it’s going to provide, again, another big step in the performance of your radio link. And when I say “performance,” I mean the speed, of course, but also the latency will be very low; in many modes it can be a millisecond or less. That will allow functions that used to be done on one side of the link to be done on the other side. You can have very reliable systems.

There are a thousand companies participating in the standards process for this. It used to be just primarily the telecom industry, in the past with 3G and 4G—and of course, the telecom industry is very much still involved—but there are so many other businesses that will be enabled with 5G. So, we’re super excited about the impact it’s going to have on many, many businesses. Yeah, that’s what we’re up to these days.

Go with that a little more, paint us a picture. I don’t know if you remember those commercials back in the ’90s saying, “Can you imagine sending a fax from the beach? You will!” and other “Can you imagine” scenarios. They kind of all came true, other than that there wasn’t as much faxing as I think they expected. But, what do you think? Tell me some of the things that you think, in a reasonable amount of time, we’re going to be able to do; in five years, let’s say.

I’m so fascinated that you used that example, because that one I know very well. Those AT&T commercials, you can still watch them on YouTube, and it’s fun to do so. They did say people will be able to send a fax from the beach, and that particular ad motivated the operators to want to send fax over cellular networks. And we worked on that—I worked on that myself—and we used that as a way to build the fundamental Internet transport, and the fax was kind of the motivation for it. But later, we used the Internet transport for internet access and it became a much, much bigger thing. The next step will be sharing fully immersive experiences, so you can have high-speed, low-latency video in both directions.

Autonomous vehicles, but before we even get to fully autonomous—because there’s some debate about when we’re going to get to a car that you can get into with no steering wheel and it just takes you where you want to go; that’s still a hard problem. Before we have fully autonomous cars that can take you around without a steering wheel, we’re going to have a set of technologies that improve the safety of semiautonomous cars. Things like lane assist, and better cruise control, and better visibility at night, and better navigation; those sorts of things. We’re also working on vehicle-to-vehicle communication, which is another application of low-latency, and can be used to improve safety.

I’ll give you a quick anecdote on that. In some sense we already have a form of it; it’s called brake lights. Right now, when you’re driving down the highway, and the car in front puts on its brake lights, you see that and then you take action, you may slow down or whatever. You can see a whole bunch of brake lights, if the traffic is starting to back up, and that alerts you to slow down. Brake lights have transitioned from incandescent bulbs, which take, like, one hundred milliseconds to turn on, to LED bulbs, which take one millisecond to turn on. And if you multiply a hundred milliseconds by highway speeds, it’s six to eight feet depending on the speed, and you realize that low latency can save lives, and make the system more effective.
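The arithmetic behind that figure is simply distance = speed × latency; at 50 mph, for example, a 100 ms lamp delay corresponds to about seven feet of lost warning distance:

```latex
d = v\,t, \qquad
v = 50\ \text{mph} \approx 73\ \text{ft/s}
\;\Rightarrow\;
d \approx 73\ \text{ft/s} \times 0.1\ \text{s} \approx 7.3\ \text{ft}.
```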

That’s one of the hallmarks of 5G: we’re going to be able to connect things at low latency to improve the safety or the function. Or, in the case of machine learning, where sometimes you want processing to be done in the phone, and sometimes you want to access enormous processing in the cloud, or at the edge. When we say edge, in this context, we mean something very close to the phone, within a small number of hops or routes to get to that processing. If you do that, you can have incredible capability that wasn’t possible before.

To give you an example of what I’m talking about, I recently went to the Mobile World Congress Americas show in San Francisco, it’s a great show, and I walked through the Verizon booth and I saw a demonstration that they had made. In their demonstration, they had taken a small consumer drone, and I mean it’s a really tiny one—just two or three inches long—that costs $18. All this little thing does is send back live video, and you control it with Wi-Fi, and they had it following a red balloon. The way it followed it was, it sent the video to a very powerful edge processing computer, which then performed a sophisticated computer vision and control algorithm and then sent the commands back. So, what you saw was this little low-cost device doing something very sophisticated and powerful, because it had a low-latency connection to a lot of processing power. And then, just to really complete that, they switched it from edge computing, that was right there at the booth, to a cloud-based computing service that was fifty milliseconds away, and once they did that, the little demo wouldn’t function anymore. They were showing the power of low-latency, high-speed video and media-type communication, which enabled a simple device to do something similar to a much more complex device, in real time, and they could offer that almost like a service.
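The pattern in that demo, sense locally, compute remotely, act locally, only works when the round trip is short. Here is a toy Python simulation (hypothetical numbers, not the demo’s actual code) of a proportional controller whose observations arrive late; the same gain that tracks perfectly with no delay goes unstable at a 50 ms round trip:

```python
# Toy closed-loop control with observation delay. One simulation tick = 10 ms.
# The controller corrects a position error, but only sees a stale observation.
def simulate(delay_ticks, gain=0.9, ticks=40):
    history = [1.0]  # start one unit off target
    for _ in range(ticks):
        # The controller acts on an observation that is `delay_ticks` old.
        observed = history[max(0, len(history) - 1 - delay_ticks)]
        history.append(history[-1] - gain * observed)  # proportional correction
    return abs(history[-1])

print(f"final error with ~0 ms round trip:  {simulate(0):.4f}")   # converges to ~0
print(f"final error with ~50 ms round trip: {simulate(5):.4f}")   # oscillates and grows
```

With no delay, each step shrinks the error by a factor of ten; with a five-tick delay, the controller keeps correcting for errors that no longer exist, overshoots, and the loop diverges.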

So, that paradigm is very powerful, and it applies to many different use cases. It’s enabled by high-performance connectivity which is something that we supply, and we’re very proficient at that. It impacts machine learning, because it gives you different ways to take advantage of the progress there—you can do it locally, you can do it on the edge, you can do it remotely. When you combine mobile, and all the investment that’s been made there, you leverage that to apply to other devices like automobiles, medical devices, robotics, other kinds of consumer products like wearables and assistant speakers, and those kinds of things. There’s just a vast landscape of technologies and services that all can be improved by what we’ve done, and what 5G will bring. And so, that’s why we’re pretty fired up about the next iteration here.

I assume you have done theoretical thinking about the absolute maximum rate at which data can be transferred. Are we one percent of the way there, or ten percent, or can we not even measure it because it’s so small? Is this going to go on forever?

I am so glad you asked. It’s so interesting. This Monday morning, we just put a new piece of artwork in our research center—there’s a piece of artwork on every floor—and on the first floor, when you walk in, there’s a piece of artwork that has Claude Shannon and a number of his equations, including the famous one which is the Shannon capacity limit. That’s the first thing you see when you walk into the research center at Qualcomm. That governs how fast you can move data across a link, and you can’t beat it. There’s no way, any more than you can go faster than the speed of light. So, the question is, “How close are we to that limit?” If you have just two devices, two antennas, and a given amount of spectrum, and a given amount of power, then we can get pretty darn close to that limit. But the question is not that, the question is really, “Are we close to how fast of a service we can offer a mobile user in a dense area?” And to that question, the answer is, “We’re nowhere close.” We can still get significantly better; by that, I mean orders of magnitude better than we are now.
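The limit he is describing is the Shannon–Hartley theorem: a single link of bandwidth B with signal-to-noise ratio S/N can carry at most

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second.}
```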

I can tell you three ways that that can be accomplished, and we’re doing all three of them. Number one is, we continue to make better modems that are more efficient: better receivers, better equalizers, better antennas, all of those techniques, and 5G is an example of that.

Number two, we always work with the regulator and operators to bring more spectrum, more radio spectrum to bear. If you look at the overall spectrum chart, only a sliver of it is really used for mobile communication, and we’re going to be able to use a lot more of it, and use more spectrum at high frequencies, like millimeter wave and above, that’s going to make a lot more “highway,” so to speak, for data transfer.

And the third thing is, the average radius of a base station can shrink, and we can use that channel over and over and over again. So right now, if you drive your car, and you listen to a radio station, the radio industry cannot use that channel again until you get hundreds of miles away. In the modern cellular systems, we’re learning how to reuse that channel even when you’re a very short distance away, potentially only feet or tens of meters away, so you can use it again and again and again.
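As a rough back-of-envelope (ignoring inter-cell interference and other real-world losses), those three pillars multiply: aggregate capacity scales with the number of cells reusing the spectrum, the amount of spectrum, and the per-link efficiency.

```latex
C_{\text{total}} \approx N_{\text{cells}} \times B \times \log_2\!\left(1 + \mathrm{SNR}\right)
```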

So, with those three pillars, we’re really not close, and everyone can look forward to faster, faster, faster modems. And every time we move that modem speed up, that, of course, is the foundation for bigger screens, and more video, and new use cases that weren’t possible before, at a given price point, which now become possible. We’re not at the end yet, we’ve got a long way to go.

You made a passing reference to Moore’s Law; you didn’t call it out, but you referenced exponential growth, and that the speed of computers would increase. Everybody always says, “Is Moore’s Law finally over?” You see those headlines all the time, and, like all the headlines that are a question, the answer is almost always, “No.” You’ve made references to quantum computing and all that. Do we have opportunities to increase processor speed well into the future with completely different architectures?

We do. We absolutely do. And I believe that will occur. I mean, we’re not at the limit yet now. You can find “Moore’s Law is over” articles ten years ago also, and somehow it hasn’t happened yet. When we get past three nanometers, yeah, certain things are going to get really, really tough. But then there will be new approaches that will take us there, take us to the next step.

There are also architectural improvements, and other axes that can be exploited; same thing as I just described to you in wireless. Shannon said that we can only go so far between two antennas in a given amount of spectrum, with a given amount of power. But we can escape that by increasing the spectrum, increasing the number of antennas, reusing the spectrum over and over again, and we can still get the job done without breaking any fundamental laws. So, at least for the time being, the exponential growth is still very much intact.

You’ve mentioned Claude Shannon twice. He’s a fascinating character, and one of the things he did that’s kind of monumental was that paper he wrote in ’49 or ’50 about how a computer could play chess, and he actually figured out an algorithm for that. What was really fascinating about that was, this was one of the first times somebody looked at a computer and saw something other than a calculator. Because up until that point they just did not, and he made that intuitive leap to say, “Here’s how you would make a computer do something other than math, but it’s really doing math.” There’s a fascinating new book about him out called A Mind at Play, which I just read, that I recommend.

We’re running out of time here. We’re wrapping up. I’m curious, do you write, or do you have a place that people who want to follow you can keep track of what you’re up to?

Well, I don’t have a lot there, but I do have a Twitter account, and once in a while I’ll share a few thoughts. I should probably do more of that than I do. I have an internal blog, which I should also probably use more than I do. I’m sorry to say, I’m not very prolific on external writing, but that is something I would love to do more of.

And my final question is, are you a consumer of science fiction? You quoted Arthur C. Clarke earlier, and I’m curious if you read it, or watch TV, or movies or what have you. And if so, do you have any visions of the future that are in fiction, that you kind of identify with? 

Yes, I will answer an emphatic yes to that. I love all forms of science fiction, and one of my favorites is Star Trek. My name spelled backwards is “Borg.” In fact, our chairman Paul Jacobs—I worked for him most of my career—calls me “Locutus.” Given the discussion we just had, it’s fitting: if you’re a fan of Star Trek and, in particular, the Star Trek: The Next Generation shows that were on in the ’80s and early ’90s, there was an episode where Commander Data met Mr. Spock. And that was really a good one, because you had Commander Data, who is an android and wants to be human, wants to have emotion and creativity and those things that we discussed, but can’t quite get there, meeting Mr. Spock, who is a living thing trying to purge all emotion and so forth, to be pure logic, and they had an interaction. I thought that was just really interesting.

But, yes, I follow all science fiction. I like the book The Physics of Star Trek by Lawrence Krauss; I got to meet him once. And it’s amazing how many of the devices and concepts from science fiction have become science fact. In fact, the only difference between science fiction and science fact is time. Over time we’ve pretty much built everything that people have thought up—communicators, replicators, computers.

I know, you can’t see one of those in-ear Bluetooth devices and not see Uhura, right? That’s what she had.

Correct. That little earpiece is a Bluetooth device. The communicator is a flip phone. The little square memory cartridges were like a floppy disk from the ‘80s. 3-D printers are replicators. We also have software replicators that can replicate and transport. We kind of have the hardware but not quite the way they do yet, but we’ll get there.

Do you think that these science fiction worlds anticipate the world or inadvertently create it? Do we have flip phones because of Star Trek or did Star Trek foresee the flip phone? 

I believe their influence is undeniable.

I agree and a lot of times they say it, right? They say, “Oh, I saw that and I wanted to do that. I wanted to build that.” You know there’s an XPRIZE for making a tricorder, and that came from Star Trek.

We were the sponsor of that XPRIZE and we were highly involved in that. And, yep, that’s exactly right, the inspiration of that was a portable device that can make a bunch of diagnoses, and that is exactly what took place and now we have real ones.

Well, I want to thank you for a fascinating hour. I want to thank you for going on all of these tangents. It was really fascinating. 

Wonderful, thank you as well. I also really enjoyed it, and anytime you want to follow up or talk some more please don’t hesitate. I really enjoyed talking with you.
