The Future (Probably) Isn’t as Scary as You Think (Ep. 257)
Our latest Freakonomics Radio episode is called “The Future (Probably) Isn’t as Scary as You Think.” (You can subscribe to the podcast at iTunes or elsewhere, get the RSS feed, or listen via the media player above.)
Internet pioneer Kevin Kelly tries to predict the future by identifying what’s truly inevitable. How worried should we be? Yes, robots will probably take your job — but the future will still be pretty great.
Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.
* * *
[MUSIC: Milan Grajetzki, “Into Deep Space” (from Makes Nose Ants 2 You)]
When you try to envision the future, what do you see? Do you see a grim picture? A world, perhaps, in which humans have become marginalized? Where technologies created to help us have gained the upper hand? The film industry takes a rather dim view of the future, doesn’t it?
KEVIN KELLY: Indeed, I can’t think of a single Hollywood movie about the future on this planet that I want to live in.
KELLY: In the ‘50s and ‘60s when I was growing up, there was a hope of everything after the year 2000. And that’s the future that I remembered.
[MUSIC: Scott W. Hallgren, “City of the Future”]
In a new book called The Inevitable, Kelly tries to see whether his youthful optimism squares with the technological realities of today and tomorrow. Short answer: yes.
KELLY: I think that this is the best time in the world ever to make something and make something happen. All the necessary resources that you wanted to make something have never been easier to get to than right now. So from the view of the past, this is the best time ever.
And it’s getting better.
KELLY: Artificial intelligence will become a commodity like electricity, which will be delivered to you over the grid called the cloud. You can buy as much of it as you want and most of its power will be invisible to you as well.
But of course it could all go wrong:
KELLY: We’ve never invented a technology that could not be weaponized. And the more powerful a technology is, the more powerfully it will be abused.
* * *
Kevin Kelly, a writer and thinker, has what he calls “a bad case of optimism.”
KELLY: It’s rooted in the fact that on average, for the past 100 years or so, things have improved incrementally, a few percent a year in growth. And while it’s possible that next year that stops and goes away, the probable, statistical view of it is that it will continue.
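Kelly’s “few percent a year” is easy to underrate, because small rates compound. A quick sketch in Python (the 3 percent rate is an illustrative assumption, not a figure from the episode):

```python
# Compound growth: "a few percent a year" sustained over a century.
# The 3% rate is illustrative; Kelly does not name a specific figure.
rate = 0.03
years = 100
growth = (1 + rate) ** years
print(f"After {years} years at {rate:.0%} per year: {growth:.1f}x the starting level")
```

At 3 percent a year, a century of growth multiplies the starting level roughly nineteenfold, which is the statistical continuity Kelly is betting on.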
[MUSIC: The Hipwaders, “Wake Up Reprise” (from A Kindie Christmas)]
Kelly envisions a world where ever-more information is available at any time, summoned by a small hand gesture or voice command; where virtual reality augments our view of just about everything; where artificial intelligence is seamlessly stitched into our every move.
KELLY: Most of AI is going to be invisible to us. That’s one of the signs of the success of a technology — is it becomes invisible.
So invisible that without our even knowing about it, AI will read our medical imaging and approve our mortgages. It’ll drive our cars, of course, and perhaps become our confidante.
KELLY: So I think our awareness of it, for the most part, will be as a presence in our lives, one we take for granted very, very quickly, in the same way that we take for granted Google. People don’t realize — I try to stress to my son — you know, when I was growing up you couldn’t have your questions answered. You didn’t ask questions because there was no way to get them answered. And now we just routinely ask dozens and dozens of questions a day that we would never have asked back then, and yet we just sort of take it for granted. And I think a lot of the AI will be involved in assisting us in our schedules, our days, answering questions as a partner in getting things done. So, think of it as a GPS for your life. In the way that you kind of set your course on the GPS, and then it’s going along and it’s telling you how to go, but oftentimes you’re overriding it and it’s not bothered by that; it’s got another plan right away. And then you change your mind, and it’s, “Oh no problem, I’ve got another one — I’ve got another schedule here; I’ll do this over here; I’ll get this ready for you; I’ll make this reservation; I’ll buy this thing. No problem, you changed your mind; I’ll send it back, no problem.” Kind of like having a presence that is anticipating and helping your life — I think that’s what it looks like, I would say, even within 20 years.
DUBNER: So we’ve done quite a few episodes of Freakonomics Radio that address the future, especially when it comes to the interface between technology and employment. The idea of whether there will be “enough” jobs for people, whatever “enough” means. You write that, and I’ll quote you, “The robot takeover will be epic,” which I’m sure will scare some people. And that even information-intensive jobs — doctor, lawyer, architect, programmer, probably writers and podcasters too — can be automated. So even if this is what the technology can and wants to accomplish, it strikes me that the political class may well try to stymie it. I’m curious your views on that.
KELLY: Yeah. There was this really great survey/poll that Pew did. Basically they asked people how likely they thought it was that 50 percent of jobs would be replaced by robots or AIs. And it was like 80 percent of people. Then they followed this up with how likely they thought it was that their own job would be replaced. Nobody believed that their job would be. And it was across the board. And I did the same exact survey, actually, in a crowd of people who came to my book party, 200 people. We had instant polling devices, and I asked the same thing. It was exactly the same pattern. Everyone believes that most of the jobs will be replaced, and no one believes that their job will be replaced. And I think it’s actually neither. Our jobs are bundles of different tasks, and some of those tasks, or maybe many of those tasks, will be automated, but they’ll basically redefine the kinds of things that we do. So a lot of the jobs are going to be reinvented rather than displaced, particularly the kinds of things we’re talking about in the professional classes. I’m not saying that AI can’t be creative; it can be. In fact, we’re going to be shocked — in some senses we’re going to realize creativity isn’t so creative. Creativity is actually fairly mechanical, and we will actually be able to figure out how to have AIs be creative. But the thing is that they’re going to be creative in a different way than we are. I think probably the last job that AIs or robots will do will be comedian. I mean, they’ll just have a different sense of humor. They won’t get us in the same way that we get us. Even though they’ll be incredibly creative, they can be brilliant, they’ll be smart, they’ll surprise us in many ways, they’re still not going to do it exactly like we do it, and I think we will continue to value that.
DUBNER: But that assumes — which is if you watch a certain kind of futurist movie or read a certain kind of futurist book — that assumes that the artificial intelligence doesn’t essentially obliterate or marginalize us, yes?
KELLY: Right. The question is whether an artificial intelligence that we create can only gain at our expense? And I think that while that’s a possibility that we should not rule out, it’s an unlikely possibility.
[MUSIC: Richard Murray, “A Human Race” (from Borealis)]
DUBNER: You write, Kevin, that: “This is not a race against the machines. If we race against them, we lose. This is a race with the machines.” Talk about how that begins to happen — whether it’s a shift in mindset, a shift in engineering. How does it happen that we come to view AI or robotization or automation or computerization as more of a continuing ally than a threat?
KELLY: So one of the first AIs we made, which was sort of a dedicated, standalone supercomputer, was IBM’s Deep Blue, which beat the reigning world chess champion at the time, Garry Kasparov. And this was kind of the first big challenge to human exceptionalism, basically. And when Kasparov lost, there were several things that went through people’s minds. One is: well, that’s the end of chess. It’s like, who’s going to play competitively, because computers are always going to win? And that didn’t happen. In a funny kind of way, playing against computers actually increased the extent to which chess became popular. And, on average, the best players became better playing against the artificial minds. And then finally Kasparov, who lost, realized at the time — he said, “you know, it’s kind of unfair, because if I had access to the same database that Deep Blue had, of every single chess move ever, I could have won.” And so he invented a new chess league. It’s kind of like the freestyle league, like mixed martial arts: you can play any way you want. So you can play as an AI, or you can play as a human, or you can play as a team of AIs and humans. And in general, what’s happened in the past couple of years is that the best chess player on this planet is not an AI. And it’s not a human. It’s the team that he calls centaurs; it’s the team of humans and AI. Because they’re complementary. Because AIs think differently than humans. And the same is true of the world’s best medical diagnostician: it’s not Watson, it’s not a human doctor. It’s the team of Watson plus doctor. And that idea of teaming up is going to work because, inherently, AIs think differently — even though they’re going to be creative, even though they’ll make decisions, even though they’ll have, eventually, a type of consciousness, it’s going to be different than ours, because we are running on different substrates. It’s not a zero-sum game.
DUBNER: And how much AI was applied to the writing of the book? I mean obviously, you use spell check and things like that. But I’m curious if there’s anything else.
KELLY: Not as much as I would like. Because AI’s been around for 50 years and it’s been a very slow progress and that’s because it was very, very expensive to do.
[MUSIC: Johnny Fiasco, “I’ve Lost My Floppy”]
AI was expensive because good artificial intelligence requires a lot of data and a lot of what’s called “parallel processing” power. But the cost has come down, an unexpected gift from the video-game industry.
KELLY: The reason why we have this sudden surge in AI right now, in the last couple of years, is because it turned out that to do video gaming, you really needed to have parallel-processing chips. They had these video chips, graphics processing units, that were being produced to make video gaming fast, and they were being produced in such quantities that the price went down and they became a commodity. And the AI researchers discovered a few years ago that they could actually do AI not on these big, expensive, multi-million-dollar supercomputers, but on a big array of these really cheap GPUs.
Add to that the fact that more and more objects are being equipped with tiny sensors and microchips, creating the so-called “Internet of Things.” As Kelly writes, in 2015 alone, five quintillion transistors — a quintillion is 10 to the power of 18 — “were embedded into objects other than computers.” Which means we will be adding artificial intelligence as quickly and easily as people in the industrial era added electricity and motors to manually operated tools.
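Kelly’s transistor figure invites a quick back-of-envelope calculation. A sketch in Python (the world-population estimate is ours, not from the episode):

```python
# Back-of-envelope scale check on Kelly's figure: five quintillion
# (5 x 10^18) transistors embedded into objects in 2015, divided
# across a world population of roughly 7.3 billion people
# (the population figure is an outside estimate).
transistors = 5 * 10**18
population = 7.3 * 10**9
per_person = transistors / population
print(f"Roughly {per_person:.1e} transistors per person")  # on the order of hundreds of millions
```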
KELLY: I believe people will look back at this time, will look back at year 2016 and say, “Oh my gosh, if only I could have been alive then. There’s so much I could have done so easily.” And here we are.
But one area of technology isn’t keeping up:
KELLY: And that’s batteries. I think a lot of this internet of things, the idea that your shoe and your clothes and the chair and the books and the light bulbs in the house are all connected, I think part of what’s holding that back is not the intelligence of the sensors and the chips, but the power. What I don’t want to do is spend all my Saturdays replacing all the batteries in all the things in my house.
DUBNER: Predicting the future anytime, in any realm, is fairly perilous. You admit in your book that you missed a lot about how the internet would develop, for instance. So without meaning to sound like a total jerk, let me just ask you: why should we believe anything you’re telling us today about the future?
[MUSIC: j. cowit, “One Question at a Time” (from Your Princess Is in Another Castle)]
KELLY: Yeah, I think every futurist, including myself, is basically trying to predict the present. And so you should believe me to the extent that it’s useful to helping you understand what’s going on now. As much as possible, I’m not really trying to make predictions as much as I am trying to illuminate the current trends that are working in the world. These are ongoing processes; these are directions rather than destinies. These are general movements that have been happening for 20 or 30 years. And so, I’m saying these things are already happening, it looks like they are going in the same direction. And so I might be wrong, and I probably will be wrong on much of it, but I think if you see what I’m seeing, I think you will agree that it is happening right now and that can be useful to anybody who is trying to make something happen or make their lives better. Let me just give you a quick little parallel about what I mean by “inevitable,” which is the title of the book. I’m talking about long-term processes rather than particulars. And so imagine rain falling down into a valley. The path of a particular drop of rain as it hits the ground and goes down the valley is inherently unpredictable. It’s not at all something you can predict. But the direction is inevitable, which is downward. And so I’m talking about those kinds of large-scale, downward forces that kind of pull things in a certain direction and not the particulars. And so I would say that in a certain sense, the arrival of telephones was inevitable. Basically, no matter what political regime or economic system that you’d have, you would get telephones once you had electricity and wires. And while telephones were inevitable, the iPhone was not. The species, the particular product or company wasn’t. So I’m not talking about the particulars of certain inventions; I’m talking about the general forms of things.
[MUSIC: Pat Andrews, “Tiki”]
* * *
[MUSIC: Abstrakters, “Euphonic Creatures” (from Abstrakters)]
Not long ago, I got a text from a friend. It just said, “have you read the inevitable?” I thought, “the inevitable? What’s that?” I had no idea, but it sure sounded scary. Did North Korea finally bomb someone? Did Donald Trump finally fire one of his own kids? But no, that’s not what my friend meant. The Inevitable was the title of a new book by Kevin Kelly about the future.
DUBNER: I do have to say, the title of the book sounds to me at least a little scary. And I’m wondering, were you trying to scare us a little bit or no?
KELLY: No, I wasn’t trying to scare people. I think people are scared enough as it is. I was trying to suggest basically what the title meant literally, which is that we need to accept these things in their large form. And part of the message of the book, which is a little bit subtle, is that large forms are inevitable but the specifics and particulars are not. And we have a lot of control over those, and we should accept the large forms in order to steer the particulars.
One of the inevitable trends that Kelly points out is “dematerialization” — the fact that it takes so much less stuff to make the products we use.
KELLY: Yeah, that’s a long, ongoing trend. The most common way is to use design to be able to make something that does at least the same with a smaller amount of matter. An example I would give is the beer can, which started off being made of steel and is basically the same shape and size, but has shed almost a third of its weight through better design.
But you can see how this trend can snowball. Instead of 100 books on a shelf, I have one e-reader. Instead of 1,000 CDs, a cache of MP3 files, which I may own or, more likely, borrow from the cloud whenever I want them.
KELLY: The current example would be the way that people are reimagining a car, which is a very physical thing, as a ride service that you don’t need to necessarily buy the car and keep it parked in your garage and then parked at work, not being used, when you could actually have access to the transportation service that a thing like Uber or taxis or buses or public transportation give. So you get the same benefits with less matter.
DUBNER: But one of the things that’s sitting in my recording booth is a hardcover copy of your book, The Inevitable. It just strikes me as so weird that for a set of ideas that we’re talking about today, that it is still published in classic, dead-tree format. And I’m curious whether you felt that there was a little bit of a paradox in that? Or are you happy to exploit technologies from previous generations for as long as they’re still useful, even if in small measure?
KELLY: So there are a couple of things to say about it. Let me say the larger thing first and then get to the specifics about the book. Most of the things in our homes are old technology. Most of the stuff that surrounds us is concrete, steel, electrical lights. These are ancient technologies in many ways. And they form the bulk of it. And they will continue to form the bulk of it. So 50 years from now, most of the technology in people’s lives will be old stuff. We tend to think of technology as anything that was invented after we were born. But in fact, it’s all the old stuff, really. And so I take an additive view. And I had this surprise in my previous book, and many people have challenged it, but never successfully: there has not been a globally extinct technology. Technologies don’t go away, basically; they just become invisible in the infrastructure. And so, yes, there will be paper books forever. They will become much more expensive, and they may become kind of premium and luxury items, but they’ll be around simply because we don’t do away with the old things. I mean, there are kind of like more blacksmiths alive today than ever before. More people making telescopes by hand than ever before. So lots of these things just don’t go away. But they’re no longer culturally dominant. And so paper books will not be culturally dominant, and in fact, this book The Inevitable has digital versions.
[MUSIC: Fiction Band, “About Time”]
DUBNER: I’d love for you to talk for just a couple of minutes about the ongoing need for maintenance, even when the technological infrastructure we’re building and using every day wouldn’t seem to be as inherently physical and in need of maintenance as the old infrastructure.
KELLY: Yeah, that was a surprise to me that I changed my mind about. In my early introduction to the digital world, there was the promise of the durable nature of bits. When you made a copy of something, it was a perfectly exact copy. And there was a sense that there were no moving parts. Kind of like a flash drive: there are no moving parts; it will never break! But it turns out, in a kind of weird way, that in more ways than we suspected, the intangible is kind of like living things; it’s kind of like biology, in the sense that it’s so complicated and interrelated inside that things do break. And there are little tiny failures, whether it be inside a chip or a particular bit, that can have cascading effects and that can actually make your thing sick or broken. And that was a surprise to me: that software would rot, that computer chips would break, and that, in general, the amount of time and energy you would have to dedicate to digital, intangible things was almost equal to what the physical realm requires. I think that’s a lesson for us heading into the future.
DUBNER: Did it change how you think about running your own technological life? You write that you used to be one of the last guys to update everything because, you know, I got used to things the way they are, I don’t need the update or the upgrade. How has that changed how you do it now?
KELLY: Yeah. Well, I learned by being burned, by experience, by waiting until the last minute to upgrade that it was horrible and that it was more traumatic, in the sense that when I did eventually upgrade, I had to upgrade not just the current system, but everything else that it touched, forming this sort of chain reaction where upgrading one thing required upgrading the other which required upgrading the other. And that when I did these calculations and changed my mode and tried to upgrade pretty fast as soon as — maybe not the very first one but the next one after that — that it was actually kind of — it was like flossing. It was like hygiene; you just sort of keep up to date because in the end you actually spent less time and energy, it was less traumatic, and you gained all the benefits of that upgrade. And so there is this sort of digital hygiene approach to things that I take now. And that’s not the only way that I changed. I also realized that the purchase price is just one of the prices that you pay when you bring something into your life. That there is this other thing that you do actually have an ecosystem, even in your household, even in your workplace, whatever it is, and that bringing something on, you’re now committed to tending it.
DUBNER: When you’re talking about bringing something into your home — the one product where I’ve seen that number, they usually call it cost of ownership, I guess, is cars. And I don’t think it’s the car manufacturers themselves who calculate that for you — maybe it is, I don’t know — but you do see: this car, here’s what it’s actually going to cost you over its lifetime, in terms of how much fuel it uses versus another car, how much maintenance it will require versus another car, because of the high-end components it may have, what the cost of replacement will be for those. And I like that. But I want that calculation attached to everything. I want that calculation attached to the people that come into my life, even.
KELLY: Yeah, no. Actually I think you’re onto something. I think this idea of calculating the cost of ownership for digital devices or software apps for that matter would be very, very valuable and would not actually be that hard to derive because everything is being kind of logged in some capacity.
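The calculation Kelly describes is simple to sketch. A minimal, hypothetical total-cost-of-ownership function in Python (the function name and all dollar figures are illustrative placeholders, not data from the episode):

```python
# A minimal total-cost-of-ownership sketch along the lines Dubner
# and Kelly discuss. All figures below are hypothetical placeholders.
def total_cost_of_ownership(purchase, annual_costs, years):
    """Purchase price plus recurring costs over the ownership period."""
    return purchase + sum(annual_costs.values()) * years

car = total_cost_of_ownership(
    purchase=25_000,
    annual_costs={"fuel": 1_500, "maintenance": 800, "insurance": 1_200},
    years=10,
)
print(f"10-year cost of ownership: ${car:,}")
```

Even this toy version shows why Dubner wants the number attached to everything: over a long enough ownership period, the recurring costs can exceed the purchase price itself.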
[MUSIC: Justin Marcellus, “It’s Like That” (from Justin Marcellus)]
By looking deeply into the present, Kelly sees a future where more and more of our moves are being tracked, whether because of data we voluntarily make public, as on Facebook, or otherwise.
KELLY: Inevitably we will be tracking more and more of our lives, and we will be tracked more and more. That’s inevitable, and what we have a choice about is the particulars of how we do that, whether we do that civilly or not. We have to engage this. I was maybe a little bit frustrated by the fact that there’s often an initial reaction, from many corners, of trying to prohibit things before we know what they are. And that’s called the precautionary principle, which says simply that there are things that we should not allow in our lives until they’re proven harmless, and I think that doesn’t work.
DUBNER: Has that ever happened with a major invention, period?
KELLY: That we proved that it was harmless?
DUBNER: Yeah. Before it being, you know, let’s say widely adopted.
KELLY: In general, no. I don’t think that there has ever been that, and I think it is kind of unfair to request it. But it does seem to be a current motion, like, say, in the genetically modified crop area, with people saying we can’t have these because we can’t prove that they’re harmless. And so there are attempts to do that with AI driving, with robot cars, saying, “no, no, you can’t have robot cars on the road until we prove that they are completely safe.” And that’s not going to happen, and that’s unfair, because even though a few people die from robot cars a year, human drivers kill one million people worldwide a year. And we’re not banning humans from driving.
DUBNER: In the future that you envision, who are the biggest winners and losers?
KELLY: I think it’s all comparative. I think there will certainly be people who gain more than others, and to those who only gain a little, it might seem that they lost. But I suspect that everybody will be gaining something. And perhaps the poorest in the world will continue to gain the most over time. But there will be people who won’t gain as much as many others. I don’t want to call them losers, but those people, I think, are going to, by and large, be those who are unable or unwilling to retrain. And I think retraining, or learning, is going to be kind of a fundamental survival skill. Because it’s not just the poor who have to be retrained; I think even the professionals, the people who have jobs, who are the middle class. This is going to be an ongoing thing for all of us: we are probably going to be changing our careers, changing our business card, changing our title many times in our life. And I think there will be the resources to retrain them. Whether there’s the political will, I don’t know. I kind of take a Buckminster Fuller position, which is that if you look at the resources, they’re all there. There’s enough food for everybody. The reason why there’s famine is not because there’s not enough food; it’s because there isn’t the political will to distribute it.
DUBNER: And it only takes one bad actor to ruin the livelihood of a couple hundred thousand or a million people. That’s a leverage that exists even in humans, forget about machines.
KELLY: Exactly. So I think this technology is going to benefit, or can benefit, everybody. But whether it does, specifically, is a choice that we have to make, and it will make a huge difference. So, in an abstract sense, I think this technology does not necessarily make losers, but that doesn’t mean that there won’t be any, because I think we do have choices about how we make things specifically. The internet was inevitable, but the kind of internet that we made was not, and that was a choice we made: whether it was transnational or international, whether it was commercial or nonprofit. Those choices are choices that we have; those choices make a huge difference to us. And so I think inherently the technology has the power to benefit everybody and not make losers. But that’s a political choice in terms of the particulars of how it’s applied, and therefore, I think we do have to make those choices.
DUBNER: It also seems, just out of fairness to your argument really, that just as you can’t foresee all the benefits of what technology will give birth to, nor can you see the downsides, right? I mean, there’s just no way for any one of us sitting here now to see what that’s really going to be.
KELLY: Yeah, right. We’ve never invented a technology that could not be weaponized. And the more powerful a technology is, the more powerfully it will be abused. And I think this technology that we’re making is going to be some of the most powerful technology we’ve ever made. Therefore, it will be powerfully abused.
DUBNER: And there’s the scary part of the Kevin Kelly view of the future.
KELLY: Exactly right, but here’s the thing. Most of the problems we have in our life today have come from previous technologies. And most of the problems in the future will come from the technologies that we’re inventing today. But I believe that the solution to the problems that technology created is not less technology, but more and better technology. And so I think technology will be abused, and the proper response to those abuses is not less of it, not to prohibit it, to try and stop it, to turn it off, to turn it down; it is actually to come up with something yet even better to try to remedy it, knowing that that itself will cause new problems, knowing that we then have to make up new technologies to deal with that. And so what do we get out of that race? We get increasing choices and possibilities.
DUBNER: All right, Kevin Kelly, one last question: you argue that technology is prompting us to ask more and better questions, advancing our knowledge and revealing more about what we don’t know. You write, “it’s a safe bet that we have not asked our biggest questions yet.” Do you really think that we haven’t asked, I guess, the essential human questions yet? What are they? And I ask that, of course, with the recognition that if you knew the answer to that question, we wouldn’t be having this conversation.
KELLY: Well, what I meant was: we’re moving into this arena where answers are cheaper and cheaper. And I think as we head into the next 20 or 30 years that if you want an answer you’re going to ask a machine, basically. And the way science moves forward is not just by getting answers to things, but by then having those answers provoke new questions, new explorations, new investigations. And a good question will provoke a probe into the unknown in a certain direction. And I’m saying that the kinds of questions that like, say Einstein had like: what does it look like if you sat on the end of a beam of light and you were travelling through the universe at the front of the light? Those kinds of questions were sort of how he got to his theory of relativity. There are many of those kinds of questions that we haven’t asked ourselves. The kind of question you’re suggesting about what is human is also part of that because I think each time we have an invention in AI that beats us at what we thought we were good at, each time we have a genetic engineering achievement that allows us to change our genes, we are having to go back and redefine ourselves and say, “Wait, wait, wait. What does it mean to be human?” Or “what should we be as humans?” And those questions are things that maybe philosophers have asked, but I think these are the kinds of questions that almost every person is going to be asking themselves almost every day as we have to make some decisions about: is it OK for us to let a robo-soldier decide who to kill? Should that be something that only humans do? Is that our job? Do we want to do that? They are really going to come down to like dinner-table-conversation level of like, what are humans about? What do we want humans to become? What am I, as a human, as a male, as an American? What does that even mean? So I think that we will have an ongoing identity crisis personally and as a species for next, at least, forever.
DUBNER: So I have to say for all this talk of technology and the future of technology, you have weirdly made me feel a bit more human. And for that, I thank you.
KELLY: You know, you’re not a robot because you ask such great questions.
DUBNER: The true test will be how I do at comedy though, correct?
KELLY: Exactly. And you laughed at my jokes, so we know you’re alive as a human.
[MUSIC: Beckah Shae, “See Ya Soon” (from Mighty)]
* * *
Freakonomics Radio is produced by WNYC Studios and Dubner Productions. Today’s episode was produced by Christopher Werth. The rest of our staff includes Arwa Gunja, Jay Cowit, Merritt Jacob, Greg Rosalsky, Caitlin Pierce, Alison Hockenberry, Emma Morgenstern and Harry Huggins. If you want more Freakonomics Radio, you can also find us on Twitter and Facebook and don’t forget to subscribe to this podcast on iTunes or wherever else you get your free, weekly podcasts.
Here’s where you can learn more about the people and ideas in this episode:
- Kevin Kelly, founding executive editor of Wired magazine; author of The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future.
- The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future by Kevin Kelly (Viking, June 2016).