My guest today, Kevin Kelly, is the co-founder of Wired magazine, a best-selling author and a futurist. He had the good fortune to stumble onto the Internet when it was just getting started, and he had the foresight to understand it was going to change everything.
KELLY: I said, “There’s something important happening here for the first time. Here’s technology that seems more human scale. It seems more organic. It seems lifelike in some weird way.”
Welcome to People I (Mostly) Admire, with Steve Levitt.
What I find most remarkable about Kevin Kelly is his endless curiosity, which has allowed him to keep his finger on the pulse of fast-changing technology for four decades. My hope today is both to get a sense of how we got to where we are today, technology wise, and especially to hear where he thinks we’re going with innovations like artificial intelligence.
* * *
LEVITT: So I have a friend named John List and he’s a prominent economist who has the misfortune of sharing his name with a mass murderer. So when you Google him, the first John List you see is not him. Instead, you see the mass murderer. I had the opposite experience with you. When I was trying to prepare for today’s conversation, I went first to Amazon.com and I typed in “Kevin Kelly books” and a bunch of books on technology showed up, which I figured were probably you from everything I know about you. But then there were all sorts of other books about photography of Asia, and a graphic novel and a book of advice. And I thought, “God, this is so frustrating. When someone’s got such a common name, there are a bunch of Kevin Kellys who are all authors.” I slowly and disbelievingly came to realize that the same Kevin Kelly, you, had written all these different books. And I have to say, damn, you have really lived a full life.
KELLY: I have. I’ve been very, very blessed. One could say that I’m very good at doing lots of little things and I have sort of refused to go too far deep in one direction. So that’s a curse of a journalist. You know enough to be dangerous.
LEVITT: Your biography seems to me refreshingly free of a master plan. I’m curious, can you remember what life you thought you were going to lead when you were 18 or 19 and just getting started?
KELLY: The earliest memory that I have was, I don’t know, maybe I was 12 years old and I had this idea that what I wanted to do in my life was to build my own house that was completely self-sufficient and self-recycling and nothing was ever thrown out. There was something elegant about the logic of that. And I was a maker as a kid. I made stuff all the time and I thought that would be the ultimate thing. And then, when the Whole Earth Catalog came along, when I saw that in my senior year of high school, that was my answer.
LEVITT: I have to profess ignorance. I’ve heard of the Whole Earth Catalog. I don’t even really know what it is though.
KELLY: Steve Jobs called it the internet before the internet. It was all the information that you get from the internet and blogs and YouTube, on newsprint. There was, before that time, no other way to find out how to do things in life. Say you wanted to become an advertising agent. Say you wanted to build your own home. Say you wanted to fix your own car. Where would you go to figure out how to do that? You couldn’t go to the library. They didn’t have that kind of information. If you wanted to buy some ball bearings for something, how would you find out where they were? And so the Whole Earth Catalog was like the first attempt to round up all this kind of enthusiast knowledge and say, “Here’s what you want to know if you want to start your own school. Here’s what you need to know if you want to learn these new things that we call computers.” The Whole Earth Catalog disappeared as soon as the web came because it was the web on newspaper.
LEVITT: College was never part of your plan. Is that true?
KELLY: No. It was actually always part of the plan when I was in high school. That was what everybody was aiming towards. I was both an avid science nerd, doubling up in math and science, and taking every single possible course I could — and art. And in high school I discovered photography, which was just for me a nice combination of the technical and the artistic. But when senior year came around, I just needed to do something other than sit in a desk for four more years. I needed an internship. I needed a gap year. I needed some kind of graduate-level project, but there wasn’t any like that. And believe it or not, you know, I applied to, I think six colleges that I saw in a book just at random. I wound up going to the University of Rhode Island without having ever visited any campuses. And I was very disappointed because it was literally a little tiny state school in the middle of potato fields. It was just the wrong place to be. I just said, I can’t be in grade 13. I need to do something. So I dropped out.
LEVITT: What was your something?
KELLY: Well, my something was photography. I did a workshop in photography because I was only self-taught and photography in those days meant that you had to do the chemistry. You know, you’re doing the chemical developing and the optical stuff and it was very, very technical and then I decided to read books for a year and I was reading books every day, all day. And I happened to read Leaves of Grass by Walt Whitman. And it just blew my gasket. It just — I had to travel at that moment. He completely infected me with his contagious enthusiasm. And it’s like, “I have to see this!” ‘Cause I’ve never been out of New England. I’ve never been anywhere. We didn’t take vacations. So I had a friend who was studying Chinese in Taiwan. And my dad had a friend who lived in Japan and so I said, “Okay, I’m going to go and photograph in Taiwan, China.” And I called up — I called up National Geographic. I found the photo editor’s name in the phone book. I’m 19-years-old and I say, “I’m going to go to Taiwan and Japan. Do you need any photographs?” And bless his heart, Bruce McElfresh, he said, “You know, it doesn’t work that way, but when you come back, show me your photographs” — which he was true to his word. I took a train to Washington and I started showing photographs to National Geographic. I was kind of resigning myself to never having any money, to having lots of time, and maybe building a house that was completely self-sufficient in the woods, doing my little photography, maybe doing some science experiments, making things, which is what I love to do. All that has happened since then would’ve been beyond my belief.
LEVITT: So eventually you worked your way into the middle of the technology boom, and you’ve been there ever since as a close observer of those trends. And I’m curious: what do you think the biggest technological inflection points have been in our lifetime? So the developments that have changed the way we live or the way we will live. What are the biggest ones?
KELLY: Computers themselves were actually quite boring. My dad actually worked with computers and took me to a computer show in, believe it or not, 1965.
LEVITT: Must have been a big building to hold the computers, right?
KELLY: It was a big building and they were big machines. And I was so, so unimpressed. And I basically said, “These are not computers. I know what computers are. Computers are things that you talk to.” I read science fiction. These are things that didn’t even have screens. They had a big typewriter that would print out the answers or the results. You talk to it by having a stack of punch cards. It was like, I have no interest in this. It wasn’t until we married the telephone to the computer that they became interesting to me, and interesting to the world. When we put the modem onto a computer, that was the internet. They became a communications device. And communications is sort of the foundation of culture. And so now we were suddenly accelerating and amplifying the culture and that was incredibly important. I had early access because I became a travel writer in some ways. And I decided to write about this very, very embryonic place — the online world of bulletin boards — as if it was another country.
LEVITT: I experienced a little bit of things like bulletin boards, and I could not have been less impressed. I did not see any future at all. For me, the first moment where I had any inkling that it possibly could be useful was when search became easy, which was a long time after what you’re talking about, right?
KELLY: Right, for those several decades there was an internet. Nobody was very impressed. You had to know a little bit of command code to get anywhere. And it was sort of ignored, even though people like me were saying, “No, this is coming. This is coming.” And it was ignored until this moment in around ’92, ’93 when we had a graphical-user interface and the web, and suddenly bang, everybody got it. You could do search, you could drag and drop stuff. And that’s when everything took off and everybody started paying attention. That was when Wired began, and we were at the center of that. And by the way, fast forwarding a little bit, I think this is exactly the same kind of excitement and energy that we’re seeing right now with A.I., which has been around for at least 10 years in its capabilities. And then suddenly we have the conversational user interface. And then all of a sudden you can talk to it in a conversational dialogue way, which is very, very human-like. It’s now the big thing because everybody can understand it and interact with and experience it. And they’re going, “Oh my gosh!” The capabilities of this have been around for a while. But now you have the conversational-user interface, which we’re going to apply to everything, just like we applied screens, the graphical-user interface, to everything. So I would say that communications aspect, linking everybody up — and of course, the smartphone and carrying those things in our pockets, enabling billions of people to connect to each other — that only happens one time on the planet. And we are alive for that moment.
LEVITT: I have to say on cell phones too, I could not really see what the point of one was in the beginning when it was just for calls. And I have turned into — I literally live only on my cell phone. I don’t use a computer anymore, except in extreme cases. I have to be beaten over the head by the technology to want to adopt it.
KELLY: I mean, the hardest thing about thinking about the future or trying to describe it is letting go of what we expected it to be and letting go of the current models. And the thing about the future, that we can almost promise, is that it’s going to be unreasonable. You know, what’s going on today, if you tried to pass that off 20 years ago, nobody’s going to believe you. Wired is 30-years-old this year. At the 25-year anniversary, I went back and looked at all the early issues to see what we officially thought the future of, say, the Internet was going to be. And in the very, very beginning, even Wired got it wrong because what we were kind of expecting was better television. Like, “Oh, there’ll be 500 channels.” Well, the idea that there would be, like, 5 million or 5 billion channels with, like, YouTube. We had trouble believing that anybody could produce quality video in their bedroom.
LEVITT: Now, these developments, the ones that have actually happened, like the web, YouTube, A.I., they seem inevitable, preordained. But do you think that’s true? Or do you think we could have found ourselves in a very different technological world if, you know, a few key people hadn’t been born or some critical investment hadn’t been made — do you think there was a real shot at these technologies not being there?
KELLY: I wrote a book called The Inevitable, so I am obviously biased in that general direction. I think the larger forms of these technologies, the larger contours, which are dictated by physics and chemistry and just the nature of matter — you can do certain things, you can’t do other things. I think the larger outlines of these technologies are inevitable. Once a planetary civilization discovers electricity, and maybe makes speakers, they’re going to have telephones. So the telephones are going to be inevitable, but the iPhone is not necessarily inevitable. The actual specifics are inherently unpredictable. The Internet was inevitable. But the character of the Internet, like who owns it? Is it open sourced or closed source? Is it for-profit or non-profit? Is it transnational or is it nationalistic? And so those characters, we have a lot of choice about. And they make a huge difference to us. I would say that applies to biology — which is, by the way, a controversial statement. If you have evolution of life on a planet similar to the earth, you’re going to get quadruped animals because that is physically very stable. That’s just a natural solution. So quadrupeds will be inevitable; the zebra is not.
We’ll be right back with more of my conversation with Kevin Kelly after this short break.
* * *
LEVITT: You and everybody else seems to think that A.I. is going to be completely transformative. And I don’t really understand. Can you try to put in really simple terms of what you think A.I. is going to do that’s going to be so awesome?
KELLY: It’s like electricity a little bit in the sense that electricity enabled so many other things to happen. And A.I. is this enabling technology. So dealing with an A.I. directly, yes, that’s one form of it, but that’s actually the least important form of it. Most of the A.I. that’s coming will never be visible to you. It’s going to be behind the back offices buried deep into the underground. It’s like infrastructural. It’s going to be used to help us find new drugs. Already it’s happening for that. Here’s a drug that works. Find me something similar. It’s going to be used in medicine for diagnosing things. The A.I. doctor today is not as good as a human doctor, but a human plus an A.I. is better than either a human alone or an A.I. alone. And that A.I. doctor, even though it’s not as good as a human doctor, is a million times better than no doctor. So in medicine it’s this very, very large enabling effect. And that goes across all the other industries where there’s kind of like a bumping up in the levels caused by A.I. independently of our own interactions with them. The way to think about it is not as a singular thing. It’s A.I.’s, plural. We’ll make hundreds, thousands of different varieties of them. And we are going to use them to work with our own minds. There are problems that our own kind of intelligence is probably incapable of solving alone. So solving quantum gravity, dark matter probably is going to require inventing other kinds of minds that we can work with and that’s, of course, going to be incredibly impactful.
LEVITT: So you describe A.I. in really glowing terms. Other people are terrified of A.I. You know, the simplest thing to be afraid of is that you’ll lose your job.
KELLY: I prefer to talk about the actual evidence of how these things are being used rather than their imaginary harms. And so I’ve been trying to track who actually, to this date here that we’re recording, has actually lost their job due to A.I. Medical transcribers — you’re a doctor, you’re looking at an X-ray and you dictate what you’re seeing. And some human was transcribing that into texts that could be shared. Well, the A.I. transcription is just so good that those jobs are going away. But I have not been able to find a single artist anywhere who’s actually lost their job due to A.I. You might lose your job description — you wind up doing different tasks — but you’re unlikely to lose your job. And that’s been the case so far in technology. And we find it really hard to imagine what those jobs would be. You know, if in the 1900s, if you told your grandfathers that there would only be 1 percent of farmers, they would say, “What are we going to do?” And you would say, “Oh, I’m going to be a mortgage broker or you’re going to be a web designer, you’re going to be, you know, an athletic yoga coach.” They would have no clue what you’re talking about. The number of new jobs that are going to be created by A.I. will exceed whatever losses there are. If A.I. is as powerful as I am suggesting it is, that means it’s going to introduce problems as powerful as it is. The more powerful the technology, the more powerful the problems. So I’m not a utopian, I don’t believe that we’re headed into something where there’s a diminished number of problems. I’m an optimist because I think our capacity to solve problems is increasing even faster.
LEVITT: One thing that confuses me when I try to imagine the path by which A.I. takes over everything — right now, training an A.I. system requires lots of well labeled data. So if you want an A.I. system to distinguish between cats and dogs, you need to show it images of known cats and images of known dogs. And once you do, A.I. is shockingly good. Or maybe chess is even a better example. All you have to do is tell the A.I. system the rules of chess. And because it can play against itself and gets really strong feedback and it learns which moves work and which don’t, I mean, within hours these amazing systems are better than any human’s ever been. But what’s not so obvious to me is how A.I. systems will get the training data and the extensive, reliable feedback that they’ll need in many of the real-world settings that matter. One of the things I would love as an academic economist would be to have an A.I. friend who would just feed me great ideas for academic economics papers. Do you think that it would be enough feedback for that A.I. to throw out a hundred ideas at a time? And I’d say, “Oh, I like that one. I don’t like that one. I like this one. I don’t like that one.” Do you think it would take almost an infinite path to get to it actually knowing what a great idea was, in my opinion?
KELLY: No, I think we’ll close in on that fairly soon. But let me say something about the current state. So you mentioned dogs and cats. The way that these large language models work is they have to train on millions of examples. A million dogs and a million cats, and they know it. But a human toddler can learn to tell the difference between a cat and a dog with just 12 examples. So we have an existence proof that you don’t actually need to have these kind of large language training sets. That’s important because the only people who can afford to have billions of training sets are the big companies. You have Open A.I. being kind of disruptive, but they had a lot of money behind them. And so if you don’t need that, if you could have these really small sets and have A.I., you take away a lot of the issues of this curation and the ownership stuff. And it means that there’s a lot more room for the small startups to have some impact. Right now, we’re making a lot of assumptions about A.I. based on the current version of them. And again, we want to kind of liberate our minds from the current thing —
LEVITT: We know I’m not good at that.
KELLY: The shock that we’ve had this year is to see that these current models actually can produce, what I call, small-case or minor creativity. They’re trained on the average of human content creation, so what you’re getting is the wisdom of the crowd average and you have to kind of work with them to get them out of that average. We can maybe do better by training them on only the best stuff and that might help. But they will get better. They will be capable of becoming our partners. But they’re still going to be much more likely to produce something that a typical human would produce because they’re auto-completing. One of the best things I find these A.I.s useful for is synthesis, where they’re able to like take a spreadsheet here and a spreadsheet here, and merge ’em. Or take this field of knowledge and this field of knowledge and ask what would be a question or some way that the two come together. Or take this kind of painter and this painter and merge their styles. That is something humans could do, but it just takes us so much time to do it that we can’t do it in kind of a frivolous way. We’ll make it into a Ph.D. project. It’ll take four years. And it seems like they’re geniuses. But in fact, we’re going to kind of look back at this time 30 years from now and realize the problem was that we thought that creativity was some elevated divine level and it turned out to be fairly easy to make.
LEVITT: Do you think there’s any realistic way to regulate A.I. or to use public policy so it does maximum good?
KELLY: Yes, I do. I think it has to be regulated. But I think it’s really, really misguided to try and regulate something when we don’t have any idea what it is. We don’t even know what it could do or not do. We’re just discovering that, and yet we’re already trying to regulate it. I think it’s closer to like medical information. It’s a really bad thing to try to devise policy on, like, one or two medical studies. It probably takes around a hundred studies to really get close to what’s going on in something. One or two medical studies, they’re data points themselves. They’re just not sufficient to make policies and we don’t have that level of evidence yet in this stuff to be confident that any regulation that we do would even work because this is so new. And that’s why I preach this idea of street use. We have to use things. We cannot figure out what they’re going to be like by thinking about ’em. They become so complex that predicting their next thing is basically impossible. And so the only way we can regulate them is through the evidence of what they’re actually doing. And I think that’s going to take time, maybe even a generation.
LEVITT: One of the things I find interesting about you is the enthusiasm with which you embrace complexity. You don’t shy away from problems that have complex dynamics, and you find ways to be thoughtful and to make predictions. I presume this delight in complexity, it’s a conscious choice, right?
KELLY: It’s interesting that you put it that way. I’ve never heard it that way, but I think you’re correct. One of the most formative experiences I had was when I went to the first Artificial Life conference in 1987, and it was produced by the Santa Fe Institute, which was something nobody had even heard of at that time. Later on, they started to label what they were doing was the study of complexity. That’s sort of the term that we use now for the kinds of things that in the past we’ve always admired, like, life is a complex system. So maybe a better way to say is I have an affinity for systems and whole systems thinking, sometimes called cybernetics. And one of the things that that often propels you to do is to take a longer view of things because the systems that I’m interested in are operating at a scale where they’re very long term, they’re very long engines. I find that taking a long-now view helps understand what’s really going on and helps direct where we’re going and is, in part, the source of my optimism. Because the longer view you have, I think the more optimistic you can be.
LEVITT: So I have to confess, I’m terrified of complexity. I have — I’ve spent my whole life trying to avoid it in my academic life, picking simple, easy problems.
KELLY: Yes. Right, right. Right, right.
LEVITT: You know, is your real estate agent ripping you off? Are teachers cheating on behalf of the students? These are simple problems I could answer. And I very self-consciously steered myself away from macroeconomics, which is the part of my profession which does tackle complexity. But hearing you talk about complexity and thinking about how economics deals with macroeconomics, they’re completely opposite. The macroeconomists, they also abhor complexity. They have straitjacketed their study of the economy into these incredibly simplified models that are no longer at all realistic, but can be solved using the tools that we use to come up with very simple answers. And so I’ve actually been really frustrated with macroeconomics for a long time. And I think if there were some way to push economists towards your embrace of complexity, it could really do good in macroeconomics.
KELLY: Well, there is a whole contingent at the Santa Fe Institute working with economists to try and do exactly that.
LEVITT: But when you say a contingent, you mean like five out of a thousand. So I understand there are a few, you know, people on the edges, but they’re not easily accepted into the discipline. They’re very much on the fringes and mostly publishing in other kinds of journals.
KELLY: You’re absolutely correct. And, even at their best, a lot of these approaches don’t give really satisfying answers. They often will give you kind of vague trends and probabilities more than anything else, which policy people have no use for because they want some things with answers. If you’re going to be comfortable with complexity, you’re probably going to have to be comfortable with uncertainty, and that makes a lot of people very uncomfortable.
LEVITT: The academic macroeconomists, they might disagree, but my characterization would be they’re not really about explaining what happens in the macro economy anymore. Their focus has become: can we write down a set of internally consistent models and talk to each other about those models? And it seems like there’s a spot in between what’s happening at the Santa Fe Institute and what’s happening in academic economics, which is saying, “Look, I don’t exactly understand why different things are happening in the macro economy, but I’m going to just show you some patterns and I’m going to do more descriptive work and be content to then maybe try to do prediction, knowing that your predictions are coming from a model that isn’t based in traditional ideas of causality. It’s based in more complex new data methods that are black boxes.” And economists abhor black boxes.
KELLY: So I would make a little prediction, being a complete outsider to this which is: I think A.I. will begin to transform this little corner of economics in the same way you just suggested. It did with language translation. The way that Google and these current models do language translation, they bypass the whole idea of what’s the theory of language. You know, the joke was that they made progress to the degree that they fired the linguists. And all they did was to say, “What’s the pattern of a language and spelling? And we’re going to give you the next word based on that.” Brute force patterns. And so I can easily see that starting to head into the macroeconomics, where they’re applying A.I.’s and say, we don’t have a theory, but we can definitely tell you what’s likely to happen next. And there’s going to be people wringing hands and getting upset because we don’t understand what the models are, but they seem to work. And going back then to some of the problems of A.I., to me it’s not unemployment, it’s closer to this — that we’re going to have things that are making decisions, that are doing things, and we don’t understand how they work. And I think it’s going to be a tremendous learning process to do that. The thing to keep in mind, of course, is that we’re somewhat comfortable dealing with humans who have no idea how their brains work either.
LEVITT: I once did a consulting project with an airline and we were trying to work on pricing. And I said, “Well, how do you set prices?” And they said, “We input a bunch of things into this model and it tells us what the prices are.” And I said, “How does it work?” And they said, “We have no idea. We hired this guy from U.C.L.A. and he built this thing and he’s not around anymore. So we just put them in and hope for the best.” And we actually tried to do a pricing experiment, and the prices didn’t change. So in the end, they paid me this big consulting fee, and then we never changed a single price because nobody could figure out how to actually change prices. And I think, writ small, that’s what you’re talking about.
KELLY: What we’re also doing these days is we’re building another set of A.I.s, they’re called explainable A.I.s, to try and explain what the other A.I. is doing. The thing about that is, that is the first step towards consciousness, where you have a reflective intelligence trying to communicate what the rest of the intelligence is doing. Most of the A.I.s that we’re going to deal with won’t be conscious because it’s a total liability. You want them focused. You don’t want them distracted, thinking about whether they left the oven on, or whether they should have majored in finance. But we’ll be able to do it when it’s necessary and useful.
LEVITT: One of the projects you’ve worked on is creating a clock that’s designed to keep accurate time for the next 10,000 years. Can you explain the motivation for building that clock?
KELLY: So first of all, it’s huge. There’s multiple versions of it. One version of it is in the London Science Museum, but there’s a very large version that’s almost complete inside a mountain in west Texas. It’s nearly 500 feet tall, inside this mountain, and it’s a clock that does not run on electricity. Yet, it actually is a calculator and it’s calculating the time and ringing a set of chimes every day at noon that will ring a different melody for 10,000 years. And so the clock is ticking, so to speak, without human attention or input by itself. But the clock doesn’t display the time unless humans go and visit it and turn this turnstile and bring up the clock to the current time to display. So there’s some need for humans to visit it. The purpose of the clock is primarily to have people say, “What’s the purpose of the clock?” And begin to think about long-term time. It’s like, if you had a clock that could tick for 10,000 years, what could you do in 10,000 years? What would it see?
LEVITT: 10,000 years is a long time.
KELLY: It’s a very, very long time.
LEVITT: I mean, I think you accomplished your goal with me ‘cause I said, “Well, if you go back 10,000 years, there is agriculture. I don’t think there are any cities. I don’t think there’s any written language. We are barely getting going.” So it’s hard to imagine that far forward.
KELLY: And that’s why it was picked. Ten thousand years ago was roughly the beginning of our civilization. It is impossible to imagine what we would be like 10,000 years from now. But the point is that by thinking about time at that scale, you have permission to think about it in terms of generations. Like, what if we just thought about a hundred years into the future? What if we started something that might not be done in our own generation? And it would benefit people just as we have benefited from the past couple hundred years of people making roads or buildings that we enjoy and use today. If you have a longer horizon, it permits you to overcome even fairly large disturbances. The beauty of compounding growth is that a small amount, steadily compounded, can overwhelm, so to speak, over the long term, even fairly major setbacks and disruptions. That’s sort of the stock market idea, and it’s basically been true. So our idea of the clock is to use it as like an icon, a mythic icon, like the picture of the whole earth. When people saw the picture of the whole earth in the early seventies for the first time, there was a sense of, oh, you can’t throw garbage out because it doesn’t go anywhere. It’s all there. You know, there’s a spaceship Earth. It’s a little system that we have to take care of. And it’s sitting in this vastness of space. It’s very fragile and very precious. We wanted something similar, where there would be a way in which this time machine would help us think about time differently.
LEVITT: One of the things that I found so interesting was how you think about if you want a clock to last for 10,000 years, what are the design elements to it? It goes beyond just materials. You got to make sure that people don’t want to vandalize it.
KELLY: Exactly. I did a whole study of time capsules in preparation for this. What a revelation that was. Almost 95 percent of time capsules are lost track of within five years of being buried. There was a time capsule I went to visit at the San Francisco Airport that had been buried in the fifties, I think, or sixties? It was completely lost track of. They uncovered it in construction in the mid-seventies and then they lost track of it again. And then they discovered it close to when it was supposed to have been opened in 2000. And it was just so disappointing, because the things that people included, which they thought were so, so important, there was just no interest whatsoever in them. And the things that we really want to know, they would’ve never thought about putting in. So the best time capsule is actually our garbage dumps. Because they’re saving everything. And that’s what you kind of want: all the things that we don’t think are that important. So thinking about long-term clocks, there is a sense in which, how do you not lose track of it? How do you prevent vandalism? How do you keep it working the entire time, ‘cause it needs some maintenance? What does that look like? And, the most important thing is, how does it remain relevant? We did a study of long-term digital storage and it turns out it’s horrendous. All those floppy disks are unreadable, right? The only way to really maintain information over time is to exercise it. I call it, not storage, but movage. It has to be constantly moving and it has to be cared for.
LEVITT: This clock is being built by a nonprofit. It’s called the Long Now Foundation that you co-chair. And the premise of the Long Now Foundation is that society isn’t thinking long term enough. And I can see strong moral arguments for being good ancestors, especially when it comes to irreversible things — extinctions, the climate cycle being knocked so far out of whack that earth becomes unlivable. But more generally, and especially for somebody like you who’s an optimist, isn’t it also likely that future generations will know much more than we do? They’ll be far richer than we are. And isn’t there also a pretty strong case for just looking out for our own interests and knowing that because of progress, the next generations are going to do fine and we don’t really need to worry about them?
KELLY: Yeah, I think we are really careful not to make hundred-year plans or thousand-year plans or to plan for the next generation. What we’re trying to do is to increase the possibilities for the next generation. We don’t want to close down those choices for them. We want to open them up and let them decide. And in fact, we are counting on them figuring out things that we can’t figure out. So even the clock, by the way, is left unfinished. There are aspects of it that will be completed by future generations. So it’s not deciding their fate as much as it’s equipping them to make great decisions themselves. I think that distinction is very important. And often, in addition to talking about long-term responsibility, we talk about long-term imagination. That’s the thing that’s the hardest to do: to try and imagine new choices and possibilities that we don’t have right now. Imagine ways in which we can do things better. Part of my critique of the A.I. folks who are concerned about the end of the world is that they overestimate the value of intelligence. There are a lot of intelligence guys who think intelligence trumps everything, but most of the great things in the world are not happening because of the smartest people in the room. They’re happening because of people who have enthusiasm, who have imagination. Smartness and intelligence is one component, but if you put a man and a lion in a cage, it’s not the smartest one that’s going to win. It’s only one part of what we need to make things happen in the world. And the key thing is imagination. Imagining what could be, what we’d want, an alternative way of doing things. And that’s not just I.Q.
You’re listening to People I (Mostly) Admire with Steve Levitt and his conversation with Kevin Kelly. After this short break, they’ll return to talk about Kevin’s new book.
* * *
Kevin Kelly is a man of surprises. His latest book is a collection of advice — bits of wisdom he started writing down for his children on his 68th birthday. But as I’ve learned from this show, it’s often the smartest people who give the worst advice. I want to talk to Kelly about how he avoided that trap.
LEVITT: I’ve been making the case that you’re obsessed with complexity. And then you write your latest book, which is the exact opposite. It is a wholehearted embrace of simplicity. It’s a collection of small bits of wisdom that you’ve learned along the way that you wish you had known when you were younger. Were you surprised to find yourself contemplating publishing a book of this form?
KELLY: Yes. It was a completely inadvertent book. I’d been a fan of proverbs and pithy life-guiding advice since Catholic school, where even though I didn’t believe in God, I believed in the gospels. The sayings of Jesus were just like, man, that’s powerful stuff. So I was always collecting quotes, and I love the telegraphic potency that they have. The way they can kind of expand in your own mind, the way that they compress a lot of knowledge. And the best ones would often have a little twist in them.
LEVITT: I was skeptical of your book. It’s called Excellent Advice for Living. When I started this podcast, I had thought that if I could get these amazing guests on the show, one of the most valuable things I could do would be to get them to give up pearls of wisdom. And frankly, the advice they gave wasn’t very good. So I just stopped asking. But honestly, I really liked your book because it surprised me. It’s not just that the advice itself resonated with me. It’s that the advice, it has a personality. And I felt like I was getting a fascinating window into your life and personality.
KELLY: I take that as a compliment. One of the little bits of wisdom is: “Art is in what you leave out.” And a lot of my time with the book was trying to remove as many words as possible to distill this to its essence by leaving as much as I could out.
LEVITT: I had Rick Rubin on the show recently and that’s his philosophy 100 percent.
KELLY: I’m a born editor. I’m not a born writer. And so the editing part of this is really where I shine. But the other part was, I’m in some ways channeling the ancients. A lot of the wisdom has been circulating for thousands of years through the Stoics and Confucius and the Bible. But I was trying to put it into my own words, make it more vernacular and, if at all possible, to make it unpredictable. Part of the thing with A.I. these days, one of the things I say is: a really worthy goal is to arrange your life, or become something, where you’re not predictable by A.I. Again, A.I. is a prediction thing. It’s going to try and guess what the next average human would say, and you don’t want to be the average human if at all possible. You want to be you. And that’s sort of what I’m trying to do with this: to say it in a way that’s never been said before. And that’s hard with things that have been said before.
LEVITT: Daniel Pink, the psychologist, he blurbed your book. He said something like, if you don’t find at least 17 golden nuggets of advice in the book, you’re not awake. And I like that. So I did find at least 17. Can I read a few that I liked and get your reaction on?
KELLY: Oh yeah. Let’s — I like to hear what some of your favorites are.
LEVITT: One of my favorites — and precisely because I believe it 100 percent now, but I didn’t realize it when I was young. And it resonates with something we’ve talked about earlier in this conversation already. So the proverb you — I don’t know, do you call ’em proverbs? I don’t know what the right thing to call —
KELLY: Proverbs, adages, lessons, maxims —
LEVITT: The adage that you created says: “Being enthusiastic is worth 25 IQ points.” I love that. It’s absolutely true.
KELLY: This mostly came from our hiring spells at places like Wired, looking at the people that we were hiring. And that trait of enthusiasm, of commitment, of being all in, of being positive, of being encouraging was one of the central things that we were hiring for, because we were hiring people to make the web and there was nobody who had web-making skills. And so the phrase I used was “hire for attitude, train for skills.” That became what I looked for — one of the things, of course, that you’re looking for — in someone that you want to work with.
LEVITT: So another one I liked, because I know it’s true but I still struggle to live up to it, is: “No is an acceptable answer, even without a reason.” I’m not sure I’ve ever said no to anybody without a reason. And actually I’m taking your adage as my permission to start doing that.
KELLY: I learned that from Stewart Brand, actually, my mentor at Whole Earth, who everybody would tell you was one of the most expert no-ers. He would say no so politely, so forcefully. His no didn’t have excuses, but his no’s would always come couched in a way that made it good for you that he was saying no. That was the genius. And that’s how I like to couch it. It’s like, “I’m saying no because I can’t give the kind of attention that your project deserves.”
LEVITT: Okay, maybe the most profound one for me: “We tend to overestimate what we can do in a day and underestimate what we can achieve in a decade.” Can you talk about that one?
KELLY: It relates a little bit to long-term views and optimism, which is: bad things happen fast and good things happen slow. Bad things happen fast because bad things are more probable. Good things, great things are improbable, take a lot more work. Slow progress is still progress. And so you want to raise your time horizon. And as you do, you are able to accomplish bigger and better and gooder things.
LEVITT: What was interesting for me is that I could not be more aware of the fact that I write down what I want to do in a day, and I never do everything I think I can do. And without thinking about it, I’ve always applied that same logic and assumed that it was true over a year or a decade. But your proverb slowed me down and I said, “Wait a second. Let me think about 10 years ago.” And I thought, wow, yeah, how great some of the changes have been. And then I thought about a different 10-year span. And it’s interesting that it took a little proverb near the end of your book to open up an insight which is so obvious once you think to look at it, but I don’t think in my entire life I’ve thought to look at it that way.
KELLY: I’ll take that as a compliment. I am glad that you got that out of the book.
LEVITT: A lot of the messages center around the importance of kindness; of doing things for other people. Have you always emphasized kindness in your life or is that something you only learned along the way?
KELLY: I think I got that from the gospels. I think I got that from Jesus. That was a very prominent injunction that I believed even as a kid. And as I said, even when I didn’t believe in God, I believed in that. And at first I believed it because I was told it, but now I believe it because I have complete, hundred-percent experience. Like, hatred is something that is cancerous to yourself, and you want to let go of it because it’s really going to affect you more than anybody else that you’re aiming it at. I have real-world experience, life experiences, of ways in which kindness is not a weakness; kindness is a strength. And so that sense of trusting strangers: sure, every once in a while you’re cheated. But that’s a small tax compared to the overwhelming abundance of goodwill that you’ll get from people that you trust. So whereas I had it kind of on faith, literally on faith, now I have it out of experience, and I think it’s even wider than it was before.
It’s interesting to me that kindness, as important as it is in society, and as prominent as it is in religious doctrine, is almost completely absent from economics. I never really thought about why that was until the conversation today. I don’t think the explanation is the obvious one, that economics assumes people are selfish and therefore we put no value on kindness. I actually think the main reason economic models don’t include kindness is far more subtle. So first, let me define what I mean by kindness. I’m talking about little acts that cost you almost nothing to do but potentially have big positive benefits. A friendly smile, throwing your trash in a garbage can instead of on the ground. When you have a full shopping cart at the checkout lane and you’re not in a hurry, letting someone with just one item go in front of you. So here’s the thing: economics is all about tradeoffs. For something to play an interesting role in an economic model, it has to have both costs and benefits. And then the economist’s job is to figure out how to optimally balance those costs and benefits. But the acts of kindness I’m talking about have only benefits and no real costs. So economists don’t even bother modeling them; to an economist, it’s completely obvious that you should do lots of a thing that has only benefits and no costs. So how does economic thinking actually differ from the Christian scriptures on the subject of kindness? Economists think it’s so obvious that you should be kind, they don’t even think you need to remind people; whereas the gospel is full of reminders. So having known a lot of economists who aren’t very kind, I have to say, on this one I think the gospel wins out over economic thinking.
LEVITT: So now it’s time for our listener question segment. And as always, our producer, Morgan, joins me. Hey, Morgan, how are you doing?
LEVEY: Hey, Steve.
LEVITT: So Morgan, usually you ask me a question first, but this time, I need to apologize for my answer to last episode’s listener question where I just completely botched it. Do you want to remind people what that question was?
LEVEY: So our listener Zane wrote in about access to academic research. Published papers are usually hidden behind a paywall controlled by the journal they were published in, and these journals charge huge amounts of money for subscriptions. And so we were talking about the value of giving access to more people. And at the end of the question, you said that you didn’t think these journals were really making a lot of money, which, to be fair, you were talking about your personal experience as an editor at a not-for-profit academic journal at the University of Chicago. However, for the for-profit journals, it is definitely true that they are making a lot of money.
LEVITT: Yeah, a lot of money. Billions of dollars and with huge margins. And listeners started writing in and saying, “Oh, no, you got this totally wrong.” And I honestly couldn’t believe the numbers when I actually looked at the profit margins and the amount of money the for-profit journals were making. It was really an eye opener for me. And look, I’m not afraid to say when I made a mistake. And I really got it wrong last week. So I apologize to everyone for that. And I will try to do better this week. So Morgan, what are you going to give me a chance to botch this week?
LEVEY: So, a listener named Juan wrote in, and he wanted to know if the vote of an expert or a scientist should have more weight than the vote of an average citizen. He came up with this question after listening to our episode with David Keith, who’s a climate scientist and a fan of geoengineering, and he wanted to know if David’s vote should be worth more on climate-related issues. What do you think of the idea of a weighted vote?
LEVITT: So I think it’s really hard to disagree with the principle that the people who know the most about a topic should have more say in how that topic gets decided. I don’t have a particular liking for the idea of doing it through voting. It’s so complicated in so many ways that it’s hard to even think about what you’d do. So what’s the problem? First, Juan wants David Keith to have extra weight on climate issues, but we don’t have direct democracy. We elect officials. So when David Keith went to the polls, he would have to cast a vote for whoever was running, and somebody would have to decide that David Keith was more important and more informed than someone else, and that he should get 77 votes whereas Steve Levitt should only get 14 votes, because he’s not as informed. It’s just a complete nightmare; it’s constitutionally impossible, it will never happen, and if it did happen, it wouldn’t happen well. But I wouldn’t give up hope about the basic idea, because what Juan’s really getting at is: how can we make it so that experts have more say on the topics they know something about? And the beauty is that the system already has that built in really strongly. Think about the amount of influence that David Keith can have through other channels, whether it’s op-eds or coming on this podcast or lobbying directly. It turns out that companies and special interests spend way more money lobbying, trying to convince legislators to vote a certain way directly, than they do making campaign contributions, where they try to get different people elected. And honestly, someone like David Keith, a former Harvard professor who’s now a University of Chicago professor, who’s very public-facing, who spends a lot of time trying to communicate to the outside world, has an enormous influence on policy.
In fact, I wouldn’t be surprised if David Keith’s opinion is worth 10,000 or 50,000 — 100,000 votes because he can influence so many people either through directly communicating to the public or through lobbying efforts where he talks to legislators to get things done that he’d like to get done.
LEVEY: So we should say that we actually already have a weighted voting system in this country. For presidential elections, the Electoral College is a form of weighted voting. Though most Americans are actually in favor of abolishing the Electoral College in favor of a popular vote instead.
LEVITT: Now, just to be fair, Morgan, Juan has a sensible weighting in mind. And the Electoral College is not a sensible weighting. It is a semi-arbitrary historical artifact that nobody in their right mind would go out and put into place now. So Juan is way ahead of the Electoral College.
LEVEY: In terms of experts having an impact through their communication with the public, do you think it has now become the job of scientists to communicate with the public, in a way that historically wasn’t true for that profession?
LEVITT: I think many more academics and scientists are actively engaged in talking to the public. It’s become easier to talk to the public. There are a lot more avenues for doing it. It’s not that it didn’t happen in the past. Milton Friedman had Free to Choose, which was very influential in the 1970s. I think Albert Einstein spent a fair amount of time talking to the public. Linus Pauling was a Nobel Prize-winning chemist who actually won the Nobel Peace Prize as well because he pushed for nuclear disarmament. What I get nervous about is that a lot of scientists use the halo of being an academic or a scientist to talk about things they don’t really know very much about. And that’s dangerous, because I’m not sure there’s much evidence that if you’re really good at being a scientist, you’re also a really great judge of human character, or of what the stance of the U.S. government should be on foreign policy. So I think that’s where people should keep their antennae up: is a scientist talking about his or her science? Or is the scientist using their position to amplify their voice about something that maybe they aren’t really expert in?
LEVEY: Juan, thank you so much for your question. If you have a question for us, our email address is pima@freakonomics.com. That’s P-I-M-A-at-Freakonomics-dot-com. We read every email that’s sent and we really look forward to reading yours.
LEVITT: One thing that happens a lot and we’ve never made explicit is that listeners write in with follow up questions for the guests. And we always try to get those questions to the guests and have them either respond directly to the listener or do it over the air. So just to make it completely explicit, if you have a question for Kevin Kelly today or any guest in the future, send that question our way and we will do what we can to try to get an answer for you.
We’ll be back in two weeks with a brand-new episode featuring Talithia Williams. She’s a statistician and mathematician at Harvey Mudd College who’s found incredibly creative ways to apply data in everyday life.
WILLIAMS: I started tracking my temperature data so I didn’t get pregnant. I didn’t want to, like, take hormones. I didn’t want to be on birth control. I wanted to find a method that was accurate and effective. And so we learned the symptothermal method and, you know, as a statistician, the data geek in me was like, yes, I can do a 95-percent confidence interval around ovulation. Like, this is great, this is statistical. It just geeked me out.
As always, thanks for listening, and we’ll see you in two weeks.
* * *
People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. This episode was produced by Morgan Levey and mixed by Jasmin Klinger. Lyric Bowditch is our production associate. Our executive team is Neal Carruth, Gabriel Roth, and Stephen Dubner. Our theme music was composed by Luis Guerra. To listen ad-free, subscribe to Stitcher Premium. We can be reached at pima@freakonomics.com, that’s P-I-M-A@freakonomics.com. Thanks for listening.
KELLY: That murder happened just a few blocks from my house where I grew up.
LEVITT: Oh, is that right?
KELLY: John List was a neighbor of ours.
LEVITT: I literally only brought up John List because it was the most prominent example in my mind of someone who’s hard to Google. The fact that you knew the other John List, that’s unbelievable. That’s —
KELLY: I thought you were leading this up to that and I kept waiting for the punchline.
- Kevin Kelly, senior maverick and co-founder of Wired magazine and co-chair of The Long Now Foundation.
- Excellent Advice for Living: Wisdom I Wish I’d Known Earlier, by Kevin Kelly (2023).
- “Picture Limitless Creativity at Your Fingertips,” by Kevin Kelly (Wired, 2022).
- Vanishing Asia: Three Volume Set: West, Central, and East, by Kevin Kelly (2022).
- “Kevin Kelly: The Case for Optimism,” by Kevin Kelly (Warp News, 2021).
- “25 Years of WIRED Predictions: Why the Future Never Arrives,” by David Karpf (Wired, 2018).
- The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, by Kevin Kelly (2016).
- “That We Will Embrace the Reality of Progress,” by Kevin Kelly (Edge, 2007).
- “The Birth of a Network Nation,” by Kevin Kelly (New Age Journal, 1984).
- Free to Choose, TV series featuring Milton Friedman (1980).
- Whole Earth Catalog, edited by Stewart Brand (1968).
- Leaves of Grass, by Walt Whitman (1855).
- The 10,000-Year Clock, The Long Now Foundation.
- The Santa Fe Institute.
- “Rick Rubin on How to Make Something Great,” by People I (Mostly) Admire (2023).
- “Who Gives the Worst Advice?” by People I (Mostly) Admire (2022).
- “103 Pieces of Advice That May or May Not Work,” by Freakonomics Radio (2022).
- “68 Ways to Be Better at Life,” by Freakonomics Radio (2020).
- “The Future (Probably) Isn’t as Scary as You Think,” by Freakonomics Radio (2016).