Episode Transcript

Stephen DUBNER: Hey there. It’s Stephen Dubner. Today on the show, a rare occurrence and a welcome occurrence. We have got a bona fide guest host. This is a person whose name will be familiar to many of you: Adam Davidson. Adam — welcome!

Adam DAVIDSON: Thank you so much, Stephen. 

DUBNER: So Adam, for Freakonomics Radio listeners, you are almost certainly best-known for having — is it co-created and then hosted the N.P.R. show and podcast Planet Money? Is that correct? 

DAVIDSON: Yeah, I sort of had two careers. I had a career doing more human interest, more narrative stories for This American Life and a career doing very straight business stories. And with my buddy Alex Blumberg, who was at This American Life at the time, we thought, well, what if we put them together? What if we made peanut butter and chocolate? You may be the one other person on the planet who can fully identify with that thought. And so we first did this big hour about the housing crisis called “The Giant Pool of Money.” So that led to Planet Money, which Alex and I ran together for about five years. And then I eventually left for the New York Times and then later left for The New Yorker.

DUBNER: Planet Money, we should say, is still alive and very, very well. And you, Adam, as I mentioned, are coming in to guest-host this Freakonomics Radio episode, but not just this episode. This is a three-part series on, essentially, how to think about artificial intelligence. Is that about right? 

DAVIDSON: That is about right. Yeah. And actually, I was in my mind thinking of that “Giant Pool of Money” show we did so long ago, which is, if you remember back in 2008, all these things nobody had been thinking about — the mortgage market, subprime housing, interest rates, the Fed — suddenly it was this massive force that was going to — we didn’t know what it was going to do, but it seemed scary and big. And spending the time to just figure it out, like what is this thing? How can I think about it? How can I make it life-size enough that I can just engage it?

DUBNER: What would you say was the main thing or a main thing that you wanted most to understand about, let’s say the next year or two of A.I.?

DAVIDSON: I would say the fundamental question is, is this time different? Is this just the latest? Or is this a new kind of thing? Certainly in my life, I’m finding a few people are all in on A.I. A lot of people are saying, I don’t know, it seems creepy, I don’t want to have anything to do with it. And I would encourage people — it doesn’t mean you have to love it, doesn’t mean you have to hand your life over to it, but the more people who are involved in thinking about how it should be used, probably the better outcome. 

DUBNER: I would like to think that one good way to get more people engaged in it is to make a three-part series for the show. So I’m glad you did that. And most of all, I’m just so happy to have you playing on our team, so thanks for joining. 

DAVIDSON: Thank you. It was so much fun. I hope that comes across.

DUBNER: Thanks, Adam. 

*      *      *

The thing I want — the thing I’ve been searching for, for about a year now — should be simple. At least I think it should be. Like you, like everyone, I keep hearing about A.I., artificial intelligence, and I want to know how to think about it. I want a simple, clear, middle-of-the-road explanation: here’s the deal with A.I. Here’s how to use it. Here’s how not to use it.

But the problem is that the idea of A.I. inspires people to start talking about the future. In extreme ways. “A.I. is the most existential threat to humanity” — serious people say it will kill us all. But, other serious people, they say different things. They say that A.I. is ushering in a new age, maybe a better age, where humanity can achieve things never before dreamed of. It will eliminate disease and poverty, and allow us to live for centuries. I don’t know about you, but I find that my brain sort of shuts down when I hear these huge pronouncements. It will kill us all! No, it will bring about heaven on earth!

I’ve spent months now talking to as many smart people as I can find about A.I., and I learned a lot. The main thing, the big headline: nobody knows where A.I. is heading. That’s why there’s such a crazy range of predictions. As one expert told me, there are no experts yet. We’re still figuring this out. So, over the next three episodes, we’re going to take a little tour through the world of A.I. as it is now. We start, today, with the basics: what is A.I.? Why is everyone talking about it? How does it work? What can it do now? Not what might it do a decade from now. And crucially: what happens when we start asking it to do things we think of as distinctly human?

*      *      *

One major lesson I learned is that the big fears and the big hopes are not really about what we have today. They’re not about OpenAI’s ChatGPT or Google’s Bard. This current generation of A.I. — which, as we’ll learn, probably shouldn’t even be called A.I. — it’s not going to kill us. It’s more mundane than that. In fact, all the talk of existential threats and complete transformation is distracting us from the current reality, which is really quite interesting and also plenty confusing in itself.

Have you played around with ChatGPT or any of the other A.I. tools? I have; a lot. And I’m continuously struck by two experiences. One is that it can seem magical. I ask it to do something — write a sonnet about basketball; write an essay about the history of farming; whatever — and the A.I. generates words and sentences and full paragraphs and it seems impossible that some computer software is creating all that. But the other experience is that those words it generates are a bit off. They’re weird. No person would write them.

That has become my obsession — not just mine; it seems to have captured the world’s attention. Is A.I. becoming human? Or is it altogether something else? I wanted to try to get at that by asking a really simple question. Can ChatGPT be funny? Can it tell a good joke?

Lydia CHILTON: Almost. I don’t think it’s as good as people yet. 

That’s Lydia Chilton. She’s a professor of computer science at Columbia University.

CHILTON: All it knows how to do is, from a sequence of words, predict the next one. So if you say, tell me a knock-knock joke, what would you as a human being predict the next word would be? It would be “knock, knock, who’s there?” And how did you know that? Well, because you’ve heard it many, many times before. And you don’t even have to know what a knock-knock joke is to do that. You just follow the patterns.

DAVIDSON: Because the software that is behind ChatGPT, as I understand it, is not looking at words, it’s just looking at numbers. So “knock” would be translated into a number. “Joke” could be translated into a number, and then it’s just doing a bunch of math on when this number is near this number, then this other number comes up a lot? 

CHILTON: Computers at the end of the day really only know how to operate on zeros and ones. They add them together, they subtract them from each other. That’s all they do. But even with just zero and one, you have to figure out how to represent the number two. And with those numbers, I can also represent words. I can line all these up and actually say, if someone has typed A, what’s the most likely letter they’re going to type next? It’s like the dumbest thing you could possibly do, which is great, it’s one of my loves of computer science. You take something really complex and make it so simple that a computer could do it. 
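
A quick aside for the technically curious: the prediction idea is simple enough to sketch in a few lines of Python. This is nothing like ChatGPT’s actual machinery, just a toy showing that “predict the next word” can be done with nothing but counting:

    from collections import Counter, defaultdict

    def train_bigram_model(text):
        """Count, for each word, which words tend to follow it."""
        words = text.lower().split()
        following = defaultdict(Counter)
        for current, nxt in zip(words, words[1:]):
            following[current][nxt] += 1
        return following

    def predict_next(model, word):
        """Return the most frequently observed next word, if any."""
        counts = model.get(word.lower())
        return counts.most_common(1)[0][0] if counts else None

    # "Train" on two knock-knock jokes; the pattern emerges from counts alone.
    corpus = ("knock knock who's there lettuce lettuce who "
              "let us in it's cold out here "
              "knock knock who's there boo boo who don't cry it's only a joke")
    model = train_bigram_model(corpus)
    print(predict_next(model, "who's"))  # -> "there"

Scale that counting up by billions of examples, and swap the raw counts for learned weights, and you have the family of systems Chilton is describing.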

Imagine setting out to write the rules of being funny. You could probably just about do it with knock-knock jokes. Rule 1: You say, “Knock, knock.” Rule 2: The other person says, “Who’s there?” Rule 3: You say a word that kinda sounds like another word. You see where this is going. But when’s the last time you actually laughed at a knock-knock joke? If you can write clear rules for how to generate a joke — it’s probably not a very good joke.

That is, essentially, why Lydia Chilton gave up on A.I. the first time she looked at it, which was about 15 years ago. When she was in graduate school, she tried out the cutting edge of A.I. at the time. It now goes by a phrase I love: GOFAI — that’s G-O-F-A-I — Good Old-Fashioned A.I. You can think of it as rules-based A.I. Coming up with super-complicated rules to achieve some outcome.

CHILTON: So GOFAI, good old-fashioned A.I., is just all the things that we did pretty much before the internet. GOFAI had this vision of allowing computers to see. And sort of the method was, let’s take a picture of a human face and break it up into features. Here’s the one eyeball, another eyeball, nose, a mouth, and then test that against all the other eyeballs that are in the database to identify: This is Adam. This is Lydia. This is Barack Obama.

This is the classic model of a computer program: you give the computer a series of rules, and it follows the rules in sequence and spits out a result at the end. To GOFAI, recognizing a face, telling a joke, diagnosing a disease, coming up with the fastest route to your mom’s house, whatever task you have in mind — it’s all just a series of really complicated rules. So, the A.I. researchers would try to write more and more complicated lists of rules.
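
To see why that approach hits a wall, here is the knock-knock “joke program” written the GOFAI way, as a toy sketch. Every pun it will ever tell has to be typed in by a programmer; step outside the rules and it simply fails:

    # A toy rules-based joke program. All of the "humor" is hand-coded.
    PUNS = {
        "lettuce": "Lettuce in, it's cold out here!",
        "boo": "Don't cry, it's only a joke.",
    }

    def knock_knock(name):
        """Follow the fixed rule sequence: knock knock -> who's there -> pun."""
        if name not in PUNS:
            raise ValueError("No rule for this word -- the program is stuck.")
        return [
            "Knock, knock.",
            "Who's there?",
            name.capitalize() + ".",
            name.capitalize() + " who?",
            PUNS[name],
        ]

    for line in knock_knock("lettuce"):
        print(line)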

CHILTON: And that just didn’t work. At least not very well. There’s two reasons. The computers just weren’t powerful enough. Turns out this does kind of work, but you just need a lot of examples of what eyeballs look like and what everyone’s eyeballs look like to make that work. And unless someone’s going to sit there and type in everybody’s eyeballs, it’s just not going to happen. So it’s really like it was a good idea, but the scale wasn’t there. 

Of course it didn’t work. The human brain evolved to work in a way quite different from GOFAI’s long list of sequential rules. Our brains don’t start with a bunch of rules. They start by taking in sights and sounds and smells and the rest — and then they build connections among neurons which prepare that brain to interact with the world it finds itself in.

CHILTON: It’s sort of this illusion that computer scientists were under that if I write down enough rules, I can describe a cat, or a table or anything. But it turns out it’s really hard to write down those rules. You try “table,” you know, it’s got a flat bit and then some legs. Four legs. Oh, but some of them have two legs. Oh, but some tables fold. And so then they have no legs. And the world just doesn’t break down in this categorical sense. And guess what? That’s not how people learn either. We just bumble around as newborns and toddlers and see a bunch of stuff and kind of figure it out. And those toddlers have a lot of data, and so if computers could have that data or even much, much, much more, maybe they’ll just figure it out on their own with the right information architecture, which is neural networks, it’s just taking in all this data and trying to predict, is that a table? Is that a table? And it doesn’t have to conform to hard rules.

The reason you are hearing all about A.I. now, the reason it is getting so much attention, is that A.I. researchers shifted from Good Old-Fashioned A.I. — the long list of rules — to what Chilton just mentioned: neural networks designed to be more like the human brain. The A.I. software is made up of a huge network of nodes designed to simulate the brain’s neurons. You feed this A.I. tons of data and let it form the connections.
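
Here is that idea in miniature: a tiny network sketched in Python, with a couple dozen numbers standing in for billions. Notice that no rule anywhere in the code says what the task is; the connection weights just get nudged until the network’s predictions match the examples it is fed:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # example inputs
    y = np.array([[0.], [1.], [1.], [0.]])                  # desired outputs (XOR)

    W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))  # the "connections"
    W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20_000):                 # repeat: predict, compare, adjust
        hidden = sigmoid(X @ W1 + b1)       # inputs -> simulated "neurons"
        output = sigmoid(hidden @ W2 + b2)  # neurons -> the network's guess
        grad_out = (output - y) * output * (1 - output)
        grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
        W2 -= 0.5 * hidden.T @ grad_out     # strengthen or weaken connections
        b2 -= 0.5 * grad_out.sum(0, keepdims=True)
        W1 -= 0.5 * X.T @ grad_hid
        b1 -= 0.5 * grad_hid.sum(0, keepdims=True)

    print(output.round(2))  # should land near [[0], [1], [1], [0]]

Nothing in that code defines XOR; the behavior is learned from the four examples, which is the whole shift away from GOFAI.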

Interestingly, this neural network approach has been around for a long time. It was first proposed in 1943 by two researchers at universities in Chicago — a neurologist and a logician. But it wasn’t until pretty recently that computers were fast enough, with enough memory, that those neural networks fully took off as a powerful tool.

Also, for decades, researchers had a problem: a neural network needs a ton of data. If you want it to be able to identify a table, you need to show it a lot of tables. If you want it to predict how human beings communicate, you need a lot of examples of human beings communicating. And you need those examples to be in a form that computers can read. And for most of the twentieth century, there was just not that much stuff.

CHILTON: Then the internet happened, and people just started dumping information. There’s, like, probably 100,000 photos of me even on the internet, and of everyone. We all just gave away all our personal information. And so this amassing of data, not just of facts, but of people and personal experiences and thoughts, really created the trove of information that we needed to train these algorithms, rather than trying to engineer rules and figure it out, because there’s just too many rules.

In all that information we dumped on the internet — all those blog posts and Instagram stories and angry comments, as well as movie scripts and just about every book ever — we gave neural networks a ton of examples of our faces and our experiences and our thoughts. Back to humor: if you’re a computer connected to the internet, it’s very easy to find examples of people being funny, and of people trying to be funny and then being told whether or not they actually are all that funny. So, after giving up on rules-based A.I., Lydia Chilton decided to give neural network-based A.I. a chance. Because she had this obsession: what makes something funny? And can I make a computer be funny?

CHILTON: Well, I’ll be honest. One thing you get to do in computer science is overanalyze things that you find fascinating but are not good at. And that was me. It’s a power to be able to tell jokes, good jokes.

DAVIDSON: And did you not feel like you were good at it? 

CHILTON: No. I would say most of my humor is a little bit unintentional. I would say certainly for myself and maybe other computer scientists feel like understanding people is a real challenge. For me, it does not come naturally. And so I like studying it so I can understand these things, so I can feel like a normal human that understands other people. And humor is a big part of that, and always just felt like this nut that I could crack.

We know computers can do math well; we know they can store a ton of data. But humor — making another person sincerely laugh out loud — feels so human.

CHILTON: People have this intuition that a computer can’t be funny because it doesn’t have emotions. And that is a challenge. But there are actually ways that A.I. can get around that. The main way it gets around that is by simulating those emotions. But we all simulate emotions as well. You can do it without feeling it; so can a machine. And it learns from patterns, just like you did. 

DAVIDSON: But the best humor is really surprising. That’s the fun of the humor, like you never would have thought that person would have said that. So is that also just following rules? 

CHILTON: There’s this sort of myth out there that creativity is somehow magic, and jokes are one of the most creative things. They just come out of nowhere and they don’t follow patterns. And it’s really hard even for a person to do; unless you’re Mozart, Picasso, Shakespeare, Einstein, someone like that, you’re not going to come up with something super creative. But it turns out that creativity is not that hard. It’s just a lot of hard work. And you always lean on patterns. The trick is that humor has this structure beneath the surface, like a plot, like a chord progression. But what it really is, is violating expectations in a very particular way.

Chilton and her collaborators did eventually get a computer to make up a joke — not a great joke, but a joke. Here’s how they did it: They focused on the American Voices section of The Onion, the humor website. In American Voices they take some topic from the news, and then have a few fake person-on-the-street reactions. It’s a classic set-up/punchline. But here there’s one set-up, and three punchlines.

This is great for a computer-science researcher. American Voices, which was originally called “What Do You Think?,” has been around since at least the mid-90s. So: 30 years, 50 setups a year, three to six punchlines. That’s thousands and thousands of jokes with exactly the same structure. All that data allowed Lydia Chilton to come up with a series of 20 steps — she calls them microtasks — that a writer goes through to make a joke. For instance, if you are given a headline, first identify all of the elements. In her paper, she looks at a headline that says “Justin Bieber Baptized in New York City Bathtub.” Task 1 would be to identify four elements: there’s Justin Bieber, there’s baptism, there’s New York City, and there’s a bathtub. Then, Task 2 would be to figure out what people would normally expect from such a headline. And then, this is where the humor comes in, you subvert that expectation.
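
Chilton’s actual pipeline runs to about 20 microtasks; the compressed sketch below is only meant to show the shape of the decomposition. The function names and prompts are invented for illustration, and ask_model() is a stand-in for whoever performs each step — a crowd worker in the early studies, a language model later:

    def ask_model(prompt):
        """Stand-in for whoever performs a microtask. Here it just echoes."""
        return f"<answer to: {prompt}>"

    def joke_pipeline(headline):
        # Task 1: identify the elements of the headline.
        elements = ask_model(f"List the distinct elements in: {headline}")
        # Task 2: state what a reader would normally expect from them.
        expectation = ask_model(f"What would people expect, given {elements}?")
        # The humor step: subvert that expectation in a single line.
        return ask_model(f"Write a one-line reaction that subverts: {expectation}")

    print(joke_pipeline("Justin Bieber Baptized in New York City Bathtub"))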

So, with all of her structure and all of that data, how does A.I. do? Chilton tried one for us.

CHILTON: Okay, I like these because I have toddlers. So the real headline is “Ten-year-olds found working at McDonald’s until 2 a.m.” A.I. says, “Talk about commitment. I can’t even get my ten-year-old to finish their vegetables.” Another one, “Well, that explains the finger painting in my Big Mac box last night.” Another one: “Finally, a solution to the never-ending debate of homework versus real-world experience.”

Eh. The punchlines are not great. A.I. seems to understand the overall structure of a setup/punchline kind of joke. But it’s struggling to make those punchlines actually funny, actually work as jokes. Or even always make sense. Though, to be fair, a lot of us human beings struggle with that. Which made me wonder how this looks from the perspective of an actual funny person.

Michael SCHUR: My name is Michael Schur. I am a television writer and producer based in Los Angeles.

Michael Schur is one of our era’s most prolific and successful creators of TV comedies. On his own or with others, he created Parks and Recreation and The Good Place. He was a major force behind Brooklyn Nine-Nine and the American version of The Office, and is an executive producer of the show Hacks. I love all these shows. I think I’ve seen every episode of television Michael Schur has had anything to do with. At the moment, he’s on the negotiating committee for the Writers Guild of America in their ongoing strike. He told me he has dabbled in ChatGPT.

SCHUR: I will say that I am generally averse to it, because I know that by playing around with it, you’re helping it learn stuff, to some extent. And as a longtime fan of science fiction writing, I don’t want to contribute in any way to the rapid advancement of these tools. So I have tended to shy away.

DAVIDSON: I mean, that’s actually an interesting moral question, like as someone with some influence in your industry, I’d certainly understand the position of, let me not participate, let me not encourage it, which makes a lot of sense. But on the other hand, maybe, let me understand what this thing can do so I can better represent my community or something like that? Do you see a tension there? 

SCHUR: Yeah, I do. And I do understand that at some level, both as a writer and as a member of the negotiating committee for the W.G.A., it is probably part of my job description to understand these things, know how they work, play around with them, that sort of thing. But I’ve also — I think I get it, you know? And I kind of don’t want to encourage it. 

Schur’s approach to A.I. — to not use it, to hope it goes away — made me think of the Luddites, the movement of British textile workers who smashed factory machinery in the early 1800s, because, of course, they’ve become the go-to historical analogy for anyone who resists a new technology.

But the real story of the Luddites is a bit more complex: The Luddite movement was made up of highly skilled textile workers, the very people who were most familiar with the new industrial technology. They weren’t against the machines. They were against the way the factory owners were using those machines. Factories were using the technology to make inferior products — and, in the process, destroying the pipeline of skilled textile workers. And in that sense, Michael Schur is almost exactly a Luddite. He is not so much worried about the technology itself. He is worried about how industry will use that technology to weaken the power of writers. He is also worried that those studio execs don’t even realize that they are being self-defeating. If they damage the current comedy-writing ecosystem, they might find themselves without anyone who knows how to be funny, professionally and reliably. Let’s take Schur, for example. He got a job at Saturday Night Live when he was fresh out of college.

SCHUR: I was extremely bad at the job for a good long time, and by all rights should have been fired. But I eventually figured it out through observation. The head writers at that time were Tina Fey and Adam McKay, and I had good friends who worked in the show, Dennis McNicholas and Robert Carlock, and I just decided to be a sponge. I just decided to say like, okay, I’m going to watch these folks. I became a forensic scientist. I would look at their sketches and I would break them down and I would try to understand what made them good and what made them successful. And eventually, through a combination of observation and genuine mentorship, I kind of got to the point where I could do the job.

Schur and his fellow W.G.A. members are striking right now for a bunch of reasons, but one big one is A.I. Specifically, the writers don’t want studio executives to be able to use A.I. to supplant writers as the creator of a new idea for a movie or TV show. The way Hollywood works is that writers have the most power and make the most money when they generate original ideas, like Schur did with The Good Place. What Schur and the W.G.A. fear is that executives will ask A.I. to generate a bunch of ideas for TV shows and movies and then hire writers to flesh those ideas out into scripts. There is no A.I. program that can actually write a ready-to-shoot full script, at least not yet. But A.I. can generate a ton of ideas, and at least some of them might be usable. If A.I. creates the original idea then the writer is just a hired gun — which means that more of the rewards of the show’s success accrue to the studio.

SCHUR: The thing that we’re fighting for here, very simply, is the concept of writing being a viable career. It’s never been remotely this hard for young writers to move to L.A. or New York and begin a career and then sustain that career. And I have watched as what was already a difficult path has become nearly impossible. And that is essentially why we are fighting this fight, because if it doesn’t change, if we can’t make it more sustainable it’s going to stop. People will just decide that writing falls into the same category as being a professional basketball player. Like, I love basketball, but I’m not making the pros, so there’s no point. And that would be a real shame. We would lose out on a lot of great stories and a lot of great brains and hearts and souls of people who have something to say.

For Michael Schur and the W.G.A., this is existential. It would mean a near-total collapse of the career of writing for movies and TV shows. And that fear is about the current generation of A.I. — the one that cannot yet write a full script. A.I., of course, is getting better all the time.

SCHUR: My fear is that even if these machines and programs only ever get really, really good at doing the thing that they do, which is predictive text, that they will still at some point with enough data and with enough computing power, get to the point where they could accidentally stumble into something that might look enough like a genuine human idea that people wouldn’t really care one way or the other. And that’s what honestly worries me, is the idea that it will be so good at imitating or predicting based on its vast reservoir of existing knowledge that people won’t really be able to tell the difference when it generates whatever it generates.

Obviously, if you write for a living, the idea of A.I. writing as well as you is pretty worrisome. But what about the rest of us, who don’t write TV shows but do consume them? What would it mean if — as Schur says — A.I. just keeps getting better at predicting things?

SCHUR: That’s the thing that keeps me up at night and haunts me and makes me feel like there’s something very, very dangerous that is right around the corner.

That’s coming up.

*      *      *

I am on a journey to figure out how I should think and feel about A.I. and its place in our society — a way that doesn’t have the panic or the excitement cranked up to 11. So, I knew who to call.

Joshua GANS: I’m Joshua Gans. I’m a professor of strategic management at the University of Toronto, and I guess I’m an economist for a living. 

I’ve been turning to Joshua Gans for years for exactly this sort of thing. There is some exciting new trend, and everyone is freaking out. What’s a calm, grounded way to understand it? Joshua Gans will know.

GANS: I call that process de-sexification.

DAVIDSON: Meaning like we’re taking something really exciting and brand new and how can we make it boring and predictable and like a lot of other things? 

GANS: Exactly. That’s my mission in life. 

Gans has co-written two books on A.I.: Prediction Machines in 2018, and Power and Prediction in 2022. He also runs a program at the National Bureau of Economic Research on A.I., through which he’s written and edited a ton of smart papers on the subject. Nearly everything he writes includes that word: “prediction.” He says the best way to understand the economics of A.I. is to think of it as a process that reduces the cost of prediction.

GANS: And what is prediction? Prediction is taking information that you have and turning it into information that you need. For instance, when we predict the weather, we’re taking information of historical weather trends and other things going along at the moment, and we use it to turn it into information we need, which is a forecast. Not to say that these predictions are perfect. They’re just better than what we have to make decisions in their absence. But the big leap was turning things that we didn’t normally think of as a prediction problem, realizing they were a prediction problem, and then applying these new methods of statistics to solve it. 

Let’s step back a moment. Earlier, I mentioned that artificial intelligence is not the right term for the current generation of what we have all come to call A.I. The word “intelligence” suggests that there is some active process of thought. But that is not what ChatGPT or any such program is doing. All it is doing is taking in information and using a lot of mathematics to predict what information comes next. The pros call it “machine learning.” There are no words or pictures or sounds, there are only numbers. Words are turned into numbers. Pictures into very long numbers. Sounds into numbers. And then the A.I. does math. It’s not even very complicated math — each step is fairly straightforward. It’s just that the software does a lot of it, a lot of linear algebra, over and over again. So, after being trained on a ton of joke setups from The Onion, say, the A.I. can use math to more accurately predict what is likely to come next in the punchline.
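
Here is that claim in miniature, with a four-word vocabulary, made-up dimensions, and untrained random weights. The point is only the shape of the computation: words go in as numbers, matrices get multiplied over and over, and out comes a score for every possible next word:

    import numpy as np

    vocab = {"knock": 0, "who's": 1, "there": 2, "joke": 3}  # words -> numbers
    rng = np.random.default_rng(1)
    embed = rng.normal(size=(4, 8))      # each word becomes a vector of numbers
    layers = [rng.normal(size=(8, 8)) for _ in range(6)]  # stacked weight matrices
    unembed = rng.normal(size=(8, 4))    # vector -> a score for every word

    x = embed[vocab["knock"]]            # "knock" is now just eight numbers
    for W in layers:                     # the same simple step, over and over
        x = np.maximum(0, x @ W)         # multiply, then a simple nonlinearity
    scores = x @ unembed

    next_word = max(vocab, key=lambda w: scores[vocab[w]])
    print(next_word)  # whatever these random, untrained weights happen to favor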

So let’s get back to writing. In December of last year, the Harvard Business Review asked Gans and his book co-authors to write an essay about ChatGPT. The team got together and hashed out some big ideas and some key insights they wanted to put in the essay. And then Gans was given the task of turning those rough notes into an actual finished work.

GANS: What I did instead is I looked at it and said, ah, I wonder what happens if I just put the notes that we have into ChatGPT and say, ‘Write a 700-word piece describing these things at the level of an MBA student in terms of reading and terminology.’

DAVIDSON: So pretty low. 

GANS: Pretty low. Exactly. And so I did that, pressed enter, and out popped exactly 700 words. We did some light editing, so I’d say about 10 percent of it was altered, and off it went to the Harvard Business Review, and people read it and found it interesting. We put a note at the bottom saying we’d used ChatGPT for this purpose; because it was so new, that seemed appropriate to do. And so you look at that and you say, “Oh, well, why was I even necessary?” And it’s true, I saved myself an hour’s worth of time doing something that we normally call writing. But let’s think about that whole task. What really happened? The task of writing was now decomposed into three things: the prompt, the actual physical churning out of the words, and then the sign-off at the end. And then when you step back from that and say, “What was the important part of this that makes it worthwhile to read?” It’s not the writing in the middle. It’s the prompt and it’s the sign-off at the end. It is not that all of a sudden you can’t write or what you’d done is not valuable. What that means is that anybody, even if they can’t string a few words together, can prompt ChatGPT to churn out their thoughts and then read it and sign off on it. There’s this potential for a great explosion in the number of people who can participate in written activity. And that’s the change that’s going to come from this. 
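
Gans’s three-step decomposition maps neatly onto a short script. The sketch below uses OpenAI’s public chat-completions web endpoint; the model name, the notes, and the prompt wording are placeholders for illustration, not what Gans actually ran:

    import os
    import requests

    notes = """- prediction machines lower the cost of prediction
    - ChatGPT shifts value from drafting to prompting and judging
    - implications for managers"""

    # Step 1: the prompt -- the part Gans says carried the real value.
    prompt = ("Write a 700-word piece describing these notes at the level "
              "of an MBA student:\n" + notes)

    # Step 2: the machine churns out the words.
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini",
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    draft = resp.json()["choices"][0]["message"]["content"]

    # Step 3: the sign-off -- a human reads, edits, and takes responsibility.
    print(draft)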

Because, to be fair, a lot of human-generated, everyday communication is not great. Think of PowerPoint presentations you’ve sat through, or memos from your colleagues, or the instructions to some new gizmo you bought. We are inundated with communication that doesn’t meet the basic standard of being clear and comprehensible. To a professional writer, A.I. that is good at writing sounds like a threat. But to a lot of other folks — people who have to communicate but aren’t great at it — A.I. might be a solution.

Joshua Gans made me feel a bit calmer about A.I., a bit more settled. I can see why some people are afraid of it and others like it. But then I remembered a part of my conversation with Mike Schur.

SCHUR: That’s what honestly worries me, is the idea that it won’t actually be creating a new idea, but it will be so good at imitating or predicting based on its vast reservoir of existing knowledge that people won’t really be able to tell the difference.

By its nature, A.I. is backwards-looking. It looks at whatever it is fed and then it uses that stuff to make predictions. So what happens if most of the writing we have was produced by A.I., and that A.I. is then trained on all that A.I.-written stuff to write more stuff? If our TV shows and movies and essays and articles are all created by A.I. and then are used to train A.I. to write more of the same? Think of the funniest thing you’ve ever seen. Your favorite book or movie or TV show. That thing that surprised you, that came out of left field and just blew you away. For me, I instantly think of Monty Python, or watching Spinal Tap. Or seeing Ali G and the U.K. version of The Office for the first time, and the movie Step Brothers. Your list may be different. But you have one, right?

SCHUR: When you’re talking about the relationship that audiences have to the art form, what you’re really talking about is, can you reach through the screen and grab someone by the lapels of their jacket and shake them a little bit and make them see the world differently or make them understand themselves differently? And the A.I. piece of this, to me, is giving up on that concept. It’s saying that’s not the goal anymore. If we go down that road, I don’t think we can ever come back. I don’t think that there will ever be space for the better version of the art form to break through because the world would be so cluttered with garbage and dreck and the slurry of other shows and movies that has just run off into a processing machine and been spit back out in a new shape and form, that there won’t be any room for the good stuff. That’s the thing that keeps me up at night and haunts me and makes me feel like there’s something very, very dangerous that is right around the corner from where we’re standing right now. 

DAVIDSON: You’re supposed to be the funny guy.

SCHUR: Well, there’s nothing funny about this. That’s the problem, man. You know, you think I want to be walking in circles for four hours a day and talking about the death of the art form?

Much like the Luddites, who saw a flood of inferior, machine-made textiles replace the higher-quality, more expensive stuff made by hand, Schur pictures a world of A.I.-driven dreck. Middle-of-the-road stuff, produced by a prediction machine. A machine that predicts the most likely-to-satisfy answer. Not the single, very best, most amazing thing. No. The average. The middle of the road. So, yes, Michael Schur is right. If all of our comedy was written by A.I., we would probably only have what I think young people call mid: middle-of-the-road, derivative comedy. And, let’s be honest, a lot of human-written comedy is pretty derivative. Pretty middle-of-the-road. But people, at least some people, do want that grab-you-by-the-lapels experience, that new thing that is fundamentally unlike anything that came before. For now, that requires human beings.

Okay, so: if creativity is what human beings can offer that A.I. can’t fully replace, it’s pretty important to our economic future. In which case, we should probably know what creativity is. Which is easier said than done.

Daniel GROSS: There has been, in my view, in the economics literature, kind of an abstraction away from the individual and that individual act of creativity.

That’s coming up.

*      *      *

Economists sometimes have a hard time talking about creativity. One exception is Dan Gross from Duke University’s Fuqua School of Business.

GROSS: It’s this ephemeral thing. There isn’t a broad consensus on what this even is, let alone what a good way to measure it would be.

You want to get an economist excited — tell them there is some vague thing that can’t be measured. They’ll obsess over how to measure it. Creativity is a deep issue for economics. As you’ve heard on this show many times, economic growth — where more people have more of their needs met — most often comes from innovation, from the output of creativity. That could mean a new technology, or a new TV show. They’re both bringing something into the world that wasn’t there before.

Some societies and some moments in history produce a lot more creativity than others. Economists want to understand that. So they look at the kinds of things economists pay attention to: property rights, population density, interest rates. They don’t usually look much at individual people.

GROSS: Partly this is because of the tools that are available and the data that are available. There has been, in my view, in the economics literature, kind of an abstraction away from the individual and that individual act of creativity. And that’s what I decided I wanted to try to get a little bit more insight into.

Gross happened upon something economists love: a natural experiment. A real thing happening in the world that would generate the data he needs. Not something I would have thought of: online logo design competitions.

GROSS: This work that I did in graduate school, it was studying how competition affects creative production. And in particular, it was examining design competitions where you have individual designers who are competing for a fixed prize that has been posted by a sponsor. Typically a small business that’s in need of a logo.

DAVIDSON: So I’ve done these, by the way. It’s kind of awesome. I had a small podcast production company and we just went on this site and explained what we wanted, and suddenly we had hundreds and hundreds of options.

GROSS: And so let me tell you how this really worked in the setting that I studied. The principal mode of feedback was one- to five-star ratings. So, this design got three stars, this one got one star. 

The designers can see the ratings on their own work. They can’t see what ratings have been given to specific designs by other people, but they can see the overall distribution of ratings.

GROSS: They can see, okay, you know, somebody out there seems to have a winning idea because there’s a five-star floating out there somewhere, and then they can think about what that means for them. 

What do you do when you get a three-star rating for your design, and you know someone else has five stars? You know you’re not getting the gig, you’re not winning the award, unless you do something different. Do you go for broke, try something wild? Do you get even more conservative and do something classical but boring? Or do you just quit? Gross was able to peer into this carefully controlled space to get a real sense of how people respond.

GROSS: What I’ve found here is that when a designer gets their first five-star rating, they’ll really transition from trying out different ideas to just iterating on the one that you’ve rated highly. And that’s especially the case if they don’t have any high-rated competition that they’re aware of. On the other hand, if they’re aware that there is other high-rated competition they’ll then be induced to actually revert back to experimenting a little bit more.

DAVIDSON: So with competition, creativity goes up. But you know, spoiler alert, I did read the paper, so I know there’s another part of this story. 

GROSS: Let me tell you about the twist here. As a contest gets very crowded, so if there are a lot of high-performing competitors, these individual designers, their incentive to keep investing more effort, to keep putting more in, to trying to make their designs better, that starts to go down. Because essentially in a crowded field, it becomes a bit more of a lottery. The chances that that incremental effort are going to really yield some return for them, they really shrink to zero because you have a lot of other good contenders out there. The odds that you’re going to slip by them start to become smaller and smaller, even if you have a good idea. And so crowded competitions actually discourage effort. They might actually drive these designers to just stop participating.

What Gross is saying is that there’s this magical Goldilocks zone, where there’s just enough competition to get the largest number of people to step up and do more creative work.

GROSS: The story of the paper in a nutshell, is that too little competition, you don’t get a lot of variation. Too much competition, you don’t get a lot of effort. And it’s somewhere in the middle where at least incentives to be creative, to produce novel work, to just come up with new stuff seem to be the highest. 
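
As a toy illustration only, the paper’s qualitative findings can be written down as a decision rule. The thresholds below are invented; Gross estimates these effects from the ratings data rather than hard-coding them:

    def designer_strategy(my_best_rating, rivals_best_rating, n_strong_rivals):
        """A caricature of the behavior Gross documents."""
        if n_strong_rivals >= 8:
            # Crowded field: winning becomes a lottery, so effort stops paying.
            return "drop out / stop investing effort"
        if my_best_rating == 5 and rivals_best_rating < 5:
            # Unchallenged front-runner: iterate, don't explore.
            return "tweak the top-rated design (low variation)"
        if my_best_rating == 5 and rivals_best_rating == 5:
            # Visible high-rated competition pushes exploration back up.
            return "experiment with genuinely new ideas"
        return "keep trying different concepts"

    for rivals in [0, 1, 3, 12]:
        print(rivals, "strong rivals ->",
              designer_strategy(5, 5 if rivals else 3, rivals))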

A.I. means that there will be essentially infinite competition for that creative middle of the road. A.I. can produce so much work in that space that it probably makes sense for people who can only create middle-of-the road work to bow out, let the A.I. do it. But Gross’s study contains a warning: when the contest gets crowded, it’s not just the middle-of-the-road folks who stop competing. It’s everyone.

One big question: Will A.I. always be stuck in the middle of the road? Or can it generate new ideas, new forms of writing, new ways of creating art or telling jokes? Is A.I. fundamentally different from us, or is it just early on its journey?

CHILTON: There are some similarities between what people do and what computers do. 

That’s computer scientist Lydia Chilton again.

CHILTON: Certainly both rely heavily on examples. The more examples you look at and analyze, you usually get better at your craft. No one is born knowing how to do these things. We’re all learning from examples. The computer really is trying to simulate aspects of human experience, but there are some things like if you can’t actually feel it, you don’t have what we call ground truth data. You don’t know what’s real. You’re only seeing part of it. You’re only sort of guessing. And I think we’ve all been in experiences where like “I don’t really know what’s going on, but I can kind of guess.” And so that’s what the computer is. It’s just guessing, but it’s seen enough data that it can guess correctly often enough.

DAVIDSON: I feel like I want humans to win and — I would love it if you said there’s something fundamentally human that computers will never be able to do.

CHILTON: It’s hard for me to separate what I think will happen from what I want to happen, and nobody knows. Here’s what I want. What I really want is to show people that these things like creativity that we think are mysteries: it’s not a mystery. You can do it. Now, in this process, if I have accidentally enabled the machine or helped the machine in any way to do better than people, I’d be like, “Oh shit, maybe I shouldn’t have done that.” It’s a classic computer-science thing: we’re so excited about showing that the computer can do it, maybe we should have thought about whether it should do it.

DAVIDSON: Your job is not really fundamentally to figure out what are the implications. Your job is to advance science and to teach science, right?

CHILTON: That’s what I’m good at. I’d say I’m not that great at the other thing. 

DAVIDSON: At thinking through the implications. 

CHILTON: I try, but I have to admit I get a little bit stuck. I’m so caught up in the idea of understanding this process, and I do really think — it’s hard for me to think of, okay, computers can make jokes, like, what comes next?

The question about humor is really a question about humanity. Are there things — valuable, important things — that only humans are able to do? If there are, then the answer is clear: people can thrive so long as they focus on the human stuff; let A.I. do whatever it is that A.I. can do. But if we learn that there are no things, or very few things, that humans can do better than A.I., then our position is a lot more confusing. What is our role in a world where we’re not needed? We’re not there now. That’s not today’s issue. But it could come soon.

Ajeya COTRA: GPT-2 was roughly the size of a honeybee’s brain and it was already able to do some interesting stuff. Now I think GPT-4 is roughly the size of a squirrel’s brain, last I checked. So we’ve moved from honeybee to squirrel, and I was trying to forecast when would it become affordable to train the human brain?

How long will that take? And what will it mean for humans like you and me? Next week on Part 2 of our series, “How To Think About A.I.,” we’ll answer those big questions, and a few others, including: is A.I. coming for your job? And if so, what can you do about it?

*      *      *

Freakonomics Radio is produced by Stitcher and Renbud Radio. This episode was produced by Julie Kanfer and mixed by Eleanor Osborne, Greg Rippin, Jasmin Klinger, and Jeremy Johnston. We also had help this week from Daniel Moritz-Rabson. Our staff also includes Alina Kulman, Daria Klenert, Elsa Hernandez, Gabriel Roth, Lyric Bowditch, Morgan Levey, Neal Carruth, Rebecca Lee Douglas, Ryan Kelley, Sarah Lilley, and Zack Lapinski. Our theme song is “Mr. Fortune,” by the Hitchhikers; all the other music was composed by Luis Guerra.

Sources

  • Lydia Chilton, professor of computer science at Columbia University.
  • Joshua Gans, professor of strategic management at the University of Toronto.
  • Daniel Gross, professor of strategy at Duke University’s Fuqua School of Business.
  • Michael Schur, television writer and producer.
