My guest today, Fei-Fei Li, is a professor of computer science at Stanford University. Her early research led to massive advances in computer vision, one of the most important subfields of artificial intelligence.
LI: Not too many people were betting on what I was doing for sure.
Welcome to People I (Mostly) Admire, with Steve Levitt.
Fei-Fei Li continues to produce breakthroughs in computer vision research, but as a co-director of Stanford’s Human Centered A.I. Institute, she now spends much of her time trying to tame A.I., working to steer the technology in directions that will benefit rather than harm society.
* * *
LEVITT: Artificial intelligence is all the rage these days, but that is such a contrast to the early 1990s when I was in graduate school. I remember the consensus at that time, at least in the circles that I ran in, was that a bunch of the smartest people on the planet had wasted a lot of time back in the 1960s and ‘70s and ‘80s on A.I. And if I had said I wanted to pursue A.I., everyone around me would have tried to talk me out of it. So you’re probably lucky to be younger than me, so that by the time you came of age, there was more optimism about A.I. But did people try to talk you out of that career path as well?
LI: You know, here’s the truth. I didn’t ask people. So I started my Ph.D. in the year 2000, and I didn’t start it for artificial intelligence. I started in computer vision and machine learning, and then I entered grad school and realized, “Oh, there is another name for this field, called A.I., and I like it, so here it goes.”
LEVITT: I think it’s probably fair to say that some of A.I.’s greatest successes and practical applications to date have been in the area of computer vision, and in no small part because of your own contributions. And one of the many things I loved about a book that you just wrote — it’s entitled, The Worlds I See — is how beautifully you lead the reader through your field’s many dead ends and struggles, which is unusual. So often we hear only about the triumphs, the AlphaZeros, the ChatGPTs — not the many, many failures. I’d love to hear you talk about some of those struggles. Could you paint a quick picture of the state of the art in computer vision just before ImageNet came to be?
LI: So my field, computer vision, simply put, deals with seeing. You know, we walk around the world as visually intelligent animals. We not only see the colors and the shades and the shapes, but we really see meaning, right? You walk around and you see, “There’s a cat, there’s a bird, here’s my computer,” and “Where’s my key? I have to look for it.” And beyond seeing meanings, you do things because you see. You make an omelet in the morning, you drive around. So seeing is deeply, deeply embedded in intelligence. That’s what my field is about. Before 2007, computer vision was a very niche field within A.I., trying to solve some of the more fundamental problems, such as seeing things. I remember the whole world was working on a data set of 20 different objects — cows, airplanes, beer bottles, I remember.
LEVITT: And then the challenge would be: could you write a program that, when shown an object, having been trained on cows and beer bottles, could tell you whether it was a cow or a beer bottle? Is that the problem that you’re trying to solve?
LI: Exactly. Take a picture, Steve, upload it to the computer, and ask, “Is there a cow in it?” It can say, “There is a cow.” Sometimes if you push it further, it can even say, “Well, here’s the box that the cow is in.” And that’s the problem. It is a fundamental problem that nature has solved. That’s why we’re such powerful seeing animals. But we were far from solving it in the world of computers in the early days of the 21st century.
LEVITT: So what you and other researchers would do is take these data sets that had 20 different objects and try to program up algorithms that, trained on those data, could take a new, out-of-sample picture and do their best to identify one of those objects. But the recognition among researchers was: look, we’re just getting started. We can’t do this nearly as well as humans. We’re just learning how to take baby steps. Is that a fair assessment of what people would have said at that time about computer vision?
LI: Yes and no, believe it or not. On one hand, we knew it was far from working in the wild, but there was also one school of thought, that is: “20 is even too many. Let’s just work on one object. Once we really get that one object to work, we will know how to get all of this to work.”
LEVITT: So it’s interesting you talk about this approach of, “If we could just really learn how to identify a cat” — cat’s a complicated animal because it moves around, and they look really different, there’s kittens, all sorts of things about cats. And so people thought, “Let’s just get really good at cats, and then once we’re good at cats, anything will follow.” Now you had completely the opposite hunch, and you went very much against the grain. So what was your view at the time?
LI: Gradually, around 2006, I realized we were all more or less obsessing over a couple of families of machine learning or statistical models. And then I started to really question that, because this is not how nature goes. Nature is so complex. Nature is diverse. Nature is also big data. I also read the psychology literature, and this is not something that every computer scientist does. My research has always been somewhat interdisciplinary. And I noticed that there was one psychologist called Irving Biederman — in the 1980s, he wrote a paper with a back-of-envelope estimate of how many objects babies see, and he put that estimate at 30,000.
LEVITT: Like different categories.
LI: Yeah. But that really bothered me in several ways. It bothered me philosophically, because I feel we should solve the real North Star problems that get us to intelligence. Twenty versus 30,000 doesn’t look right. The second thing that bothered me is that if you want to train a more complex model, you need more training data. That’s just a mathematical rule. Yet we were trying to make complex models that we thought could recognize the world with a small amount of data, you know, a few thousand images that capture 20 objects. That just didn’t look right; it just didn’t compute well with me.
LEVITT: Just to make sure I understand, so are you saying that at the time, among researchers, there was a whole lot of emphasis on the algorithm, on the code that would interpret the data, but not enough emphasis on the data itself and the importance of the data in this process?
LI: Right, and because of that, we took a turn, and said, “We’re going to bet on data because mathematically we think that’s where the missing piece is.” So that’s the beginning of a new project back in 2007 that most people didn’t believe in.
LEVITT: So tell me, what is ImageNet, and how hard was it to pull it off at that time? How long did it take you? How many person hours?
LI: ImageNet is a dual-purpose data set. The first purpose: solving the fundamental computer vision problem of object recognition, which is fundamental to human intelligence and fundamental to A.I. At this point you have to go beyond 20 objects, so we set out to define the objects and to provide researchers with the very, very large training set that the field had never even dreamed of. The second purpose is to create a benchmark that measures progress, so that we can invite the international research community to participate. And that was between 2007 and 2010. It took us three-plus years to put together.
LEVITT: So when you say “we,” you’re not talking about you and a co-author. It was practically an army of people doing this. Describe how many images you looked at and how many made it into the database and how much time and effort went into doing this.
LI: At the end, Steve, 15 million images downloaded from the entire internet made it into the dataset, and they are grouped into 22,000 visual categories, so very close to what Biederman’s back-of-the-envelope estimate was. But in order to filter and curate and organize, we practically downloaded close to a billion images from the internet. When I say “we,” that part, the collecting part, was me and my graduate students. But there’s a bigger “we” here: once we’d collected a billion images, I had the naive idea, in hindsight, that we’d just hire undergrads to label them — because think about internet images, especially the early internet; they are very, very noisy. You type in the words “German Shepherd,” you download, let’s say, 2,000 German Shepherds, and they are not all German Shepherds. You might get some German Shepherds, you might get a German Shepherd poster, you might get, like, random pictures that don’t contain a German Shepherd.
LEVITT: Humans wearing German Shepherd Halloween costumes.
LI: Exactly, exactly. And you might even get cartoon German Shepherds. So we have to filter all that. We have to organize all that. And I thought undergrads, you pay them $10 an hour, they’ll get it done. Clearly that was wrong.
LEVITT: There’s a billion images. You could take all the undergrads at Stanford and it would probably take them a long time to do that.
LI: So I think we estimated about 20 years if we hired undergrads and made them not sleep and eat and all that. So, I think it was 2006, Amazon put up this online marketplace called Amazon Mechanical Turk. In 2007, we heard about it, and it turned out to be the lifesaver of the ImageNet project, because there you get people from all over the world to work online. It’s a two-way market. We put up our ImageNet cleaning task, or labeling task, and say, “This is how much we’re willing to pay,” and then the online workers choose from thousands and tens of thousands of tasks that are available to them. And we were able, eventually, over a span of three years, to hire tens of thousands of online workers from 100-plus countries. That’s how ImageNet was done. It was almost three years of global online labeling effort.
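To make the labeling process concrete: when many independent online workers judge the same noisy image, their votes are typically reconciled with a consensus rule. Here is a minimal sketch in Python. The data, thresholds, and function names are all hypothetical illustrations, not the actual ImageNet pipeline, which used a more sophisticated per-category scheme for deciding how many votes each image needed.

```python
from collections import Counter

def aggregate_labels(votes_per_image, min_votes=3, min_agreement=0.7):
    """Keep an image only if enough independent workers agree it
    contains the target object (e.g. a German Shepherd).

    votes_per_image: dict mapping image id -> list of True/False votes
    Returns the set of image ids accepted into the dataset.
    """
    accepted = set()
    for image_id, votes in votes_per_image.items():
        if len(votes) < min_votes:
            continue  # not enough independent judgments yet
        counts = Counter(votes)
        if counts[True] / len(votes) >= min_agreement:
            accepted.add(image_id)
    return accepted

# Hypothetical worker votes for three downloaded images.
votes = {
    "dog_001": [True, True, True],          # clearly the real animal
    "dog_002": [True, False, True, False],  # ambiguous: maybe a poster
    "dog_003": [False, False, True],        # mostly rejected
}
print(aggregate_labels(votes))  # only dog_001 reaches consensus
```

The consensus threshold trades off precision against yield: raising it discards more borderline images (posters, costumes, cartoons) at the cost of needing more paid votes per image.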
LEVITT: Now, your peers mostly thought this was a really terrible use of your time, right? What kind of feedback were you getting?
LI: First of all, people are nice. They just don’t say anything. You know, I was obsessively enthusiastic. I invited some of my peers to work with me on this and they politely declined. I had mentors who were looking after my career and told me this would be a suicidal move. I also got more unfriendly voices in our own academic conferences, where people would stand up and say, “This is completely wrong. We should solve one object at a time. If you really solve one object, you know how to solve a number of objects.” So it was pretty lonely. Not too many people were betting on what I was doing, for sure.
We’ll be right back with more of my conversation with Fei-Fei Li after this short break.
* * *
LEVITT: So you put ImageNet out there and, as you’ve described it, the reaction was modest. But your team had a clever idea, which seems like really was critical to ImageNet’s impact, which was to run a contest. Can you tell me about the contest? What was the nature of it? And when you first ran it, were the results what you hoped?
LI: The first year’s ImageNet contest was 2010. The contest was that we took a subset of ImageNet and created for the research community a training dataset, so that they didn’t have to deal with all the dirty work of putting together data, and then a secret testing dataset. When the contest began, teams had to submit their trained algorithm to our server, we ran their algorithm on our testing dataset, and then we told them how they did and ranked them. Human nature is always driven by this kind of fun, competitive setup.
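The mechanism Li describes (hidden test labels, server-side scoring, ranking by error) can be sketched roughly as follows. The labels and the plain top-1 error metric here are illustrative only; the real ILSVRC scored submissions with top-5 error over 1,000 categories.

```python
def error_rate(predictions, ground_truth):
    """Fraction of test images a submitted model labels incorrectly.
    ground_truth stays on the organizers' server; teams never see it."""
    wrong = sum(1 for img, label in ground_truth.items()
                if predictions.get(img) != label)
    return wrong / len(ground_truth)

def rank_teams(submissions, ground_truth):
    """Score every team's predictions against the secret test labels
    and rank them, lowest error first."""
    scores = {team: error_rate(preds, ground_truth)
              for team, preds in submissions.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical three-image test set and two submissions.
truth = {"img1": "cow", "img2": "cat", "img3": "airplane"}
subs = {
    "team_a": {"img1": "cow", "img2": "cat", "img3": "cat"},
    "team_b": {"img1": "cow", "img2": "cat", "img3": "airplane"},
}
for team, err in rank_teams(subs, truth):
    print(team, err)  # team_b wins with a lower error rate
```

Keeping the test labels server-side is the key design choice: it prevents teams from tuning to the benchmark and makes the leaderboard a trustworthy measure of progress.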
LEVITT: How many teams participated and what was the prize to the winner?
LI: The first year, there were dozens of teams all over the world that registered. Eventually, a couple of dozen submitted results, and I’m trying to remember if the prize was a pen or a t-shirt.
LEVITT: It was not a million dollars. This was for pride. So these were academic groups that were vying to be the best at computer vision. So, did the contest then produce amazing new models that were transformative?
LI: So the first year we got a model whose error rate was like a third, and that’s not great. The second year was more disappointing because the number of participants decreased. The feedback from the community was: it’s too hard and too big. I knew this. In fact, I made the data set hard, because nature is tough. It took 500 million years for us to solve vision in nature, so I wasn’t going to relent on the tough side. And the third year was 2012, and we ran the results. As soon as I saw that year’s winning algorithm, it was a huge error-rate reduction. And they were using GPUs. A GPU is a type of chip, a Graphics Processing Unit, that is good at doing parallel computing and fast computing. It turned out to be very, very useful for the massive amount of computation needed to train neural network algorithms. And the error rate was cut in half. I knew something had shifted.
LEVITT: So this new model that cut the error rate in half, it wasn’t just a tweak on the kinds of models that were the state of the art at the time. It was a completely — I was going to say new class of models, but it’s actually a completely old class of models, right? That was what was so good about it.
LI: Right! Steve, that’s the irony. That’s so sweet, right? Because I remember, as a Ph.D. student, in my very first class in machine learning, we learned about neural networks. It almost goes back into history. So this class of model is called the convolutional neural network. It was invented in the 1980s. You know, machine learning models are like pastas: they have different shapes and flavors, and this is one of them. They are tough to train. They had not shown any progress in looking at real-world photos. So when I saw that the winning algorithm was literally a convolutional neural network, I had such a mixed feeling. I was like, “I can’t believe this,” but yet I was elated.
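The “convolutional” in convolutional neural network refers to sliding a small filter across an image to produce a feature map; a full network stacks many such layers with learned filters. Here is a minimal illustration of that single operation, using a tiny hand-picked edge-detecting kernel rather than a learned one.

```python
def convolve2d(image, kernel):
    """Slide a small filter over the image, producing a feature map.
    This is the core operation a convolutional layer repeats with many
    learned kernels; no learning happens in this sketch."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 4x4 image whose right half is bright, and a vertical-edge kernel.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]
print(convolve2d(image, kernel))  # the edge column lights up
```

The same small kernel is reused at every position, which is why these networks need far fewer parameters than fully connected ones and why, once paired with enough data and GPU compute, they scaled so well.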
LEVITT: If I’m understanding, there was essentially no one on the planet who would have looked at your contest and said, “Hey, a neural net would be the way to really crack this.” But there was one group. What were they thinking? Why did they think this completely out of style model would be the answer?
LI: A lot of credit here goes to Professor Geoff Hinton. His group has been leading neural network research for a long time, and I think, just like our conviction about ImageNet, he has had a long conviction about neural networks, because he believes it is an elegant way of taking data as they are, the raw data, and learning patterns of the world and learning to do tasks like recognizing objects or digits. And I think what was missing from their side was recognizing the importance of data, as well as, you know, needing the computing power.
LEVITT: Is it fair to say that neural nets have more or less won the day since then? That this was a pivotal moment, because in computer vision ever since, neural nets have been the center of how people think about the world — is that true?
LI: Yes, I think it’s fair to say that, compared to other families of algorithms, the neural network, with its ability to scale in model capacity, really has won the day. And now we know that with data, with high-capacity neural network models, we can do a lot of things we could not have imagined.
LEVITT: I like to point out on this podcast cases where economists end up being completely wrong about things. And I can say with some confidence that if you had polled economists around the time that you were working on ImageNet, around the time when neural nets were starting to prove useful, and asked them, “Would neural nets turn out to be good for anything?” Almost every economist would have said, “They’ll be good for nothing. They will be absolutely useless.” I would say even knowing the truth, knowing that empirically neural nets do an amazing job in so many settings where predictions are being asked for, I still, after the fact, am completely flabbergasted that they work. It just is amazing to me.
LI: Steve, the truth is, it’s still quite a bit of mathematical mystery why these models work.
LEVITT: So in your book, The Worlds I See, you talk about your own personal journey, which is really remarkable. So you were born in China — and not born into a privileged situation in China either, right?
LI: We were a very normal family.
LEVITT: So how in the world did your family end up in the United States?
LI: My parents immigrated in the ’90s, so I was 15 years old when I landed in Parsippany, New Jersey. And immediately went to a public high school, Parsippany High School, and really began our life as a very typical immigrant family where parents didn’t speak English, I had to learn English, and we began with very menial jobs. And I think I speak for millions of immigrants who share this journey, but it is that kind of beginning.
LEVITT: How did you even survive financially? What kind of jobs did you and your parents do in those first years?
LI: At the very beginning, I was a teenager. I began, not surprisingly, in the neighborhood Chinese restaurant kitchen. You know, every family is different. In our case, my mother especially didn’t have good health at all, and I’m the single child, so I needed to really become the grown-up. So I somehow borrowed enough money to open a family dry cleaning business just as I got to Princeton as a freshman. And for the next almost seven years, I hired my parents and we ran the dry cleaning shop in Parsippany, New Jersey.
LEVITT: So you said in a very nonchalant way, “when I went to Princeton University,” but we’ve got to go back up, because you came to the United States when you were 15, you barely spoke English, your family had no money, I’m sure you were working a lot of hours in the kitchen of the Chinese restaurant — how did you get into Princeton? It doesn’t even make any sense.
LI: First of all, I wouldn’t be where I am without the support of incredible public school teachers. I started in E.S.L. classes. Do you know what E.S.L. is?
LEVITT: Ah, English as a second language.
LI: Yep, I started in E.S.L. English, E.S.L. History. And I knew I loved science. I loved math and physics, especially. And my public school teachers were going to support me, especially my math teacher. In the book, I talk about Mr. Bob Sabella, a math teacher who became my academic mentor, my academic father, because I was a lost teenager. And I found myself in his office all the time, asking first math questions, but then really just life questions. And I was lucky that despite the financial difficulties, my parents never limited my dream. They knew I wanted to be a physics student. Einstein has always been my hero, and I just wanted to do that. So somehow, I was good enough academically. I applied to, I remember, a community college in Morris County, New Jersey, which is where Parsippany was; the New Jersey state university, Rutgers; and Princeton. I guess I was hedging my bets. I knew I would get into the county college, the two-year college, for free. I got a good scholarship from Rutgers. But I was very, very grateful when Princeton admitted me, I guess based on my journey and my academic record, and also gave me pretty much a full scholarship.
LEVITT: You must have written one heck of a personal statement to cut through the chaff of all the stories that people try to tell about what makes them right for Princeton.
LI: We will have to find the right admission officer to ask that question.
LEVITT: I was fortunate enough personally to grow up not worrying about money. And my dad, who’s the one who most influenced me, had no interest in material goods, so I ended up not caring much about things that I could buy. And consequently, at every stage of my career, I only worried about doing what felt fun or exciting or challenging. I never made education or career choices based on how much I’d earn. But I’ve noticed, as I’ve mentored students, young people who haven’t grown up with enough money to be comfortable, that financial considerations, not surprisingly, are always at the front of their minds. So with your talents, you probably could have earned 10 times as much, at least — maybe 50 times as much — going into finance or working at a tech firm. Were you not tempted to chase the money?
LI: There have been moments in my life, especially when my mom’s health deteriorated — for the longest time, we never even had health insurance, because she had a preexisting condition, and this was pre-Obamacare, so she was not even allowed to purchase health insurance. And I was at the edge of freaking out all the time. So I remember I graduated from Princeton and it was a major bull market on Wall Street. And as a physics major, you can imagine, you did get Wall Street job offers. And I also remember, in the middle of my Ph.D., there was a moment my mom had a major heart failure, and I immediately submitted an application to a company because I was really, really worried. So there were moments of wavering. But somehow, I feel like I have an invisible guardian angel. One of them is actually in the form of my mom. She just never, ever relented. She’s like, “Just go for your dreams.” She never, ever spent a minute saying, “Oh my God, look at our situation. You need to consider pragmatics.” So with that kind of love and support, I never got the pressure that you might imagine some other young student might get from family. And also, just like you, I had so much fun in science. That kind of love is so genuine. Even now, right, I wake up in the morning, and if my students walk into my office talking about science, it’s absolutely my best moment. Like, I just love it. And I remember, in the middle of my Ph.D., I had a job offer from a private company. And I was like, “Maybe this is the moment to say goodbye to science.” And then I got a job offer from the University of Illinois Urbana-Champaign as a professor. And I think that’s the invisible angel helping me there. Even though the pay was different, I suddenly was so happy. I was like, “Look, I got a job in science. Forget about the other thing.” So yeah, I feel lucky that way.
You’re listening to People I (Mostly) Admire with Steve Levitt and his conversation with Fei-Fei Li. After this short break, they’ll return to talk about why new technology is often a double-edged sword.
* * *
I’m really eager to talk with Fei-Fei about her work on the responsible, maybe even benevolent, use of artificial intelligence. Who could argue with that as a goal? To be honest though, I can’t really see how that goal will be accomplished. I hope she can assure me that my skepticism is misplaced.
LEVITT: So you’re not just someone who creates A.I. tools and databases and algorithms. You’re also a thought leader when it comes to considering the societal impacts that A.I. will have. And you believe that our generation has this opportunity, or maybe the responsibility, to shape the development of A.I. in ways which will have huge ramifications for the future of humankind. Can you tell me about that?
LI: Yeah, Steve, this has also been a growth of my own, right? I entered the field — like we said earlier, I had no idea whether people believed in A.I. or not. I didn’t care. I just thought it was fun. I’m a scientist, and I go after curiosity, and that’s where my curiosity led me. And then it started to dawn on me: “My god, this is a transformative power. This is the driver of the next industrial-revolution-scale societal change.” And then I went to Google on a sabbatical in 2017 and 2018. And that sabbatical, as chief scientist of A.I. there, led me to realize every business will be impacted — because I was working on Google Cloud, and we talked to Japanese cucumber farmers, we talked to insurance companies, we talked to hospital leaders, we talked to, of course, software engineers. Every single business will need A.I. at some point. And that’s when I realized it is the responsibility of my generation of technologists, who created all this, to ensure that this technology is here to benefit humanity, not to destroy humanity. Of course, every tool is a double-edged sword. I’m not naive or blind. Just because I wish this to be benevolent doesn’t mean it will always remain benevolent. This is why we need to work. And this is why my own aperture has expanded. At the core, I’m still a technologist. I’m still in the lab with my students making the next generation of A.I. tech. But I’ve expanded to leading the Stanford Human-Centered A.I. Institute, to having a voice, talking to policymakers, and talking to the greater public, because I think it’s important we collectively get this as right as we can.
LEVITT: I guess what confuses me is what that means in practice because it seems that for the vision of A.I. to be human-centered and benevolent requires everybody to do the right thing. And I can see how you might enforce that within the scientific community, and maybe you can get Big Tech on board because of reputational concerns, but what about the Chinese government or the Russians or hackers? Is there any reason to think that we have a way to constrain them and make them play by some set of rules that we’d like?
LI: Every piece of technology has a way of being used by bad actors or adversaries, and also has its unintended consequences, even if you’re using it as a so-called good actor. Civilization has been around for thousands of years. I don’t think we’ve solved all these problems, but we’ve always grappled with them, and we have to grapple with them. So for all the potential bad uses of A.I., we need a multi-dimensional approach. Some of it is laws, right? We have regulatory frameworks. Some is social norms. Even outside of law, there are norms we have to collectively ensure. And some is partnership. You know, think about nuclear. There were treaties, and there is collective recognition that we cannot totally race to the bottom. And some is just defense systems. You just have to have defense systems, like cybersecurity and other measures that are constantly on the lookout.
LEVITT: In talking to experts in cybersecurity, they say offense is better than defense. So in other words, if a major state actor wants to hack into something, there’s no way to stop them, essentially — that the offensive capabilities of cybersecurity are outpacing those of defense. Do you have any intuition for whether that same thing will be true of A.I., that somehow offensive uses of A.I. will be more effective than our ability to defend using A.I.?
LI: Possibly. I’ll be very honest. I think, in a way, there’s something intrinsic in the nature of tools such that there is possibly an asymmetry. But on the other hand, there are also forces that can help us govern or manage this asymmetry. Like I said, by and large, we don’t want to collectively race to the bottom, and nuclear is a great example, right? You become deterrents to each other. Look, I’m not going to say all is going to be all right. I’m very worried about disinformation, for example. If we don’t race to create technology that can defend against disinformation, it’s going to be bad. But the good news is that we’re recognizing this. Whether it’s through industry or academia, we’re recognizing disinformation is a big deal, and we’re trying all kinds of ways to manage it, including digital authentication, watermarking, and other social engineering methods to respond to that.
LEVITT: It’s interesting that so much of the A.I. databases and tools are publicly available. And that’s definitely good for democratizing access to A.I., but my hunch is that it’s bad for constraining the bad actors, right? If a terrorist group had to build a vision training data set with millions of entries and develop the algorithm from scratch, they wouldn’t be able to do it. But so much of the capability now, it seems, in the A.I. world is off-the-shelf. And that makes this problem you’re talking about — how do you make A.I. used primarily for good? — a more difficult challenge. Is that right?
LI: Yes, but it’s more nuanced. You have raised a topic that is very much an ongoing discussion. Right now, as you and I speak, so many people, including governments, have entered this discussion, and we’ll see how this plays out. I can say that I’m noticing, Steve, in a short period of time, less than a year, that the alertness, awareness, and efforts around A.I. governance have drastically increased. Even the private companies are actively participating in discussions of regulation and self-regulation and other measures. So these are changing landscapes as we speak.
I loved talking with Fei-Fei Li, but I can’t say she convinced me that we’ll have much luck steering the development of A.I. in largely positive directions. She raised nuclear weapons as a case where governments worked out treaties and limits. Of course, not until after we’d built enough nuclear bombs to destroy the world many times over. And many of those bombs still exist. Relative to A.I., nuclear weapons were an easy problem to craft public policy around, because it’s hard to make nuclear weapons and only a handful of countries succeeded in doing so. With A.I., the barriers to entry are much lower. We don’t just have to worry about governments; weaponized A.I. might become available to all sorts of groups. What do you think? Will it be possible to rein in the destructive uses of A.I.? Will the nations of the world do a better job on A.I. than they’ve done on climate change? Send us an email at PIMA@Freakonomics.com. That’s P-I-M-A@Freakonomics.com. I’d love to hear your thoughts on the issue.
LEVITT: So this is a part of the show where we always bring Morgan on to ask a listener question.
LEVEY: Hi, Steve. A listener named Eamon wrote to us about hospital data. In Canada’s province of Alberta, electronic hospital records link demographic and health information for patients. And in theory, patients own this data about themselves. Eamon sees it as an opportunity for patients to sell their information to third parties. What do you think of that idea?
LEVITT: I absolutely love the idea. I am a big believer in general that people should own their own data, not just for the usual libertarian reasons, but also for efficiency reasons. Because I think if people owned their own data, they would make different choices and it actually could make this whole advertising ecosystem much more efficient.
LEVEY: So you would see it as a boon for advertising, not for companies doing research on health?
LEVITT: Researchers might be willing to pay a lot of money for your healthcare data. There might be information in those data that could help other people, help get to solutions. I’m not so sure about that, honestly. I’m not sure the information in there would really be of that much value. But I don’t know. I’m not saying it wouldn’t be. I’m saying it’s not clear. Whereas it’s completely clear, if you think about, say, my data with Google, how valuable it is — and we know it’s valuable because Google and Facebook make enormous amounts of profit by watching what we do and then targeting ads to us as a consequence. Okay, but here’s why I think it would be really efficient, really a good thing for everyone, if we gave people ownership of their own data.
LEVEY: Wait, don’t the big tech companies already own data on us? Why would it benefit them if we owned it?
LEVITT: Right now Facebook or Google, they watch what I do and they’re able to direct ads to me as a consequence. But I’m not really a very willing participant in that because I don’t get any share of the profit, so I don’t cooperate any more than I have to. But now let’s imagine a different world where I own my own data and I really can’t make any profit off of it unless I collaborate, say, with Google to try to make the most of it. Let’s just say Google split any profit from ads that they sell to me, 50-50 with me. Now, instead of resenting the fact that Google sends me ads, I want to tell Google everything about what I actually want. Because the best way to generate a lot of revenue would be if I tell Google everything that I buy, then ex-post, they can go back to their advertisers and say, “Hey, this ad actually worked. This clown actually bought your product after you advertised it on Instagram.” So if I were willing to cooperate with Google, tell them everything I wanted and everything I bought ex-post, then I believe that Google could get a lot more advertising revenue for my data than they’re getting now. Google, I think, doesn’t agree. I actually pitched this idea to Google a few years back, and they listened patiently to what I had to say, but they didn’t seem highly convinced.
LEVEY: Steve, I’m looking at some reports that say that Google actually already knows when we have bought something that they’ve advertised to us based on looking at our credit card data.
LEVITT: So Google might know something like that, but it’s really the difference between a world in which Google has to sneak around and try to understand what we’re doing versus actually we become a team. In a world in which I own my own data, but Google is my way to monetize it, I want Google to know what I’ve bought. I want Google to know what I want to buy. I want to buy the things that Google advertises to me, because then there’s more profit for us to share. I think the whole nature of the relationship becomes much more efficient. In principle, people should like ads, right? The more targeted the ads are, the better, because they help us make good choices. And right now, most people don’t feel like the ads they’re seeing are improving their life quality. They think they’re hurting it.
LEVEY: Eamon, thanks so much for your question. If you have a question for us, our email address is PIMA@Freakonomics.com. That’s P-I-M-A@Freakonomics.com. We read every email that’s sent, and we look forward to reading yours.
In two weeks, we’ll be back with a brand new episode featuring Michael D. Smith. He’s a professor at Carnegie Mellon University who argues that our current system of higher education is financially and morally unsustainable, and he’s got some radical ideas for fixing the problems.
SMITH: We are unintentionally trying to protect a status quo that I think if we stopped for a second, we would recognize needs to change.
Thanks for listening, and we’ll see you back soon.
* * *
People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. This episode was produced by Julie Kanfer with help from Lyric Bowditch, and mixed by Jasmin Klinger. We had research assistance from Daniel Moritz-Rabson. Our theme music was composed by Luis Guerra. We can be reached at firstname.lastname@example.org, that’s P-I-M-A@freakonomics.com. Thanks for listening.
LI: I want to defend our human babies a little bit here.
- Fei-Fei Li, professor of computer science and co-director of the Human-Centered A.I. Institute at Stanford University.
- The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of A.I., by Fei-Fei Li (2023).
- “Fei-Fei Li’s Quest to Make AI Better for Humanity,” by Jessi Hempel (Wired, 2018).
- “ImageNet Large Scale Visual Recognition Challenge,” by Olga Russakovsky, Li Fei-Fei, et al. (International Journal of Computer Vision, 2015).
- “How to Think About A.I.” series by Freakonomics Radio (2023).
- “Will A.I. Make Us Smarter?” by People I (Mostly) Admire (2023).
- “Satya Nadella’s Intelligence Is Not Artificial,” by Freakonomics Radio (2023).
- “Max Tegmark on Why Superhuman Artificial Intelligence Won’t be Our Slave,” by People I (Mostly) Admire (2021).