Episode Transcript
There’s an old saying that I’m sure you have heard: “Imitation is the sincerest form of flattery.” But imitation can easily tip into forgery. In the art world, there have been many talented forgers over the years; the Dutch painter Han van Meegeren, a master forger of the 20th century, was so good that his paintings were certified and sold — often to Nazis — as works by Johannes Vermeer, a 17th-century Dutch master. Now, there is a new kind of art forgery happening, and the perpetrators are machines.
I recently got back from San Francisco, the epicenter of the artificial intelligence boom. I was out there to do a live show, which you may have heard in our feed; and also to attend the annual American Economic Association conference. Everywhere you go in San Francisco, there are billboards for A.I. companies. The conference itself was similarly blanketed; there were sessions called “Economic Implications of A.I.,” “Artificial Intelligence and Finance,” and “Large Language Models and Generative A.I.” The economist Erik Brynjolfsson is one of the leading scholars in this realm, and we borrowed him for our live show, to hear his views on A.I.:
Erik BRYNJOLFSSON: The idea is that A.I. is doing these amazing things, but we want to do it in service of humans, and make sure that we keep humans at the center of all of that.
The day after Brynjolfsson came on our show, I attended one of his talks, at the conference. It was called “Will A.I. Save Us or Destroy Us?” He cited a book by the Oxford computer scientist Michael Wooldridge called A Brief History of Artificial Intelligence. Brynjolfsson read from a list of problems that Wooldridge said A.I. was “nowhere near solving.” Here are a few of them: “understanding a story and answering questions about it … human-level automated translation … interpreting what is going on in a photograph.” As Brynjolfsson is reading this list from the lectern, you’re thinking, “Wait a minute, A.I. has solved all those problems, hasn’t it?” And that’s when Brynjolfsson gets to his punchline: the Wooldridge book was published way back in 2021. The pace of A.I.’s advance has been astonishing — and some people expect it to supercharge our economy. The Congressional Budget Office has estimated economic growth over the current decade of around 1.5 percent a year. Erik Brynjolfsson thinks that A.I. could double that. He argues that many views of A.I. are either too fearful or too narrow.
BRYNJOLFSSON: Too many people think of machines as just trying to imitate humans. But machines can help us do new things we never could have done before. And so we want to look for ways that machines can complement humans, not simply imitate or replace them.
So that sounds promising. But what about the machines that are “just imitating” humans? What about machines that are, essentially, high-tech forgers? Today on Freakonomics Radio, we will hear from someone who’s trying to thwart these machines, on behalf of artists.
Ben ZHAO: They take decades to hone their skill, so when that’s taken against their will, that is sort of identity theft.
Ben Zhao is a professor of computer science at the University of Chicago. He is by no means a techno-pessimist; but he is not so bullish on artificial intelligence.
ZHAO: There is an exceptional level of hype. That bubble is in many ways in the middle of bursting right now.
But Zhao isn’t just waiting for the bubble to burst. It’s already too late for that:
ZHAO: Because the harms are happening to people in real time.
Zhao and his team have been building tools to prevent some of those harms. When it comes to stolen art, the tool of choice is a dose of poison that Zhao slips into the A.I. system. There’s another old saying you probably know: “It takes a thief to catch a thief.” How does that work in the time of A.I.? Let’s find out.
* * *
Ben Zhao and his wife Heather Zheng are both computer scientists at the University of Chicago, and they run their own lab.
ZHAO: We call it the SAND Lab.
Stephen DUBNER: Which stands for?
ZHAO: Security, Algorithms, Networking, and Data. Most of the work that we do has been to use technology for good, to limit the harms of abuses and attacks, and protect human beings and their values, whether it’s personal privacy, or security, or data, or your identity.
DUBNER: What’s your lab look like? If we showed up, what do we see? Do we see people milling around, talking, working on monitors together?
ZHAO: It’s really quite anti-climactic. We’ve had some TV crews come by, and they’re always expecting some sort of secret lair. And then they walk in and see a bunch of cubicles. Our students all have standing desks. The only wrinkle is that I’m at one of the standing desks in the room. I don’t usually sit in my office. I sit next to them, a couple of cubicles over, so that they don’t get paranoid about me watching their screen.
DUBNER: When there’s a tool that you’re envisioning, or developing, or perfecting, is it all-hands-on-deck? Are the teams relatively small? How does that work?
ZHAO: Well, there’s only a handful of students in my lab to begin with. So, all-hands-on-deck is like, what, seven or eight Ph.D. students, plus us? Typically speaking, the projects are a little bit smaller just because we’ve got multiple projects going on, and so people are partitioning their attention and work energy across different things.
DUBNER: I read on your web page, Ben, you write, “I work primarily on adversarial machine learning and tools to mitigate harms of generative A.I. models against human creatives.” So that’s an extremely compelling bio line. If that was a dating profile and I were in A.I., I would say, “Whoa, swiping hard left.” But if I’m someone concerned about these things — oh my goodness, you’re the dream date! So can you unpack that for me?
ZHAO: Adversarial machine learning is a shorthand for this interesting research area at the intersection of computer security and machine learning. Anything to do with attacks, defenses, privacy concerns, surveillance, all these subtopics as related to machine learning and A.I. That’s what I’ve been working on mostly for the last decade or so. For more than two years now, we’ve been focused on how the misuse and abuse of these A.I. tools can harm real people, and trying to build research tools and technology tools to try to reduce some of that harm, to protect regular citizens and in particular human creatives like artists and writers.
Before he got into his current work, protecting creatives, Zhao made a tool for people who are worried that Siri or Alexa are eavesdropping on them — which, now that I’ve said their names, they may be. He called this tool the Bracelet of Silence.
ZHAO: So that’s from my D&D days. It’s a fun little project. We had done prior work in ultrasonics and modulation effects, when you have different microphones, and how they react to different frequencies of sound. One of the effects that people have been observing is that you can make microphones vibrate in a frequency that they don’t want to. We figured out that we could build a set of little transducers. You can imagine a fat bracelet, sort of like cyberpunk kind of thing, with, 24 or 12, I forget the exact number, little transducers that are hooked onto the bracelet like gemstones —
DUBNER: The one I’m looking at looks like 12. I also have to say, Ben, it’s pretty big. It’s a pretty big bracelet to wear around just to silence your Alexa or HomePod.
ZHAO: Well, hey, you got to do what you got to do, and hopefully other people will make it much smaller. We’re not in the production business. What it does is it radiates a carefully attuned pair of ultrasonic pulses in such a way that commodity microphones anywhere within reach will — against their will — begin to vibrate at a normal audible frequency. They basically generate the sound that’s necessary to jam themselves. When we first came out with this thing, a lot of people were very excited. Privacy advocates, public figures who are very concerned not necessarily about their own Alexa, but the fact that they had to walk into public places all the time. You’re really trying to prevent that hidden microphone eavesdropping on a private conversation.
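For the technically curious, here is a toy simulation of the effect Zhao is describing: two tones pitched above human hearing mix together inside a microphone’s slightly nonlinear hardware and produce a tone squarely in the audible band, which is what jams the recording. The carrier frequencies and the simple quadratic nonlinearity below are illustrative assumptions, not the bracelet’s actual design.

```python
# Toy simulation of ultrasonic jamming: two inaudible tones intermodulate in a
# microphone's (simplified, quadratic) nonlinearity and produce an audible
# difference-frequency component. Frequencies and the nonlinearity model are
# assumptions for illustration only.
import numpy as np

fs = 192_000                        # sample rate high enough to represent ultrasound
t = np.arange(0, 0.05, 1 / fs)      # 50 milliseconds of signal

f1, f2 = 25_000, 25_300             # two ultrasonic carriers (assumed values)
emitted = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Model the microphone's diaphragm/amplifier nonlinearity with a small quadratic
# term; squaring the signal creates sum and difference frequencies.
captured = emitted + 0.05 * emitted ** 2

# Inspect the spectrum between 20 Hz and 1 kHz: a strong component appears at
# |f2 - f1| = 300 Hz, even though nothing audible was ever emitted.
spectrum = np.abs(np.fft.rfft(captured))
freqs = np.fft.rfftfreq(len(captured), 1 / fs)
band = (freqs > 20) & (freqs < 1_000)
peak = freqs[band][np.argmax(spectrum[band])]
print(f"dominant audible component: {peak:.0f} Hz")   # expect roughly 300 Hz
```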
DUBNER: Okay, that’s the Bracelet of Silence. I’d like you to describe another privacy tool you built, the one called Fawkes.
ZHAO: Fawkes is a fun one. In 2019, I was brainstorming about some dangers that we might face in the future — and this is not even generative A.I., this is just sort of classification and facial recognition. One of the things that we came up with was this idea that A.I. is going to be everywhere and therefore anyone can train any model, and therefore people can basically train models of you. At the time, it was not about deepfakes, it was about surveillance, and what would happen if people just went online, took your entire Internet footprint — which of course today is massive — scraped all your photos from Facebook and Instagram and LinkedIn, and then built this incredibly accurate facial recognition model of you without your knowledge, much less permission. And we built this tool that basically allows you to alter your selfies, your photos, in such a way that it made you look more like someone else than yourself.
DUBNER: Does it make you look more like someone else in the actual context that you care about, or only in the version when it’s being scraped?
ZHAO: That’s right, only in the version when it’s being used to build a model against you. But the funny part was that we built this technology, we wrote the paper, and on the week of submission — this was 2020 — we were getting ready to submit that paper, I remember it distinctly, that was when Kashmir Hill at The New York Times came out with her story on Clearview A.I. And that was just mind-blowing, because I had been talking to our students for months about having to build for this dark scenario, and literally here’s the New York Times saying, “Yeah, this is today, and we are already in it.” That was disturbing on many fronts, but it did make writing the paper a lot easier. We just cited the New York Times article and said, here it is already.
DUBNER: Clearview A.I. is funded how?
ZHAO: It was a private company. I think it’s still private. It’s gone through some ups and downs, since the New York Times article. They had to change their revenue stream. They no longer take third-party customers. Now they only work with government and law enforcement.
DUBNER: Okay, so Fawkes is the tool you invented to fight that kind of facial-recognition abuse. Is Fawkes an app, or software that anyone can use?
ZHAO: Fawkes was designed as a research paper and algorithm, but we did produce a little app. I think it went over a million downloads. We stopped keeping track of it, but we still have a mailing list, and that mailing list is actually how some artists reached out.
When Ben Zhao says that “some artists reached out” — that was how he started down his current path, defending visual artists. A Belgian artist named Kim van Deun, who’s known for her illustrations of fantasy creatures, sent Zhao an invitation to a town hall meeting about A.I. artwork. It was hosted by a Los Angeles organization called Concept Art Association, and it featured representatives from the U.S. Copyright Office. What was the purpose of this meeting? Artists had been noticing that when people searched for their work online, the results were often A.I. knockoffs of their work. It went even further than that: their original images had been scraped from the internet and used to train the A.I. models that can generate an image from a text prompt. You’ve probably heard of these text-to-image models, maybe even used some of them: there’s Dall-E from Open A.I., Imagen from Google, Image Playground from Apple, Stable Diffusion from Stability A.I., and Midjourney, from the San Francisco research lab of the same name.
ZHAO: These companies will go out and they’ll run scrapers, little tools that go online and basically suck up any semblance of imagery, especially high-quality imagery from online websites.
In the case of an artist like Van Deun, this might include her online portfolio, which is something you want to be easily seen by the people you want to see it — but that you don’t want “sucked up” by an A.I.
ZHAO: It would download those images and run them through an image classifier to generate some set of labels and then take that pair of images and their labels and then feed that into the pipeline to some text-to-image model.
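For the technically curious, here is a minimal sketch of the kind of scrape-and-label pipeline Zhao is describing. The URLs are placeholders and the captioning model is one arbitrary open-source choice; the real pipelines these companies run are proprietary and vastly larger.

```python
# Minimal sketch of a scrape -> label -> training-pair pipeline.
# URLs and the captioning model are illustrative assumptions.
import io
import json

import requests
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

image_urls = [
    "https://example.com/portfolio/cow-in-field.png",        # placeholder
    "https://example.com/portfolio/fantasy-creature.png",    # placeholder
]

with open("training_pairs.jsonl", "w") as out:
    for url in image_urls:
        resp = requests.get(url, timeout=10)
        img = Image.open(io.BytesIO(resp.content)).convert("RGB")
        caption = captioner(img)[0]["generated_text"]         # auto-generated label
        out.write(json.dumps({"url": url, "caption": caption}) + "\n")
        # Each (image, caption) pair would then feed a text-to-image training run.
```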
DUBNER: So Ben, I know that some companies, including Open A.I., have announced programs to let content creators opt out of A.I. training. How meaningful is that?
ZHAO: Well, opting out assumes a lot of things. It assumes benign acquiescence from the technology makers.
DUBNER: “Benign acquiescence” meaning they have to actually do what they say they’re going to do?
ZHAO: Yeah, exactly. Opting out is toothless because you can’t prove it in the machine learning business. Even if someone said, okay, here’s my opt-out list, and then went completely against their word and immediately trained on all of that content, you just lack the technology to prove it. And so what’s to stop someone from basically going back on their word when we’re talking about billions of dollars at stake? Really, you’re hoping and praying someone’s being nice to you.
So Ben Zhao wanted to find a way to help artists fight back against their work being either forged or stolen by these mimicry machines.
ZHAO: A big part of their misuse is when they assume the identity of others. So this idea of right of publicity and the idea that we own our faces, our voices, our identity, our skills, and work product, that is very much a core of how we define ourselves. For artists, it’s the fact that they take decades to hone their skill and to become known for a particular style. So when that’s taken against their will without their permission, that is a type of identity theft, if you will.
In addition to identity theft, there can be the theft of a job, a livelihood.
ZHAO: Right now, many of these models are being used to replace human creatives. If you look at some of the movie studios, the gaming studios or publishing houses, artists and teams of artists are being laid off. One or two remaining artists are being told, “Here, you have a budget, here’s Midjourney, I want you to use your artistic vision and skill to basically craft these A.I. images to replace the work product of the entire team who’s now been laid off.”
So Zhao’s solution was to poison the system that was causing this trouble.
ZHAO: Poison is sort of a technical term in the research community. Basically it means manipulating training data in such a way as to get A.I. models to do something perhaps unexpected, perhaps more aligned with your goals than the original trainers intended.
They came up with two poisoning tools: one called Glaze, the other Nightshade.
ZHAO: Glaze is all about making it harder to target and mimic individual artists. Nightshade is a little bit more far-reaching. Its goal is primarily to make training on internet-scraped data more expensive than it is now. Perhaps more expensive than actually licensing legitimate data — which ultimately is our hope, that this would push some of these A.I. companies to seek out legitimate licensing deals with artists so that they can properly be compensated.
DUBNER: Can you just talk about the leverage and power that these A.I. companies have, and how they’ve been able to amass that leverage?
ZHAO: We’re talking about companies and stakeholders who have trillions in market cap, the richest companies on the planet by definition. So that completely changes the game. It means that when they want things to go a certain way, whether it’s lobbyists on Capitol Hill, whether it’s media control and inundating journalists, and running ginormous national expos and trade shows of whatever they want, nothing is off-limits. That completely changes the power dynamics of what you’re talking about. The closest analogy I can draw on is in the early 2000s, we had music piracy. For folks who are old enough to remember, that was a free-for-all. People could just share whatever they wanted. And of course, there were questions of legality, and copyright violations, and so on. But there, the situation was very different from what it is today. Those with the power and the money and the control were the copyright holders, so the outcome was very clear.
DUBNER: Well, it took a while to get there, right? Napster really thrived for several years before it got shut down. But in that case you’re saying that the people who not necessarily generated, but owned or licensed the content, were established and rich enough themselves so that they could fight back against the intruders?
ZHAO: Exactly. You had armies of lawyers. When you consider that sort of situation and how it is now, it’s the complete polar opposite.
DUBNER: Meaning, it’s the bad guys who have all the lawyers.
ZHAO: Well, I wouldn’t say necessarily bad guys, but certainly the folks who in many cases are pushing profit motives that perhaps bring harm to less-represented minorities who don’t have the agency, who don’t have the money to hire their own lawyers and who can’t defend themselves.
DUBNER: I mean, that has become kind of an ethic of a lot of business in the last 20, 30 years, especially coming out of Silicon Valley. You know, you think about how Travis Kalanick used to talk about Uber — it’s much easier to just go into a big market like New York where something like Uber would be illegal and just let it go, let it get established. And then let the city come and sue you after it’s established. So, better to ask for forgiveness than permission.
ZHAO: These companies are basically exploiting the fact that we know lawsuits and enforcement of new laws are going to take years. And so the idea is, let’s take advantage of this time, and before these things catch up, we’re already going to be established. We already are going to be essential and we already are going to be making billions. And then we’ll worry about the legal costs, because to many of them, the legal costs and the penalties that are involved, even billions of dollars, are really a drop in the bucket.
Indeed, the biggest tech firms in the world are all racing one another to the top of the A.I. mountain. They’ve all invested heavily in A.I., and the markets have, so far at least, rewarded them: the share prices of the so-called Magnificent Seven stocks — Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla — rose more than 60 percent in 2024, and these seven stocks now represent 33 percent of the value of the S&P 500. This pursuit of more and better A.I. will have knock-on effects too; consider their electricity needs. One estimate finds that building the data centers to train and operate the new breed of A.I. models will require 60 gigawatts of power capacity; that’s enough to power roughly a third of the homes in the U.S. In order to generate all that electricity — and to keep their commitments to clean energy — OpenAI, Amazon, Google, Meta, and Microsoft have all invested big in nuclear power. Microsoft recently announced a plan to help revive Three Mile Island. If you want to learn more about the potential for a nuclear-power renaissance in the U.S., we made an episode about that: No. 516, called “Nuclear Power Isn’t Perfect. Is It Good Enough?” Meanwhile, do a handful of computer scientists at the University of Chicago have any chance of slowing down this A.I. juggernaut?
ZHAO: We will actually generate a nice-looking cow with nothing particularly distracting in the background, and the cow is staring you right in the face.
* * *
In his computer science lab at the University of Chicago, Ben Zhao and his team have created a pair of tools designed to prevent artificial intelligence programs from exploiting the images created by human artists. These tools are called Glaze and Nightshade. They work in similar ways, but with different targets. Glaze came first.
ZHAO: Glaze is all about how do we protect individual artists so that a third party does not mimic them using some local model. It’s much less about these model-training companies than it is about individual users who say, gosh, I like so-and-so’s art, but I don’t want to pay them. So in fact, what I’ll do is I’ll take my local copy of a model, I’ll fine-tune it on that artist’s artwork, and then have that model try to mimic them and their style, so that I can ask the model to output artistic works that look like human art from that artist, except I don’t have to pay them anything.
And how about Nightshade?
ZHAO: What it does is it takes images and alters them in such a way that they basically look the same, but to a particular A.I. model that’s trying to train on them, what it sees are visual features that associate the image with something entirely different. For example, you can take an image of a cow eating grass in a field. And if you apply Nightshade to it, perhaps that image instead teaches not so much the bovine cow features, but the features of a 1940s pickup truck. What happens then is that as that image goes into the training process, the label of “this is a cow” will become associated with those truck features in the model that’s trying to learn what a cow looks like. It’s going to read this image. And in its own language, that image is going to tell it that a cow has four wheels, a cow has a big hood and a fender and a trunk. Nightshade images tend to be much more potent than usual images, so that even when a model has seen just a few hundred of them, it is willing to throw away everything that it has learned from the hundreds of thousands of other images of cows, and declare that, in fact, cows have a shiny bumper and four wheels. Once that has happened, when someone asks the model, “Give me a cow eating grass,” the model might generate a car with a pile of hay on top.
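Here, for the technically curious, is a greatly simplified sketch of the idea Zhao is describing: nudge a “cow” image so that, in a model’s feature space, it resembles a truck, then keep the original “cow” caption. The stock ResNet feature extractor, the file names, and the pixel budget below are stand-in assumptions; the actual Nightshade algorithm targets the feature space of diffusion models and is considerably more involved.

```python
# Simplified sketch of concept poisoning: perturb a "cow" image so its features
# resemble a "truck" image while the picture stays visually close to the original.
# The ResNet extractor, file names, and pixel budget are assumptions, not the
# real Nightshade method.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor.fc = torch.nn.Identity()          # use penultimate-layer features
extractor.eval()
for p in extractor.parameters():
    p.requires_grad_(False)

prep = T.Compose([T.Resize((224, 224)), T.ToTensor()])
cow = prep(Image.open("cow.png").convert("RGB")).unsqueeze(0)      # original image
truck = prep(Image.open("truck.png").convert("RGB")).unsqueeze(0)  # poison target

target_feat = extractor(truck)

delta = torch.zeros_like(cow, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
budget = 0.05                               # max per-pixel change (assumed)

for step in range(200):
    poisoned = (cow + delta).clamp(0, 1)
    loss = torch.nn.functional.mse_loss(extractor(poisoned), target_feat)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-budget, budget)       # keep the change visually subtle

# The shaded image keeps its original caption ("a cow eating grass"), so a model
# that trains on it learns truck-like features for the word "cow."
out = ((cow + delta).clamp(0, 1)[0].permute(1, 2, 0).detach().numpy() * 255)
Image.fromarray(out.astype("uint8")).save("cow_shaded.png")
```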
The underlying process of creating this A.I. poison is, as you might imagine, quite complicated. But for an artist who’s using Nightshade, who wants to sprinkle a few invisible pixels of poison on their original work, it’s pretty straightforward.
ZHAO: There are a couple of parameters about intensity, meaning how strongly you want to change the image. You set the parameters, then you hit “go,” and out comes an image that may look a little bit different. Sometimes there are tiny little artifacts that you’ll see if you blow the image up. But in general, it basically looks like your old image, except with these tiny little tweaks everywhere, in such a way that the A.I. model, when it sees it, will see something entirely different.
That “entirely different” thing is not chosen by the user; it’s Nightshade that decides whether your image of a cow becomes a 1940s pickup truck versus, say, a cactus.
And there’s a reason for that.
ZHAO: The concept of poisoning is that you are trying to convince the model that’s training on these images that something looks like something else entirely. So we’re trying, for example, to convince a particular model that a cow has four tires and a bumper. But in order for that to happen, you need numbers. You don’t need millions of images to convince it, but you need a few hundred. And of course, the more the merrier. And so you want everybody who uses Nightshade around the world, whether they’re photographers or illustrators or graphic artists, you want them all to have the same effect. So whenever someone paints a picture of a cow, takes a photo of a cow, draws an illustration of a cow, draws a clipart of a cow, you want all those Nightshaded effects to be consistent in their target. In order to do that, we have to take control of what the target actually is, ourselves, inside the software. If you gave users that level of control, then chances are people would choose very different things. Some people might say, I want my cow to be a cat. I want my cow to be the sun rising. If you were to do that, the poison would not be as strong.
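One way to picture that design choice: the tool could derive the poison target deterministically from the concept itself, so that every user’s “cow,” anywhere in the world, gets pushed toward the same target and the individual contributions add up. The target pool and the hashing scheme in this tiny sketch are invented for illustration; this is not Nightshade’s actual mechanism.

```python
# Illustration of the consistency argument: derive the target deterministically
# from the source concept, so all users' poisoned "cow" images agree.
# The target pool and hashing scheme are invented for this example.
import hashlib

TARGET_POOL = ["1940s pickup truck", "cactus", "toaster", "hot air balloon"]

def poison_target(concept: str) -> str:
    digest = hashlib.sha256(concept.strip().lower().encode()).hexdigest()
    return TARGET_POOL[int(digest, 16) % len(TARGET_POOL)]

print(poison_target("cow"))   # same answer for every user, on every machine
```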
And what do the artificial intelligence companies think about having Nightshade thrown at them? A spokesperson for Open A.I. recently described data poisoning as a “type of abuse.” A.I. researchers previously thought that their models were impervious to poisoning attacks, but Ben Zhao says that A.I. training models are actually quite easy to fool. His free Nightshade app has been downloaded over two million times, so it’s safe to say that plenty of images have already been “shaded.” But how can you tell if Nightshade is actually working?
ZHAO: You probably won’t see the effects of Nightshade yourself. If you did see it in the wild, models would give you wrong answers to the things you’re asking for. But the people who are creating these models are not foolish; they are highly trained professionals. So they’re going to have lots of testing on any of these models. We would expect that the effects of Nightshade would actually be detected in the model-training process. It will become a nuisance. And perhaps what really will happen is that certain versions of models, post-training, will be detected to have certain failures inside them, and perhaps they’ll have to roll them back. So I think really that’s more likely to cause delays, and more likely to cause the costs of these model-training processes to go up. The A.I. companies really have to work on millions, potentially billions of images. So it’s not necessarily the fact that they can’t detect Nightshade on a particular image; it’s the question of, can they detect Nightshade on a billion images in a split second with minimal cost? Because any one of those factors that goes up significantly will mean that their operation becomes much, much more expensive, and perhaps it is time to say, well, maybe we’ll license artists and get them to give us legitimate images that won’t have these questionable things inside them.
DUBNER: Is it the case that your primary motivation here really was an economic one, of getting producers of labor — in this case, artists — simply to be paid for their work, that their work was being stolen?
ZHAO: Yeah, I mean, really, it boils down to that. I came into it not so much thinking about economics as I was just seeing people that I respected and had affinity for be severely harmed by some of this technology. In whatever way they can be protected, that’s ultimately the goal. In that scenario, the outcome would be licensing so that they can actually maintain a livelihood and maintain the vibrancy of that industry.
DUBNER: When you say these are people you respect and have affinity for, I’m guessing that you, being an academic computer scientist, also have respect and affinity for many people in the A.I. and machine-learning community on the firm side, though, right?
ZHAO: Yes, yes, of course, colleagues and former students in that space.
DUBNER: And how do they feel about Ben Zhao?
ZHAO: It’s quite interesting, really. I go to conferences the same as I usually do. And many people resonate with what we’re trying to do. We’ve gotten a bunch of awards and such from the community. As far as folks who are actually employed by some of these companies, some of them, I have to say, appreciate our work. They may or may not have the agency to publicly speak about it. But there are lots of private conversations where people are very excited. I will say that, yeah, there have been some cooling effects, some burned bridges with some people. I think it really comes down to how you see your priorities. It’s not so much about where employment lies, but it really is about how personally you see the value of technology versus the value of people. And oftentimes it’s a very binary decision. People tend to go one way or the other rather hard. I think most of these bigger decisions — acquisition, strategy, and whatnot — are largely in the hands of executives way up top. These are massive corporations, and many people are very much aware of some of the stakes and perhaps might disagree with some of the technological stances that are being taken. But everybody has to make a living. Big tech is one of the best ways to make a living. Obviously, they compensate people very well. I would say there’s a lot of pressure there as well. We just had that recent news item about the young whistleblower from OpenAI who tragically passed away.
Zhao is talking here about Suchir Balaji, a 26-year-old former researcher at OpenAI, the firm best known for creating ChatGPT. Balaji died by apparent suicide in his apartment in San Francisco; he had publicly charged OpenAI with potential copyright violations, and he left the company because of ethical concerns.
ZHAO: Whistleblowers like that are incredibly rare, because the risk that you’re taking on when you publicly speak out against your former employer, that is tremendous courage, that is an unbelievable act. It’s a lot to ask.
DUBNER: I feel that we don’t speak so much about ethics in the business world. I know they teach it in business schools, but my feeling is that by the time you’re teaching the ethics course in the business school, it’s because things are already in tough shape. Many people obviously have strong moral and ethical makeups, but I feel there is an absence of courage. And since you just named that word, you said you have to have an enormous amount of courage to stand up for what you think may be right, and since there is so much leverage in these firms, as you noted, I’m curious if you have any message to the young employee or the soon-to-be graduate who says, Yeah, sure, I would absolutely love to go work for an A.I. firm because it’s bleeding edge, it pays well, it’s exciting and so on, but they’re also feeling like it’s contributing to a pace of technology that is too much for humankind right now. What would you say to that person? How would you ask them to examine if not their soul or something, at least their courage profile?
ZHAO: Yeah, what a great question. I mean, it may not be surprising, but as a computer science professor, I actually have these conversations relatively often. This past quarter, I taught many second-year and third-year computer science majors, and many of them came up to me in office hours and asked very similar questions. They said, look, I really want to push back on some of these harms. On the other hand, look at these job opportunities, here’s this great golden ticket to the future, and what can you do? It’s fascinating. I don’t blame them if they make any particular decision. But I applaud them for even being aware of some of the issues that I think many in the media, and many in Silicon Valley certainly, have trouble recognizing. There is a ground truth underneath all of this, which is that these models are limited. There is an exceptional level of hype, like we’ve never seen before. That bubble is in many ways in the middle of bursting right now.
DUBNER: Why do you say that?
ZHAO: There have been many papers published on the fact that these generative A.I. models are just about at their limit in terms of training data. To get better, you need something like double the amount of data that has ever been created by humanity. And you’re not going to get that by buying Twitter or by licensing from Reddit or The New York Times or anywhere. You see now recent reports about how Google and Open A.I. are having trouble improving upon their models. It’s common sense: they’re running out of data. And no amount of scraping or licensing will fix that.
Bloomberg News recently reported that Open A.I., Google, and Anthropic have all had trouble releasing their next-generation A.I. models because of this plateauing effect. Some commentators say that A.I. growth overall may be hitting a wall. In response to that, Open A.I. C.E.O. Sam Altman tweeted, “There is no wall.” Ben Zhao is in the “wall” camp.
ZHAO: And then, of course, just the fact that there are very few legitimate revenue-generating applications that will even come close to compensating for the amount of investment that V.C.s and these companies are pouring in. Obviously, I’m biased, doing what I do, but I’ve thought about this problem for quite some time. And honestly, these are great interpolation machines, these are great mimicry machines, but there are only so many things that you can do with them. They are not going to produce entire movies, entire TV shows, entire books at anywhere near the value that humans will actually want to consume. They can disrupt, and they can bring down the value of a bunch of industries, but they are not going to actually generate much revenue in and of themselves. I see that bubble bursting, and so what I say to these students oftentimes is that things will take their course, and you don’t need to push back actively. All you need to do is to not get swept along with the hype. When the tide turns, you will be well-positioned. You will be better positioned than most to come out of it having a clear head and being able to go back to the fundamentals of, why did you go to school? Why did you go to the University of Chicago, and all the education that you’ve undergone, to use your human mind, because it will be shown that humans will be better than A.I. will ever pretend to be.
* * *
It’s easy to talk about the harms posed by artificial intelligence, but let’s not ignore the benefits. That’s where we started this episode, hearing from the economist Erik Brynjolfsson. If you think about something like the medical applications alone, A.I. is plainly a major force; and just to be witness to a revolution of this scale is exciting. Its evolution will continue in ways that, of course, we can’t predict. But as the University of Chicago computer scientist Ben Zhao has been telling us today, A.I. growth may be slowing down — and the law may be creeping closer to some of these companies too. OpenAI and Microsoft are both being sued by The New York Times; Anthropic is fighting claims from Universal Music that it misused copyrighted lyrics; and, related to Zhao’s work, a group of artists is suing Stability AI, Midjourney, and DeviantArt over copyright-infringement and trademark claims. But Zhao says that the argument about A.I. and art is about more than just intellectual-property rights.
ZHAO: Art is interesting when it has intention, when there’s meaning and context. So when A.I. tries to replace that, it has no context and meaning. Art replicated by A.I., generally speaking, loses the point. It is not about automation. I think that is a mistaken analogy that people oftentimes bring up. They say, “Well, what about the horse and buggy and the automobile?” No, this is actually not about that at all. A.I. does not reproduce human art at a faster rate. What A.I. does is it takes past samples of human art, shakes them in a kaleidoscope, and gives you a mixture of what has already existed before.
DUBNER: So when you talk about the scope of the potential problems, everything from the human voice, the face, pieces of art — basically anything ever generated that can be reproduced in some way — it sounds like you are, no offense, a tiny little band of Don Quixotes there in the middle of the country, tilting at these massive global windmills of artificial intelligence and technology overlordship, and the amount of money being invested right now in A.I. firms is really almost unimaginable. They could probably start up a thousand labs like yours within a week to crush you. Not that I’m encouraging that, but I’m curious — on the one hand, you said, well, there is a bubble coming because of, let’s call it data limitations. On the other hand, when there’s an incentive to get something for less, or for nothing, and to turn it into something else that’s profitable in some way — whether for crime, or legitimate-seeming purposes — people are going to do that. And I’m just curious how hopeless or hopeful you may feel about this kind of effort.
ZHAO: What’s interesting about computer security is that it’s not necessarily about numbers. If it’s a brute-force attack, I can run through all your PIN numbers, and it doesn’t matter how ingenious they are, I will eventually come up with the right one. But for many instances, it is not about brute force and resource riches. So yeah, I am hopeful. We’re looking at vulnerabilities that we consider to be fundamental in some of these models, and we’re using them to slow down the machine. I don’t necessarily wake up in the morning thinking, Oh yeah I’m going to topple OpenAI or Google or anything like that. That’s not necessarily the goal. I see this as more of a process in motion. This hype is a storm that will eventually blow over. And how I see my role in this is not so much to necessarily stop the storm. I’m more, if you will, a giant umbrella. I’m trying to cover as many people as possible and shield them from the short-term harm.
DUBNER: What gives you such confidence that the storm will blow over, or that there will be maybe more umbrellas? Other than what you pointed out as the data limitations in the near term — and maybe you know better than all of us, maybe data limitations and computing limitations are such that the fears that many people have will never come true — but it doesn’t seem like momentum is moving in your favor. It seems it’s moving in their favor.
ZHAO: I would actually disagree, but that’s okay. We can have that discussion, right?
DUBNER: Look, you’re the guy that knows stuff. I’m just asking the questions. I don’t know anything about this.
ZHAO: No no, I think this is a great conversation to have, because back in 2022 or early 2023, when I used to talk to journalists, the conversation was very, very different. The conversation was always, “When is A.G.I. coming?” You know, “What industries will be completely useless in a year or two?” It was never the question of like, are we going to get return on investment for these billions and trillions of dollars? Are these applications going to be legit? So even in the year and a half since then, the conversation has changed materially, because the truth has come out. These models are actually having trouble generating any sort of realistic value. I’m not saying that they’re completely useless. There are certain scientific applications or daily applications where it is handy. But it is far, far less than what people had hoped they would be. And so, why do I believe it? Part of this is hubris. I’ve been a professor for 20 years. I’ve been trained, or I’ve been training myself, to believe in myself in a way. Another answer to this question is that it really is irrelevant, because the harms are happening to people in real time. And so it’s not about will we eventually win, or will this happen eventually in the end? It’s the fact that people’s lives are being affected on a daily basis, and if I can make a difference in that, then that is worthwhile in and of itself, regardless of the outcome.
DUBNER: If I were a cynic, or maybe a certain kind of operative, I might think that maybe Ben Zhao is the poison. Maybe, in fact, you’re a bot, talking down the industry, both in intention and in capabilities — and who knows for what reason, maybe you’re even shorting the industry in the markets or something. I doubt that’s true. But, we’ve all learned to be suspicious of just about everybody these days. Where would you say you fall on the spectrum of makers versus hardcore activists, let’s say? Because I think in every realm throughout history, whenever there’s a new technology, there are activists who overreact, and often protest against new technologies in ways that, in retrospect, are revealed to have been either shortsighted or self-interested. That’s a big charge I’m putting on you. Persuade me that you are neither shortsighted nor self-interested, please.
ZHAO: Sure. Very interesting. Okay, let me unpack that a little bit there. The thing that allows me to do the kind of work that I do now, I recognize, is quite a privilege: the position of being a senior tenured professor. Honestly, I don’t have many of the pressures that some of my younger colleagues do.
DUBNER: You have your own lab at the University of Chicago with your wife. When I read about this, I think, how did you get the funding? Did you have some kind of blackmail material on the UChicago budget people?
ZHAO: No, I mean, all of our grants are quite public. And I’m pretty sure that I’m not the most well-funded professor in the department. I run a pretty regular lab. We write a few grants. But it’s nothing earth-shaking. It’s just what we turn our time towards, that’s all. There’s very little that drives me these days outside of just wanting my students to succeed. I don’t have the pressures of needing to establish a reputation or explain to colleagues who I am and why I do what I do. So in that sense, I almost don’t care. In terms of self-interest, none of these products have any money attached to them in any way, shape or form. And I’ve tried very, very hard to keep it that way. There’s no startup, there’s no hidden profit motive or revenue here. So that simplifies things for me.
DUBNER: When you say that you don’t want to commercialize these tools, I assume the University of Chicago is not pressing you to do so?
ZHAO: No. The university always encourages entrepreneurship, they always encourage licensing, but they certainly have no control over what we do or don’t do with our technology. This is sort of the reality of economics and academic research. We as a lab have a stream of Ph.D. students that come through, and we train them. They do research along the way, and then they graduate and then they leave. For things like Fawkes, where this was the idea, here is the tool, here’s some code, we put that out there. But ultimately, we don’t expect to be maintaining that software for years to come. We just don’t have the resources.
DUBNER: That sounds like a shame if you come up with a good tool?
ZHAO: Well, the idea behind academic research is always that if you have a good idea and you demonstrate it, then someone else will carry it across the finish line, whether that’s a startup or a research lab elsewhere. But somebody with resources, who sees that need and understands it, will go ahead and produce that physical tool or make that software and actually maintain it.
DUBNER: Since you’re not going to commercialize these tools or turn them into a firm, let’s say you continue to make tools that continue to be useful, and they scale up and up and up. And let’s say that your tools become an integral part of the shield against villainous technology, let’s just call it. Are you concerned that this will outgrow you and will need to be administered by other academics, or maybe governments and so on?
ZHAO: At a high level, I think that’s great. I think if we get to that point, that will be a very welcome problem to have. We are in the process of exploring perhaps what a nonprofit organization will look like, because that would sort of make some of these questions transparent, it would —
DUBNER: That’s what Elon Musk once said about Open A.I., I believe, correct?
ZHAO: Well, yeah, very different type of nonprofit, I would argue. I’m more interested in being just the first person to walk down a particular path and encouraging others to follow. So I would love it if we were not the only technology in this space. Every time I see one of these other research papers that works to protect human creatives, I applaud all that. In order for A.I. and human creativity to coexist in the future, they have to have a complementary relationship, and what that really means is that A.I. needs human work product, or images, or text, in order to survive. So they need humans, and humans really need to be compensated for this work that they are producing. Otherwise, if human artistry dies out, then A.I. will die out because they’re going to have nothing new to learn on, and they’re just going to get stale and fall apart.
DUBNER: I’m feeling a strong Robin Hood vibe here, stealing from the rich, giving to the poor. But also what you’re describing, your defense mechanism, it’s like you are a bow, but you don’t have an arrow. But if they shoot an arrow at you, then you can take the arrow and shoot it back at them and hit them where it really hurts.
ZHAO: Over the last couple of years, I’ve been practicing lots of fun analogies. Barbed wire is one. The large Doberman in your backyard. One particular funny one is, we’re the hot sauce that you put on your lunch. So if that unscrupulous coworker steals your lunch repeatedly, they get a tummy ache.
DUBNER: But wait a minute — you have to eat your lunch, too. That doesn’t sound very good.
ZHAO: Well, you eat the portion that you know is good and you leave out some stuff that —
DUBNER: Got it. Got it. Can you maybe envision or describe what might be a fair economic solution here, a deal that would let the A.I. models get what they want without the creators being ripped off?
ZHAO: Boy, that’s a bit of a loaded question because honestly, we don’t know. It really comes down to how these models are being used. Ultimately, I think what people want is creative content that’s crafted by humans. In that sense, the fair system would be generative A.I. systems that stayed out of the creative domain, that continue to let human creatives do what they do best, to create really, truly imaginative ideas and visuals and then use generative A.I. for domains where it is more reasonable. For example, conversational chat bots seem like a reasonable use for them, as long as they don’t hallucinate.
DUBNER: I’m just curious why you care about artists. Most people, at least in positions of power, don’t seem to go to bat for people who make stuff. And when I say most people in positions of power, I would certainly include most academic economists. So of all the different labor forces that are being affected by A.I. — there are retail workers, people in manufacturing, medicine, on and on and on — why go to bat for artists?
ZHAO: Certainly, I know what it’s not, because I’m not an artist. I’m not particularly artistic. Some people can say there’s an inkling of creativity in what we do, but it’s not nearly the same. I guess what I will say is creativity is inspiring. Artists are inspiring. Whenever I think back to what I know of art and how I appreciate art, I think back to college. I went to Yale, and I remember many cold Saturday mornings: I would walk out, and there would be piles of snow, and everything would be super-quiet, and I would take a short walk over to the Yale Art Gallery, and it was amazing. I would be able to wander through halls of masterpieces. Nobody there except me and maybe a couple of security guards. It’s always been inspiring to me how people can see the world so differently through the same eyes, through the same physical mechanism. That is how I get a lot of my research done. I try to see the world differently and it gives me ideas. So when I meet artists and talk to artists, to see what they can do, to see the imagination that they have at their disposal, which I see nowhere else, you know, creativity — it’s the best of humanity. What else is there?
That was Ben Zhao. He helps run the SAND Lab at the University of Chicago. You can see a lot of their work on the SAND Lab website. While you’re online, you may also want to check out a new museum scheduled to open this year in Los Angeles; it’s called Dataland, and it’s the world’s first museum devoted to art that’s generated by A.I. Maybe I’ll run into Ben Zhao there someday — and maybe I’ll run into you too. I will definitely be in L.A. soon, on February 13; we’re putting on Freakonomics Radio Live, at the gorgeous Ebell Theater. Tickets are at freakonomics.com/liveshows. I hope to see you there.
* * *
Freakonomics Radio is produced by Stitcher and Renbud Radio. This episode was produced by Theo Jacobs. The Freakonomics Radio Network staff also includes Alina Kulman, Augusta Chapman, Dalvin Aboagye, Eleanor Osborne, Ellen Frankman, Elsa Hernandez, Gabriel Roth, Greg Rippin, Jasmin Klinger, Jeremy Johnston, Jon Schnaars, Morgan Levey, Neal Carruth, Sarah Lilley, and Zack Lapinski. Our theme song is “Mr. Fortune,” by the Hitchhikers; our composer is Luis Guerra.
Sources
- Erik Brynjolfsson, professor of economics at Stanford University
- Ben Zhao, professor of computer science at the University of Chicago
Resources
- “The AI lab waging a guerrilla war over exploitative AI,” by Melissa Heikkilä (MIT Technology Review, 2024)
- “Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models,” by Shawn Shan, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y. Zhao (Cornell University, 2023)
- “Nightshade: Prompt-Specific Poisoning Attacks on Text-to-Image Generative Models,” by Shawn Shan, Wenxin Ding, Josephine Passananti, Stanley Wu, Haitao Zheng, and Ben Y. Zhao (Cornell University, 2023)
- “A Brief History of Artificial Intelligence: What It Is, Where We Are, and Where We Are Going,” by Michael Wooldridge (2021)
Extras
- “Nuclear Power Isn’t Perfect. Is It Good Enough?” by Freakonomics Radio (2022)