Episode Transcript
In last week’s episode with the remarkable Max Tegmark, we covered topics ranging from the origin of the universe to the disturbing reality of slaughter bots — A.I.-enabled drones built to kill. Today, we continue our conversation, discussing how artificial intelligence is already affecting our lives in ways we aren’t even aware of, and what Max, as a co-founder of the Future of Life Institute, an organization that works to prevent technology-driven global catastrophes, is doing to ensure that A.I. becomes a force for good rather than evil.
Max TEGMARK: If we get it right with A.I., it will be the best thing that ever happened because we’re no longer going to be limited by our own relative stupidity and inability to figure stuff out.
Max grew up in Stockholm before he moved to the U.S. to get his Ph.D. at Berkeley. He was a tenured professor at the University of Pennsylvania before joining M.I.T.’s physics department. Today’s episode stands alone; there’s no need to have listened to part one of the conversation. But there’s also no harm.
* * *
Steven LEVITT: One of the scenarios that’s really intriguing is to think about what happens if and when A.I. advances to the level where it has capabilities much greater than humans have. Are you worried about that as a threat or not so much? You’re worried about that, too? Okay.
TEGMARK: I’m both worried and very excited, to tell you the truth. Before we go into the future with superhuman A.I., obviously another very relevant thing for right now is what artificial intelligence is doing to our democracy, because people are hating each other more and more. We’re getting increasingly polarized into our little filter bubbles and this is often blamed on opportunistic politicians or on social media. But if you look at the actual cause behind this, it’s obviously artificial intelligence.
We have these very powerful machine-learning algorithms that analyze users and figure out how to keep them hooked for as long as possible, staring into their rectangles. And these algorithms, even though they were only told to maximize profit in terms of ads, they quickly figured out that the best way to engage people is to piss them off. It’s less important if things are true, more important if people click on them.
It’s had a very dramatic effect, I would say, on our democracies in recent years. People worry too much, maybe still, about the bots coming to kill them, like in silly Hollywood movies. And they should worry more about the bots coming to hack them because that’s already happening. And I’m very interested in how we can solve that problem and restore our democracy to functioning better.
LEVITT: So you’re talking about our incentives. And the incentive has always been to deliver news to people that people would like. And so yellow journalism back a hundred and something years ago was an example where journalists were giving people what they wanted. And I think what you’re saying is technology, A.I., is just really good at figuring out how to exploit people’s weaknesses — to take advantage of the fact that when I get on my phone, I’ll keep on clicking on articles about the Kardashians, but I won’t keep on clicking on articles about how important it is for the European Union to solve Brexit in the right way, for instance. And you believe that is, in some sense, an existential threat, because it undermines democracy and the smooth functioning of society.
TEGMARK: Yes, because I love the idea of democracy and I love the idea of the free market: incentivizing people to efficiently accomplish things that we want accomplished. But democracy works really well if people actually know what’s going on. If people have a very skewed view of what’s happening, then welcome to today’s world.
LEVITT: So one thing I know about you is that you don’t just talk about things. You do something about it. So what are you doing to solve this problem?
TEGMARK: I confess, I made a New Year’s resolution to my wife some years ago that I’m not allowed to whine about things if I don’t actually do something about them. It’s put up or shut up. So when the pandemic hit, I thought, “Okay, I’m gonna spend all this extra time that I have now that all my travel and conferences have been canceled to actually build something, using machine learning, not to analyze and manipulate the consumers of news, but instead analyze the providers of news.”
So, I made some bots that just download millions and millions of articles from the internet, have the machine learning read all these articles, and then provide free tools for users who want to take a different approach to their news consumption. I think about it a lot in the same way as my food consumption. If you read Kahneman, with System One and System Two, right? It’s very clear that you want to use your System Two, your deliberative reasoning, to decide your diet before you go to the supermarket, rather than just impulse-buying random things that come in front of you because you always shop hungry, or something like that.
And if we can make our news diet more like this, that’s very empowering, and we take control over what news we consume by asking, “What do I actually want to learn more about rather than just impulse clicking at the moment on whatever the algorithms have put in front of me?” So what improvethenews.org does, that’s a little free news aggregator we made, is you go in there and then you have these sliders and say, “Okay, I want to see now what my conservative uncle is reading.” And you can put the political slider to the right. “Oh, now I want to see what my university classmate here on the left’s thinking,” and you put the slider over to the left. And it makes it very easy for you to get all these different perspectives of what’s actually out there.
And this is one of many projects that we’re giving away for free, with the idea that machine learning has zero marginal cost. Because it’s just code, right? So if we develop it in a university setting or a nonprofit setting, and it’s just a website that has no ads on it, anyone can use it. And I’m hoping that we can, in this fashion, make it a lot easier for people to get a more nuanced understanding of what’s happening. Because today, it’s too hard. You have to do too much work to go out yourself and try to find all the different takes on the same story.
LEVITT: So I downloaded your Improve the News app and played around and what I found really fascinating about it is — look, I understand right and left. It’s not hard for me to know which media outlets are left and which are right. But what was interesting is you have all these other dimensions like pro-establishment and anti-establishment and thorough versus breezy coverage of topics.
And I actually had a lot of fun just maxing out on the different dimensions and getting a chance to see what news I’m shown in a world in which I say, “I’m an anti-establishment, right-wing person” versus “a pro-establishment left-wing person.” If I really want to do a good job of tailoring my news so I only get really crazy news, your app does a great job of doing that for me.
TEGMARK: Yeah, thank you. The reason that improvethenews.org works this way is because I’m a scientist. Scientists hate when people tell them, “Don’t read this person’s theory because it’s wrong.” This is exactly what Galileo fought so hard against, this kind of censorship where other people are like, “Oh, your feeble mind cannot handle being exposed to Breitbart, or CounterPunch on the left, or whatever.” It’s so insulting. It’s so popular for big companies to always blame the consumer and say, “Consumers’ wants are stupid. They want to have their prejudices confirmed, never want to hear anything they disagree with. So we’re just going to show them that.”
Imagine if Galileo had put out a tweet saying, “Hey, Earth is orbiting the sun, actually. Not the other way around.” The Pope’s fact-checking committee would totally have said, “No, this is wrong. It violates our community guidelines.” That’s why they put him under house arrest, in fact. That hasn’t proven to be very useful in science. David Rand, who’s a professor here at M.I.T., has vindicated this scientific approach to truth-finding. What he found was that people actually are quite interested in being shown other perspectives if it’s done in a respectful way.
LEVITT: So explain the establishment filter on the Improve the News app, because I think it’s something that most people don’t even pay attention to in their news consumption — this idea that some news sources are, as you call it, “pro” meaning part of the establishment, or “critical,” as you call it, meaning anti-establishment.
TEGMARK: I love data and what you can do with it. Since we had downloaded millions of news articles for this project, this wonderful student, Samantha D’Alonzo here at M.I.T., and I decided to see if machine learning could detect bias in a completely data-driven way. We told the machine learning to just try to predict which newspaper had written each article from looking at the words alone. And it was amazingly successful. It discovered that there are, in fact, a few thousand words that are dead giveaways, words that are very emotionally loaded.
For example, if you have an article about abortion and it talks a lot about fetuses, it’s more likely to be from the left. If it talks a lot about unborn babies, it’s more likely to be from the right. If you find an article about Black Lives Matter and it talks a lot about protests, it’s more likely to be from the left. If it talks a lot about riots, it’s more likely to be from the right. But the beauty of it is, I didn’t make this up based on any kind of human intuition.
The machine learning just discovered these are the words you should pay attention to. And then, using them, it took all the hundred newspapers we had and classified them into this two-dimensional space of bias. And we looked at this and we’re like, “Whoa, the x-axis looks exactly like the traditional left/right axis, because there’s Fox on the right and C.N.N. on the left.” And then the other axis it discovered, the up-down axis, which was equally explanatory, turned out to be this establishment business.
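To make the method concrete, here is a minimal sketch of this kind of data-driven bias analysis in Python. It is not the actual pipeline Tegmark and his student built; the outlet names and toy articles below are invented purely for illustration. A classifier learns to predict the outlet from the words alone, the most discriminative words are printed, and each outlet is then projected into a two-dimensional space whose axes can be inspected for left/right and establishment leanings.

```python
# A minimal sketch (not the actual M.I.T. pipeline): predict the outlet of an
# article from its words, list the giveaway words, and embed outlets in a
# two-dimensional space. All outlet names and articles below are invented.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA

# Hypothetical labeled articles: (outlet, text).
articles = [
    ("OutletA", "protests over policing continue as activists march downtown"),
    ("OutletA", "advocates say the debate over the fetus ignores health care"),
    ("OutletB", "riots erupt downtown as unborn babies become a campaign issue"),
    ("OutletB", "officials warn riots threaten small businesses and police"),
    ("OutletC", "report criticizes spending on nuclear weapons and nuclear war risk"),
    ("OutletC", "watchdog questions military budgets and nuclear weapons policy"),
]
outlets = np.array([o for o, _ in articles])
texts = [t for _, t in articles]

# 1) Learn to predict which outlet wrote each article from the words alone.
vec = TfidfVectorizer()
X = vec.fit_transform(texts).toarray()
clf = LogisticRegression(max_iter=1000).fit(X, outlets)

# 2) The words with the largest coefficients are the "dead giveaways."
terms = np.array(vec.get_feature_names_out())
for i, outlet in enumerate(clf.classes_):
    top_words = terms[np.argsort(clf.coef_[i])[-3:]]
    print(outlet, "is signaled by:", list(top_words))

# 3) Average each outlet's article vectors and project them to two dimensions.
#    With real data, one can then check whether the axes line up with
#    left/right and establishment/anti-establishment leanings.
names = sorted(set(outlets))
means = np.vstack([X[outlets == name].mean(axis=0) for name in names])
coords = PCA(n_components=2).fit_transform(means)
for name, (x, y) in zip(names, coords):
    print(f"{name}: x = {x:+.2f}, y = {y:+.2f}")
```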
LEVITT: So what you’re saying is there is a right-wing and a left-wing dimension, which we’re all very attuned to, but the much more subtle one is this establishment versus anti-establishment dimension, which really is a function of how prominent and established the outlet is versus the upstart outlets, and that Fox and C.N.N. are actually very similar on this dimension.
TEGMARK: It’s the one that you don’t notice so much, because the establishment side is basically all the big newspapers, which are very commercially driven. If you look at, for example, articles about military stuff, if the article talks about nuclear weapons or nuclear war, the machine learning would say this is definitely not from the New York Times or from Fox News. It’s from some very small newspaper that is criticizing the fact that we have so many nuclear weapons. And most people, myself included, have, I think, spent many years just not even being aware that it’s there, because, of course, it’s hard to miss the left-right controversy. But this one, it’s easy to forget that it’s even there.
LEVITT: One of the things that I’ve criticized both academics and the media about is that I think if we separate out facts and interpretation, we could make a lot of headway. It’s not that often that people disagree on the facts. I think much more often people disagree on interpretations. So I could imagine an amazing thing your app could do would be to say, “Hey, here are the facts that everybody agrees on.” And to put those facts right front and center, and then say, “And here’s how people who are on the left interpret those facts. Here’s how people on the right interpret them, and here’s how people who are anti-establishment do.” I think that would be an amazing gift to society, if you had the capability of doing that.
TEGMARK: Oh, great minds think alike. This has been one of the most common pieces of user feedback and we’re actually building this.
LEVITT: That will be amazing because I honestly think that most of the confusion and anger comes because when people argue, they’re constantly confounding facts and narratives. People tend to be quite civil if they agree on the facts. So I really think that would be a very powerful tool in increasing the dialogue. Far more important than just about anything else that we could do.
TEGMARK: I love what you’re saying here. Why is it that people can argue passionately about things at a science conference, like whether there are parallel universes or not, and then have a beer together afterwards, whereas that does not happen in politics? It’s exactly because at a science conference you do separate facts from opinions.
In fact, this is a tradition that even goes back to the Middle Ages when they used to have religious debates where you started the debates by articulating the narrative of your opponent in a way that they would agree with. And only when both parties could articulate the other point of view in a way that the other one found was respectful and reasonable, then you got into the meat of the discussion.
* * *
Morgan LEVEY: Hey Steve.
LEVITT: Hey, Morgan.
LEVEY: So since you’re a big fan of experimentation and collecting data, we get a lot of requests from listeners who want to try an experiment in their own life but don’t really know where to begin. So a listener named Albee L. wrote in. He’s a professional surfer. And he says that a change in the sport of big wave surfing has been the popularization of inflatable vests. These are vests that have CO2 cartridges in them, and when a cord is pulled, the vest inflates and brings the surfer to the water’s surface.
These vests have been a game changer for the sport. They prevent a lot of drownings and provide a lot of additional safety for surfers, which Albee acknowledges is really good for the sport. But he does feel like there’s been a trade-off. He thinks that surfers, himself included, used to make smarter decisions before they were wearing the vests. They fell a lot less and they surfed more safely.
Now this is just a hunch of his, but it is true that the vests have emboldened a lot of inexperienced people to try big wave surfing, which is clearly a very dangerous sport. So Albee wants to figure out if the vests are having a larger positive or negative effect on the sport and wants your advice on how he could go about collecting data. Do you have an answer for him?
LEVITT: Let me just start by saying I’ve been on a tirade about teaching data skills in school. And if Albee had been taught the kind of data skills that we should be teaching to people, he’d know exactly what to do next. It is a failure of our education system, which leaves Albee completely unable to think of how to do this.
LEVEY: I should also say it’s not just Albee. We get this question a lot about how to collect data, so we can use this as a model for other people too.
LEVITT: Absolutely. It’s what we should be doing in school, but absent schools, let me step in and try to help a little bit. Okay. So first let’s take on Albee’s question. It actually has a name in economics. It’s called the Peltzman effect after Sam Peltzman, an old Chicago economist. And the idea is that you introduce a device that makes an activity safer. And the direct effect of that device is indeed to make things safer. But the indirect effect is to induce a behavioral response, whereby people start taking more risks because they know that the device will help keep them safe.
Now, whether in total the net effect is to make things safer or more dangerous is actually indeterminate. You need to look at the data to find that out, although I will say, empirically, I don’t know of any very good cases where you can actually see a safety device making things more dangerous on net, although many people sometimes claim that. Okay. So how would you go about doing it? The key to any kind of causal analysis (and what Albee’s after is causality: he wants to know if there’s a causal effect of introducing the CO2 cartridges into surfing) is to find two sets of people who you think would otherwise have had the same kinds of outcomes, except that one of them is exposed to the new cartridges and the other isn’t.
Now I don’t know exactly the right answer, because it helps to know the institutional details. But if I were to start, I would just start with a before and after. So if there’s a particular time when these became available, I would look and see whether the injuries go up or down. I’m assuming that Albee maybe has some data source where he can see injuries. Maybe you go to a particular competition the year before these cartridges came in and then the year after, and compare the number of people who had to withdraw because of injury in the pre-year versus the post-year.
Then the only other question you have to ask yourself is: has anything else changed from before and after? Is it the case that surfboards have changed? Is it the case that now the prize money’s much bigger so people are willing to take more risk because a big win is worth more? But when I approach problems, that’s essentially what I do.
But that’s really, in essence, all you can do. If you don’t have a randomized experiment, which Albee doesn’t have and probably can’t have, you try to do the best you can to manufacture something like a natural experiment. You look for cases where, for no particular reason except luck, one group of people had the cartridges and another didn’t. And in this case, the before and after is probably his best bet.
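For listeners who want to see what this looks like in practice, here is a minimal sketch in Python of the before-and-after comparison Levitt describes. Every number in it is made up for illustration; the idea is simply to compare injury-withdrawal rates at the same competition in the year before and the year after the vests arrived, along with a basic check of whether the difference could just be chance.

```python
# A minimal sketch of the before/after comparison described above.
# All counts are hypothetical; they only illustrate the calculation.
from scipy.stats import chi2_contingency

before = {"entrants": 60, "injury_withdrawals": 4}  # year before vests (made up)
after = {"entrants": 60, "injury_withdrawals": 9}   # year after vests (made up)

rate_before = before["injury_withdrawals"] / before["entrants"]
rate_after = after["injury_withdrawals"] / after["entrants"]
print(f"Injury-withdrawal rate before vests: {rate_before:.1%}")
print(f"Injury-withdrawal rate after vests:  {rate_after:.1%}")

# A simple test of whether the change is larger than chance would explain.
table = [
    [before["injury_withdrawals"], before["entrants"] - before["injury_withdrawals"]],
    [after["injury_withdrawals"], after["entrants"] - after["injury_withdrawals"]],
]
chi2, p_value, _, _ = chi2_contingency(table)
print(f"p-value: {p_value:.2f}")
# Even a convincing difference isn't causal proof: anything else that changed
# between the two years (boards, prize money, who enters) can confound it.
```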
LEVEY: So it sounds like you’re really saying his best bet is to look for some comparison and try to find as little change as possible in that comparison, other than wearing the vest or not wearing the vest.
LEVITT: Yeah. And that is essentially the crux of what anybody wants to do in a world in which they can’t run their own randomized experiment.
LEVEY: So I should also say that you talk about natural experiments quite extensively in our episode with Dr. Bapu Jena, who is the host of the show Freakonomics M.D. That would be a good resource for listeners who are trying to collect and analyze data in their own lives. Thank you so much, Albee, for writing in; we hope that provides some clarity for you. If you have a question for us, you can reach us at PIMA@freakonomics.com. Steve and I read every email that gets sent, so we look forward to reading yours. Thanks so much.
The whole reason I invited Max to be a guest on this show is that I had read the incredibly interesting things he had written about the long-term implications of A.I. for society. One and a half podcast episodes later, we haven’t even gotten to the topic yet. Well, this is my last chance, and I promise you, I’m not letting him out of here until we’ve covered that topic.
LEVITT: This has been an amazing discussion of the potential benefits and perils of A.I. in the short run. I just want to get your longer-term perspective. So there is likely to come a time when A.I. goes beyond the capabilities of humans. Obviously, it could be for good. It could be for destruction, but what’s your guess about what a future holds in which human intelligence is in many ways subservient to that of A.I.?
TEGMARK: What I think is pretty clear is that artificial intelligence will become the most powerful technology ever. Because intelligence is all about information processing, and there’s no law of nature that says that it can’t be done better than in our warm, wet, biological brains. And that means that A.I. will eventually become either the best thing or the worst thing ever to happen to humanity. So the really interesting question for me actually isn’t to put odds on which way it’s going to go, but to ask: what can we do now?
LEVITT: To influence it. How do we influence it?
TEGMARK: To influence it. Yeah.
LEVITT: So what are you doing to influence the long-term trajectory of whether A.I. is the destruction of humankind or the greatest benefit we’ve ever had?
TEGMARK: I co-founded this nonprofit called The Future of Life Institute and together with a bunch of wonderful scientists, tech people, and others, we are trying to educate on these issues and above all engage the people who are actually building these technologies to think about the social implications of what they do. I mentioned how biologists have done that better than people have really in any other scientific field and they deserve our gratitude for it. And it’s really inspiring to see how the same thing is happening now in artificial intelligence where there’s a lot of talk about A.I. ethics, A.I. safety, and so on.
The key thing to remember is that this is not a depressing topic like nuclear weapons, where either we screw up in a big way or nothing happens. This is actually something where we could on one hand screw up spectacularly, or it could be this incredibly inspiring future. Because everything I love about civilization is a product of intelligence — human intelligence.
So obviously, if we can amplify that with artificial intelligence to figure out how to cure diseases, to lift everybody out of poverty, and help life flourish, not just for the next election cycle, but for billions of years, that is such an incredibly exciting opportunity that we have. And it comes back to this idea of thinking of humanity as a child. We’re very early still in what can become an incredibly long and rich life in the cosmos. If we get it right with A.I., it will be the best thing that ever happened, because we’re no longer going to be limited by our own relative stupidity and inability to figure stuff out. We’ll just be limited by the laws of physics, which is what A.I. is going to be up against.
LEVITT: So you just described a future in which A.I. is an incredibly smart, effective tool, but doesn’t mind being a slave to humans. Are you at all concerned about the possibility that if you create something that is far more talented than humans, it’s not going to like being our slave?
TEGMARK: Well, of course that’s concern number one. If you ask: why is it that we humans have more power on the planet than tigers? It’s not because we have bigger biceps. It’s because we’re smarter. So obviously, intelligence gives power. There are two approaches to coexisting with more intelligent beings. One is this slave approach, where we try to lock our future A.I.s in some sort of fictitious box, enslave them, and force them to do our bidding. I don’t particularly like that approach, both because I think it’s ethically very sketchy, just as slavery has been in the past, and also because it’s very likely to fail. If a bunch of five-year-olds tried to lock the world’s smartest scientists in a box and force them to invent new technologies, they would probably break out too, right?
There is a much better way, which is the way you coexisted with more intelligent beings when you were one year old: your mommy and daddy. And why did that work out? Because their goals were aligned with your goals. They didn’t take care of you because you forced them to, but because they wanted to. This is a technical challenge for nerds like myself: how do we make A.I. actually understand human values, adopt them, and retain them as it gets ever smarter, so that A.I. helps us rather than harms us?
And this is actually something that anyone listening to this who’s interested in technology and computer science can go work on. We’ve just launched a big grants competition to encourage grad students and postdocs, for example, to work on this kind of existential A.I. safety. And it might take 30 years to find those technical solutions. So we should start working on it now, not the night before some people on too much Red Bull switch on a superintelligence.
LEVITT: So you strike me as someone who’s a rule breaker, a free spirit on all sorts of dimensions. In your book you talk about how you would post your preprints at 12:01 A.M. I think that’s such a good story about incentives and about creativity. Could you tell that story?
TEGMARK: I discovered that this preprint server, arXiv.org, that everybody gets their physics news from, had this system where, if you were the very first person to submit a paper after their daily deadline, you would always be number one on the list of stories for that day. So I would set my alarm and make sure I was first. And then later on, some people did some research and found that the papers that were first on that list got way more attention than others. So now they’ve actually very recently reversed it, so that you have to be last to be first. But don’t tell anyone, because my new trick won’t work.
LEVITT: I think that’s your economics training coming in. That sounds very much like the thinking of an economist: looking at the incentives that are laid out by the server, which says that if you log into the server at a certain time it will give you more cites, and figuring out how to do that. I bet most of your colleagues don’t think that way, which gives you an advantage.
TEGMARK: Well, more generally I am a very meta kind of person. Whatever I’m doing I love to take yet one more step back and see, hey, is there some way of even changing the process by which I do things, or a better way of selecting what to work on to have more positive impact?
LEVITT: A lot of what’s so good about you is you just don’t follow the rules and that’s led you to amazing places. Do you think we should encourage more of that? Is there too much conformity being demanded by society?
TEGMARK: I think I would like a little bit more nonconformity than we have right now. I think science’s successes have shown the value of nonconformity. Science is hard. And the reason we need this nonconformity and diversity of scientific thought is exactly because you can’t predict in advance which ideas are going to work out and which ones are going to flop. And if you let a lot of people chase their ideas for what they think is likely to be true, we’re more likely to actually find the truth.
But can I end on an optimistic note? It’s so easy to get overcome by gloom when reading the news and thinking about all the things that could go wrong. If you come back to this metaphor of humanity as a young child, it’s not enough to just tell them to be careful, and not fall off cliffs, and tell them about all the risks; you also have to encourage them to dream big. You really need the existential hope and the optimism. Humanity itself needs to dream big.
And I would encourage everyone listening to this, spend some time next time you’re having drinks with your friends, asking them about a really exciting high-tech future that they would love to live in. And try to flesh it out in a lot of detail. What would that world be like? What are the amazing things we can do with advanced artificial intelligence and advanced synthetic biology and so on and so forth? Because the more we can articulate this positive vision, the more likely we are to get to live in that future.
Like Max, I’ve always thought big, whether it’s overhauling the way we teach math, saving the Amazon rainforest, or making the P.G.A. Tour as a golfer. And I’ve pretty much, well actually always, failed to reach my goals. But the thing is, I have a lot more fun chasing big goals, and I usually accomplish something along the way, even if it is a lot less than I hoped for. By the time I realize that I’ve failed, there’s always some other crazy, impossible, even more tantalizing goal to go after. There’s no shame in failing. And don’t let anyone convince you otherwise. If you’ve enjoyed this conversation, check out Max’s bestselling book. It’s entitled Life 3.0: Being Human in the Age of Artificial Intelligence. Also, check out the Freakonomics Radio episode number 477, entitled “Why Is U.S. Media So Negative?” — it pairs well with our discussion of Max’s Improve the News app.
Just one last thing: Scientists often lack information about public opinion that would help them navigate the ethical questions posed by new technologies. So my team at the Center for R.I.S.C., we built a site to gauge public opinion on the ethics of tomorrow’s tech. Visit techethics.vote to make your voice heard.
* * *
People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and Freakonomics M.D. This show is produced by Stitcher and Renbud Radio. Morgan Levey is our producer and Jasmin Klinger is our engineer. Our staff also includes Alison Craiglow, Greg Rippin, Emma Tyrrell, Lyric Bowditch, Jacob Clemente, and Stephen Dubner. Our theme music was composed by Luis Guerra. To listen ad-free, subscribe to Stitcher Premium. We can be reached at pima@freakonomics.com, that’s P-I-M-A@freakonomics.com. Thanks for listening.
TEGMARK: I want to make sure I don’t get run over by a bus. I want to make sure I don’t get murdered — a terrible strategy for career planning. Right?
Sources
- Max Tegmark, professor of physics at the Massachusetts Institute of Technology.
Extras
- “Improve the News,” by Max Tegmark.
- “Why Is U.S. Media So Negative? (Ep. 477),” by Freakonomics Radio (2021).