
Episode Transcript

Steven LEVITT: It’s been more than 30 years since I first read a book called The Evolution of Cooperation by Robert Axelrod. The only reason I even read the book was that it was assigned reading for an economics course I was taking and I was a diligent student. So I always did the assigned reading. But I was never inspired by the material until, that is, I read The Evolution of Cooperation. It’s a book about game theory, specifically, something called the prisoner’s dilemma. It captured my imagination. It changed the way I thought about the world. It made me think for the first time, ‘Wow, maybe I should do academic research.’ I’m so excited today to be talking to its author, political scientist Robert Axelrod, for the very first time.

Welcome to People I (Mostly) Admire, with Steve Levitt.

LEVITT: The prisoner’s dilemma is one of those things that seems really simple when you first hear about it, but there’s much, much more to it than most people realize. I’m not exaggerating when I say that the prisoner’s dilemma has become a guiding principle for the way I live my life. But, I know from trying to teach it to my students that it’s actually a really hard idea to wrap your head around — it’s very counterintuitive. I’m hoping that, with more than three decades of experience, Robert Axelrod will be way better than me at explaining it. I am a little worried though because if there’s one thing I’ve learned from this podcast, it’s that many experts have a shockingly hard time explaining to regular people what they actually do. So, let’s see how it goes. 

*      *      *

LEVITT: Roughly 40 years ago, you had the idea to run a little tournament for 13 academic game theorists. Did you ever imagine at the time that this would launch a research agenda that would get over 50,000 academic citations and produce a best-selling book?

Robert AXELROD: No, I just did it for fun at first. 

LEVITT: And just to put into perspective — I’ve had a pretty successful academic career. And this one little idea of yours has gotten more citations than all of my research papers combined. And let me just say, I read your book in college. And it was one of the few things I ever read for a class that blew my mind. But to even start talking about what I found so exciting in your book, we first have to give folks a little crash course in game theory, and specifically, your little tournament focused on something called the prisoner’s dilemma, which I’m sure most listeners have heard of, but probably haven’t thought very deeply about. 

AXELROD: Well, let me use the original example where two criminals are arrested by the police. So, the police separate them and say to each one, “If you confess, we’ll give you a lighter charge. And if you don’t, then we’ll punish both of you.” And the idea is that each one of them has an incentive to defect by turning state’s evidence. But if they both do that then the police don’t need either one of them. If they cooperated with each other and kept their mouths shut, they would be better off and they would just get a lighter charge. And so, they are better off both cooperating, but each has an incentive to double-cross the other. 

LEVITT: Okay. Robert, can I tell you honestly? That sounded a lot like the explanation I give. I think that you and I, buried in this area, have a really hard time getting far enough away to explain it. 

Morgan LEVEY: Hey, listeners, Morgan here, the show’s producer. Steve’s right, I think he and Robert aren’t explaining the prisoner’s dilemma very well. That’s because it’s really hard. So as a nonacademic, let me try explaining it. The prisoner’s dilemma is a hypothetical scenario in game theory. So, pretend Steve and I are both in police custody, we’ve robbed a bank, but the evidence the police have against us is weak. The police put us in separate rooms and they come and talk to me. They explain there are four possible outcomes to our situation and my punishment will depend on what I choose to do and what Steve, separately, chooses to do. Two key pieces of information you need to know: I don’t get to talk to Steve and I only have two courses of action, rat him out or stay silent. So, scenario one: I rat him out and he says nothing. In this case, I’m defecting from my partnership with him and I’ll get to walk free. He’ll go to jail for 20 years. Scenario two: Neither of us rats out the other. We are cooperating with each other and due to the weak evidence, we’re only sentenced to one year of jail time. In the third scenario, we both rat each other out, we both defect from our partnership and both go to jail for 10 years, less than the 20-year maximum, as a reward for agreeing to testify against each other. And in the last scenario, I say nothing and he rats me out. I go to jail for 20 years and he walks free. The police leave and I’m left to think through my options. So, what’s the best course of action for me?

AXELROD: Each player is better off defecting, double-crossing the other one, acting on their own interests only. But if both of them do that, they do worse than they could have accomplished if they had cooperated. So, from a game theory perspective, defection is what’s called a dominant strategy, meaning it’s the best thing to do, no matter what the other guy does. If the other guy cooperates, then you can exploit their cooperation by defecting. And if the other side defects, then you’d be a sucker to cooperate. And so, if you play the game just once, both players say to themselves, “No matter what the other guy does, I’m better off defecting.” And that leads to a non-optimal outcome for both of them. That’s why it’s called a dilemma.
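To make the dilemma concrete, here is a minimal sketch (illustrative only, not from the episode) of the jail-time payoffs Morgan described, with a check that ratting out is the dominant strategy:

```python
# Years of jail from Morgan's example, indexed by (my_choice, partner_choice).
# Lower is better. "rat" = defect, "silent" = cooperate.
JAIL = {
    ("rat", "silent"):    (0, 20),  # I walk free; my partner gets 20 years
    ("silent", "silent"): (1, 1),   # we both stay quiet: one year each
    ("rat", "rat"):       (10, 10), # we both rat: ten years each
    ("silent", "rat"):    (20, 0),  # I stay quiet and get made the sucker
}

# Whatever my partner does, ratting leaves me with less jail time,
# which is what makes defection a dominant strategy:
for partner in ("silent", "rat"):
    assert JAIL[("rat", partner)][0] < JAIL[("silent", partner)][0]
```

Both players reason this way, so both rat and serve 10 years each, even though mutual silence would have cost them only one year each. That is the trap.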

LEVITT: Exactly. So, the best thing we can do collectively is cooperate. But we can’t do that because our private incentives get in the way. And that leaves us stuck at the only other outcome, which is we both defect, and we both get a pretty bad payoff, but not as bad as if we were made the sucker by the other player. So, that’s the setup of the game. You’re basically caught in a trap if you play it once. But the interesting thing is that everything changes if you play the game over and over. So, can you explain that?

AXELROD: The original analysis that I read when I was 17 in high school said, “If you know when the game is going to end, you defect right from the beginning. If you don’t know when the game is going to end, or if there’s some indefinite future, then all kinds of interesting possibilities arise.” For example, you might try to cooperate and see if the other guy does. And you might try to cooperate twice and then respond to the other guy’s defection by defecting five times. There are all kinds of possibilities of using the history of the game, when you’re in the middle of it, to decide what to do in order to try to figure out how you can maximize your score. What you really want to do is get the other guy to cooperate. So, you want to elicit cooperation.

LEVITT: So, the key thing about playing this game over and over is that you use the future rounds as a way to punish someone if they defect on you now. So, if you only play once, you have no method of punishment. But if you play it over and over, you can keep someone honest by having a strategy where I’ll cooperate with you as long as you cooperate. But if you screw me over, then I’m going to punish you for that. I’m going to stop cooperating. And so, still, with everybody completely selfish, completely self-interested, you can find what game theorists call an equilibrium in which, because of the threat of future punishment, we’re able to get the really good payoff that comes with cooperation. 

AXELROD: I call that the shadow of the future, the idea that the future can affect what you’re doing now. 

LEVITT: One of the fascinating things about the repeated prisoner’s dilemma game is that there’s no one obvious best strategy for how you should play it. It really depends on your expectations about how your opponent will play and about their expectations about how you will play. So, theory doesn’t give us an answer. And so, you went out and you decided to gather data. 

AXELROD: I thought I should ask people who are familiar with the game and familiar with game theory how they would play the game, precisely enough that you could write a computer program to implement their advice. That reminded me of computer tournaments for chess, where each player is trying to devise a computer-program strategy to play as well as possible. And so, I reached out to about a dozen academics, mostly ones who had actually worked with the prisoner’s dilemma and published papers using it, and said, “How would you play, assuming that you’re going to have this indefinite future and the other player is pretty smart too, but you’re both selfish? What would you do?”

LEVITT: So, your tournament had 13 participants, I think. Did you have to reach out to many more game theorists than that to get them to play? Or was everyone eager to play your game? 

AXELROD: Virtually everybody was eager to give it a try because they all thought they knew best.

LEVITT: Yeah, I was going to say…  

AXELROD: And I promised a trophy for the one that scored highest.

LEVITT: So, you reached out to them, and you said, what you have to provide to me is a computer program, an algorithm — basically, you build a little bot. And your bot is going to play the prisoner’s dilemma against all of these other game theorists. You play this for, like, hundreds of iterations of the individual game. And the winner is going to be the one whose bot, at the end of a round-robin tournament, playing against all the other entries, has scored the most total points. 

AXELROD: Right. Just one thing — it’s important to avoid words like “opponent” and “trying to defeat the other guy,” because that evokes zero-sum thinking. And this is not a zero-sum game. We all tend to fall into zero-sum thinking whenever there’s any kind of rivalry or anything that looks like competition, because zero-sum thinking is the easiest way to do it. And it’s wrong and self-defeating in many contexts — in fact, almost all contexts except, say, sports and all-out war.

LEVITT: I stand corrected. I used bad language and probably I’m going to use bad language again. And you correct me every time I do that. So, one of the things I find interesting about the setup of your tournament is it’s not that these game theorists are there in the room playing against each other, adjusting their strategies in real-time. They have to commit to something ahead of time. And it feels a little bit like a genetic code. It’s like, nature sets off a species with a genetic code. And then, it competes in different environments against different players over time. I want to make sure I don’t say competitors or opponents or anything like that. I think of the prisoner’s dilemma in game theory as being basically economic concepts, but really, an economist would never approach this problem the way you have. They would never make it algorithmic and set up strategies to compete. They would always think of bringing individual humans into a lab and playing the game against each other. And I think one of the reasons that your results have had such a broad appeal is the prisoner’s dilemma is really one of those rare areas of social science that’s intrinsically interdisciplinary. It spans economics, evolutionary biology, political science, math, even. So, I really find that interesting, that the perspective you brought to it — and you’re not an economist — is totally different than the perspective that an economist would have brought to it. 

AXELROD: The economists would have also, possibly, studied what the equilibrium possibilities are, and in fact they have. In other words, if two players are playing given strategies, under what conditions would neither have an incentive to change their strategy? And the problem with this approach, in this case, is that there are a lot of different strategies that would be in equilibrium. So, there’s the problem of how you distinguish among those. And the tournament provides one way to analyze what works well in a variety of settings.
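To give a sense of the mechanics (a sketch under stated assumptions, not Axelrod’s actual tournament code), a round-robin scorer can be written in a few lines. It uses the point values from Axelrod’s tournament: 3 each for mutual cooperation, 1 each for mutual defection, and 5 for defecting against a cooperator, who gets 0. The 200-round match length also comes from the original tournament; everything else here is a placeholder.

```python
# A toy round-robin iterated-prisoner's-dilemma tournament (illustrative sketch).
# A strategy is a function from (my_history, their_history) to "C" or "D".
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("D", "C"): (5, 0), ("C", "D"): (0, 5)}

def play_match(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def round_robin(strategies):
    # Every entry plays every other entry; the highest total score wins.
    totals = dict.fromkeys(strategies, 0)
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            score_a, score_b = play_match(strategies[a], strategies[b])
            totals[a] += score_a
            totals[b] += score_b
    return totals
```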

LEVITT: So, before you get into the findings, the insights, how about we start with the simplest possible strategies? One strategy would be to always cooperate. And it’s pretty easy to see that this strategy could fare disastrously against many opponents because —

AXELROD: No, not opponents. 

LEVITT: Okay. My mistake. It’s going to take a lot of chiding and discipline to get me off of that language. But explain why this is going to be a terrible strategy. 

AXELROD: If the other player tries a defection and finds that you cooperate anyway, they’ll repeat that until they’re really convinced that you always cooperate no matter what. Well, then they might as well always defect, no matter what. And so, you’ll always get the lowest possible payoff once this gets going.

LEVITT: So, the other really simple strategy is to never cooperate. Every round you do what gives you the highest payoff, which is to not cooperate. So, you play the repeated prisoner’s dilemma just like you would play a one-shot game. Did any experts submit that strategy?

AXELROD: No, I think they appreciated that it was just going to get them into a situation where the other player eventually learns that they’re never cooperating. So, why should the other player cooperate? And that’ll lead to a situation where both sides are always defecting, and that’s not a very good payoff.
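In the notation of the tournament sketch above, these two degenerate strategies are one-liners (illustrative only; as Axelrod notes, nobody actually submitted always-defect):

```python
def always_cooperate(my_history, their_history):
    return "C"  # cooperate no matter what, and risk being exploited forever

def always_defect(my_history, their_history):
    return "D"  # play every round as if it were a one-shot game
```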

LEVITT: So, what was the simplest strategy that anyone submitted? 

AXELROD: Well, the simplest one is called tit-for-tat. It cooperates on the first move and then does whatever the other guy did on the previous move. 

LEVITT: Okay. So, it’s two lines of code. In layperson’s terms, what is tit-for-tat?

AXELROD: It’s reciprocity. It’s saying, “I’m willing to cooperate if you are, and if you’re not, I’m going to defect.” And so, I’m just going to echo what you do and maybe that’ll get you to realize that you’re better off cooperating with me because then I’ll cooperate the next time. And then, we could both do pretty well.
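Levitt’s “two lines of code” is barely an exaggeration. In the same illustrative notation (a sketch, not Rapoport’s actual submission):

```python
def tit_for_tat(my_history, their_history):
    # Cooperate on the first move; thereafter echo the other player's last move.
    return "C" if not their_history else their_history[-1]
```

Passing {"tit_for_tat": tit_for_tat, "always_defect": always_defect} and friends to the round_robin sketch above reproduces the flavor of the tournament.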

LEVITT: Okay. But a seeming weakness of tit-for-tat is that it’s very reactive. It never probes the other player to see whether he or she is a pushover by instigating a noncooperative action. And it’s also super forgiving. 

AXELROD: But the forgiveness is quite limited. If you defect once, I’ll forgive. But I won’t keep forgiving no matter what you do. 

LEVITT: Exactly. So, I forgive you as long as you’re nice to me, but it is a very understanding strategy, right? Because someone I’m playing against can defect 50 times in a row. I will defect also. But as soon as that person says, “Okay, now it’s time to cooperate.” Tit-for-tat says, “Okay, great. I’ll cooperate too,” which is quite different than human nature. Most humans, if they’ve been defected against 50 times, and then someone’s nice to them once, [they] are not going to be so forgiving as tit-for-tat. That’s my intuition.

AXELROD: You may be right. But it still pays to do that, to see if you can’t get out of this rut that you’re in. And you said that it was a very understanding strategy. I would say the opposite. It has very limited cognitive ability. It could remember one move and react — that’s all, the calculations are trivial. 

LEVITT: So, there are strategies similar to tit-for-tat that are much less forgiving. And they have a great name — economists call this the grim trigger strategy, a massive-retaliation strategy. Can you explain why this might seem like it could be pretty good in the context of the repeated prisoner’s dilemma?

AXELROD: One of the submissions was maximal punishment so that if you defected even once, I’ll never cooperate again. And that would seem to give you a really strong incentive to cooperate. However, the trouble is in this context, you can’t communicate that. And so, if the other player does any exploring and maybe defects once, it’s all over. And then you’ll both get the lower score for always defecting. So, while this massive retaliation seems like a good idea because it gives the biggest incentive for the other guy to cooperate, in this context where you can’t talk and you can’t publicly commit to it, it’s a very ineffective, dangerous strategy.
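In the same illustrative notation, the maximal-punishment entry Axelrod describes might look like this (again a sketch, not the actual submission):

```python
def grim_trigger(my_history, their_history):
    # Cooperate until the other player defects even once; then defect forever.
    return "D" if "D" in their_history else "C"
```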

LEVITT: Okay. So, the strategies we’ve talked about so far, the tit-for-tat or this grim trigger strategy, both of them have the characteristic that they never defect first. And I think you gave a name to that. You call those strategies “nice strategies.”

AXELROD: Yeah, I couldn’t find another word in English that says, “Don’t be the first to cause trouble,” or something like that — so I just called it “nice.”

LEVITT: So, then there’s another set of strategies that are not nice. So, can you describe an example of a not-nice strategy? 

AXELROD: Well, a strategy might start with a defection. And then, if the other side on the next move cooperated anyway, they might defect again and wait until the other side defected before they decide to change their mind. And so, this would be sort of exploratory and see what they can get away with.
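A “not nice” prober of the kind Axelrod just described could be sketched like this (the name and details are hypothetical, not a real entry):

```python
def prober(my_history, their_history):
    # Open with a defection and keep defecting while the other side
    # cooperates anyway; change your mind once they finally retaliate.
    if not my_history:
        return "D"
    return "C" if "D" in their_history else "D"
```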

LEVITT: Yeah. So, there’s a whole bunch of strategies that capture the notion that I’m willing to cooperate if you show me that you’re tough, but until you show me you’re tough, I’m going to exploit you as much as I can. 

AXELROD: Yeah. Let me say — when I set this up, with the chess-playing analogy in mind, I thought that the best strategy would be pretty complicated, because it would have to take all these considerations into account. So, for example, several players did variations to try to learn what the other guy’s doing, and then guess what their strategy is, and then do an analysis to figure out what’s the best strategy to use from now on if your beliefs are correct. And even better is if it keeps updating its beliefs based on what the other guy does. So, it’s always modifying its beliefs based on new experience. And that’s a pretty complicated thing to do. But I imagined that complicated things might work well because they could take a lot of aspects into account.

LEVITT: So we’ve talked about a handful of different strategies, but before we reveal what strategy actually won the tournament, I want to pause and give listeners a chance to think for themselves. What kind of strategy do you think is going to work? Do you think it’s going to be a simple strategy or a complicated one? Is it going to be a tough strategy or one that’s nice? Or maybe it’s none of the strategies we’ve even talked about at all, something totally different. What I do know is that before I knew the results, I had a very strong opinion about which strategy would win, and it turned out to be completely wrong.

LEVITT: Okay, so Robert, tell us who won the tournament.

AXELROD: I calculated every strategy playing every other strategy. And the one that got the highest score was the simplest one submitted, the tit-for-tat strategy. 

LEVITT: Tit for tat took home the trophy. 

AXELROD: Took home the trophy.

LEVITT: The two-line program took home the trophy — who submitted that? 

AXELROD: Anatol Rapoport was a professor of peace research. And he submitted it, but he warned me in a letter that he really wouldn’t recommend it for public use because there are so many other complicating factors. Nevertheless, in this context, it does really well.

LEVITT: And of the ones that did well and the ones that did poorly — what characteristics led to success or failure? 

AXELROD: Well, it turns out the most important characteristic is to be nice, that is to say, never be the first to defect. Just keep cooperating as long as the other guy does. And that means that you don’t start trouble. Another characteristic is that it pays to be forgiving. It pays to not keep defecting for a long time after the other guy did. And of course, tit-for-tat is maximally forgiving, but only in the short run. One more characteristic is provokability. You should be provokable. In other words, you can’t afford to be a sucker all the time. You have to get mad and defect when the other guy defects, because that teaches them a lesson. And if you’re not provokable, then you’ll be a sucker much too often. The effective strategies are nice, forgiving, and provokable.

LEVITT: Which seem like they’re at odds, but they’re not. Because niceness is about, “Am I ever a jerk to you without you asking me to be a jerk?” And being provokable means, “Look, if you’re a jerk to me, I’m going to come after you.” And those of course are two different characteristics of the strategy. So, when you saw the results and you saw that tit-for-tat won, were you in shock, mildly surprised, not surprised at all? 

AXELROD: Well, I was pretty surprised. I expected, as in chess, that it would take real sophistication to do well in this game. So, I was really surprised when the simplest of all of them did the best. And then I wondered, is this a fluke? And so, I thought that it’d be good to get a lot more entries, to see what a lot of other strategies might be and what would happen in this much bigger context. What I did then was advertise in computer hobbyist magazines, with just a one-page explanation of the prisoner’s dilemma and how to send in an entry. With that, I got 62 entries, including some kids and some more professors. And they came from a lot of different disciplines. And that was delightful.

LEVITT: And it’s also true — the first time you played the tournament, nobody knew what the results would be. And for the second tournament —

AXELROD: I told everybody. The advertisement soliciting entries said that tit-for-tat did best.

LEVITT: And so, you ran the second tournament now with 60-something players. How did the strategies look different? 

AXELROD: Anatol Rapoport submitted the same one, the tit-for-tat. 

LEVITT: Do you think he was confident that tit-for-tat would have a good showing the second time around? 

AXELROD: I certainly wasn’t confident, and I doubt that he was. We knew that the other entrants were serious: some had had a lot of time to think and work on this, and others, from their professions, had a lot of experience with this sort of thing and with game-theory analysis. I suspect that he thought, as I did, that something else might do better.

LEVITT: Especially because people knew that tit-for-tat was the winner. And so, they were designing strategies that were gunning for it: “I have to beat tit-for-tat. And so, I put bells and whistles in my strategy that will be especially effective in fighting tit-for-tat.”

AXELROD: But they also knew there would be a variety of others and tit-for-tat would be just one of the players that they would meet. And so, many of them tried to exploit other strategies to try to find the weaknesses in other possible entries. So, they were out-guessing each other, too.

LEVITT: This is classic game theory — one set of people say, “I have to beat tit-for-tat because that’s what won.” A second set of more sophisticated players say, “I know a bunch of people are going to be out trying to beat tit-for-tat, so what I need to do is build a strategy that will exploit those strategies that are trying to beat tit-for-tat.” 

AXELROD: But again, the word exploit—

LEVITT: Exactly. Yeah. So, I’m — here, I fall into my exploit trap. But yeah.

AXELROD: It’s really hard to avoid. It’s really very hard to avoid. 

LEVITT: I cannot get out of my head this context of a lifetime of viewing interactions as being competitive and exploitive. And so, I make the mistake over and over of not talking about this with the right language. It’s just one more good point that comes out of the research that you’ve done. 

LEVITT: So, tell us the results of the second tournament. Were they radically different than in the first tournament?

AXELROD: I nearly fell off my chair when I saw it because tit-for-tat won again. And wow — this was one of the high points of my research career when I added up the scores and I thought, I’m on to something here. This is not just a fluke. This is worth looking into and finding out how that happened.

You’re listening to People I (Mostly) Admire with Steve Levitt and his conversation with Robert Axelrod. After this break, they’ll return to talk about real-world applications of the prisoner’s dilemma. 

*      *      *

LEVITT: Morgan, what do we have on tap today?

LEVEY: Hey Steve. So, the U.K. has recently begun trials where they deliberately infect people with Covid in order to study the disease. And we had a couple of listeners, Brian B. and Terrell W., write in to tell us about this, since you’ve been a big proponent of human challenge trials, which is what these studies are called. You’ve talked about them on the show a couple of times: you and Dr. Moncef Slaoui, the former head of Operation Warp Speed, had a disagreement about their use for Covid-vaccine development, while Dr. Bapu Jena, who is an economist and also a host on the Freakonomics Radio Network, actually agreed with you about their potential. So how do you feel now that they’re happening in the U.K.?

LEVITT: Well, I think it’s great. 

LEVEY: Do you know if they’re using an incentive to get people to sign up?

LEVITT: So they are paying these young volunteers about $8,000 a person, which is not a lot, but it’s so much less than they could pay. What I find so frustrating about it is, look, there’ve been 4.5 million deaths from Covid and 200 million infections, and yet the medical ethicists are up in arms just because the infections are being done intentionally. When the trade-off is that if we had learned from the beginning about how the disease spread, or maybe about immunity, or gotten the vaccines out sooner, it could have saved 10,000 lives, a hundred thousand lives, a million lives.

LEVEY: So, going along the Covid theme, a couple of our listeners, Emily and Sean wrote in to ask if you knew anything about the effectiveness of Covid-vaccine lotteries. You talked about vaccine lotteries in our episode with Dambisa Moyo since several states tried lotteries as incentives to vaccinate their populations over the summer.

LEVITT: So sadly, Morgan, you know how big an advocate I’ve been for these lotteries, but the data suggest they haven’t worked very well. Now, the one exception was the first one, the Ohio lottery. Because it was first, it actually got an enormous amount of free publicity from the media. And it worked really well. The estimates I did, just looking at the data myself, suggested that maybe 60,000 extra people got vaccinated. That’s about $50 per person on the margin for each extra vaccination. And that is such a bargain. My estimates are, again, very back-of-the-envelope, but maybe every extra vaccination has an externality — a benefit to society — of about $10,000. So, the Ohio lottery was a big success. All the states that followed? Zero evidence that they had any impact at all on vaccinations.

LEVEY: Do you have any other ideas for incentivizing people to get the Covid vaccine?

LEVITT: I’ll tell you, a couple of listeners wrote in with what is really an obvious, but I think really great, idea. They just said, “Why don’t the insurers refuse to pay the hospital costs of patients who get Covid if they’re not vaccinated?” And honestly, it’s super simple, and maybe it’s completely politically unviable. Maybe it’s even illegal. I’m not sure, but if you were willing to make some people mad, that is the kind of program that would dramatically impact vaccination rates in the U.S.

LEVEY: Emily, Sean, Brian, and Terrell. Thanks so much for writing in. If you have a question for us, our email is pima@freakonomics.com.  It’s an acronym for our show. Steve and I both read every email that’s sent. So, we look forward to reading yours. Thanks.

*      *      *

LEVITT: I hadn’t expected that it would take so long to talk about the tournaments themselves, which is frustrating because the part that’s actually most interesting to me is the application of prisoner’s-dilemma logic in the real world. So, I’m excited to finally get to that now. And if we have time left over, I also want to ask Robert about the intriguing work he’s been doing in two very different areas: cancer and cybersecurity.

LEVITT: I suspect you’ve come across some pretty interesting real-world applications of the prisoner’s dilemma and tit-for-tat. Can you tell us about some of those?

AXELROD: I came across a book review in a sociology journal of something called the “Live and Let Live System” in trench warfare in World War I. And the idea was that the artillery would shoot between the other side’s first and second trench lines. In other words, they would deliberately say, “I’m not going to cause any damage, and you can see I can be accurate about it,” and hope that the other side would catch on and do the same thing. And so, they were basically cooperating with each other, which is of course very much tit-for-tat, because if the other side defected, then they could defect too. What was different about trench warfare from most warfare, which is more mobile, is that the same small units would be facing each other for long periods of time. And so, it demonstrated that even in the context of brutal war, they could develop this live-and-let-live system.

LEVITT: That’s really fascinating because you think of war as being maybe the one case where there’s no room for cooperation. 

AXELROD: And I thought the trench warfare case would be very helpful in explaining and illustrating what the prisoner’s dilemma was, and what the results might mean. And that’s true, but even more important — people found it very compelling. In other words, the whole thing became more believable when I had this example.

LEVITT: So, people loved the results. And Richard Dawkins, who’s the author of The Selfish Gene, wrote one of the most over-the-top prefaces I’ve ever seen for a later edition of the book. That must’ve felt really good.

AXELROD: Well, he says it should replace the Gideon Bible.

LEVITT: Yeah, exactly. So, he definitely did not withhold praise. But I got to say, I was surprised by Richard Dawkins’ love affair with the research because I think that your findings fundamentally challenged the notion that selfish genes will thrive, but I have to explain why because it’s not completely obvious why I’d say that. So, the way that you generate cooperation in the prisoner’s dilemma is by playing the game over and over so that future punishments rein in the strong desire of selfish players to act selfishly. But isn’t it true that you also can generate cooperation if players aren’t totally selfish, but rather are altruistic towards the other players? They get a little bit of joy when the other players do well. That’s a very different, but an equally plausible mechanism for generating the kind of cooperation in your game. 

AXELROD: No, I don’t think it’s equally plausible in a biological context or almost any — because if you’re just altruistic and you get a kick out of helping others without regard to how you’re doing, then you’re going to be taken advantage of all the time.

LEVITT: Okay. But I’m not saying without regard to what I’m doing. I’m saying I’m exactly like the selfish player. It’s just that instead of putting zero weight on the other guy, I put a little bit of weight on the other guy. And that just makes it easier to support cooperation. Because when I think about my own life, I think that’s a really good description of myself. I’m not completely and totally selfish. When I interact with strangers, for a variety of complicated reasons around identity and wanting to feel like I’m a good person or whatever, I act as if I care a little bit about them. I’ll be nice to them, even if I don’t think I’m going to play the game with them over and over. 

AXELROD: I agree that people are like that to some degree. And what I find most interesting about it is that they are more altruistic or more cooperative toward people who are similar to them. A version of this is ethnocentrism: we tend to help out and be trusting and accepting of people who are in our ethnic group and not outsiders. Another basis of cooperation that’s been well studied by biologists is kinship. And that’s what The Selfish Gene is referring to, which is: if I’m closely related to somebody, then our genes have this shared interest in helping our genes survive and thrive.

LEVITT: Exactly. So, I think Richard Dawkins saw your work through the lens of that idea of cooperation based on genetic similarity. But I still have this instinctive feeling that for human behavior, there’s some altruism lurking around that’s not based on kinship. And looking at it from a societal perspective, we definitely would like to socialize people into being nice if the world is characterized by a bunch of prisoner’s dilemmas being played all the time. Because if you can get people to be nice, the overall societal payoff and the individual payoffs will be higher. Do you agree with that, or do you think I’m missing the point of what you’re doing?

AXELROD: No, I think it’s a supplementary point. It’s not contradictory to the value of strict reciprocity. And I share with you the feeling that lots of people, especially good people, get some kick out of helping others. 

LEVITT: One thing that I know I don’t do given the results of your tournament and the success of tit-for-tat is that I am not nearly so forgiving as tit-for-tat. When somebody does something wrong to me once, I dwell on that, and I don’t forgive them for a while. If somebody does something wrong to me over and over and over, and then suddenly changes their behavior, boy, it takes a lot of work to get me back on board. So, that’s probably where I could do better in life. Do you try to stress forgiveness in your own life? 

AXELROD: Yes. Because if you point out what you regard as defection from the other side, I think it helps them identify what might lead to trouble between you in the future. And then, I do try to be forgiving. One thing is to try to appreciate the other side’s perspective about why they did it. Maybe it wasn’t because they were trying to take advantage of you. Maybe they had some other interest or goal in mind entirely. That’s one of the problems with cross-cultural interactions: we often have trouble understanding what the other side would regard as an insult, and they have trouble understanding what we take to be an insult. One of the other findings of the tournament is that you could do really well without ever doing better than the player you’re playing with. In fact, tit-for-tat cannot possibly do better than the other player, because it never defects until the other side does and never defects more than the other side does. And so, this is really counterintuitive: you can win a tournament without ever doing better than the player you’re playing with. That’s another illustration of how, in zero-sum thinking, that just doesn’t make sense at all. But it makes sense in this context, because what works well is to elicit cooperation, not necessarily to hurt the other guy.
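One way to formalize Axelrod’s point (a sketch using the standard payoff labels $T > R > P > S$: temptation, reward, punishment, sucker): rounds where both players make the same move add equally to both scores, so over a whole match,

$$\text{my score} - \text{their score} = (T - S)\,(u - v),$$

where $u$ is the number of rounds in which only I defect and $v$ is the number in which only the other player defects. Tit-for-tat only ever defects to echo a defection the other player has already made, so for tit-for-tat $u \le v$, and its margin against any single partner is never positive. It wins on totals across the whole tournament, never head-to-head.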

LEVITT: That’s a great point. One of the things I really like about your research, both the design of it and the way you analyze it, is that even after I think I understand what’s going on, you offer those little gems like you just did there: “Hey, tit-for-tat never outperforms the other player in any individual contest. It’s just that, over a whole bunch of contests, tit-for-tat and the players playing against tit-for-tat all do well.” And I love that. That’s the kind of thing that, in a little tiny way, changes my thinking about the world. And that’s the best thing you can say about a research project: that it changes the way you think about the world.

AXELROD: It’s obvious after you say it that you could do really well without outdoing the other guy, but it’s not obvious before you say it. And in fact, it even seems nuts. 

LEVITT: So, we’ve talked a lot about the evolution of cooperation. You’ve also looked at cancer. Tell me about your research on cancer. 

AXELROD: Well, a colleague of mine showed me a computer simulation of a growing tumor and I was fascinated by it. The visuals allowed you to rotate the tumor and see it from all sides, and it showed which genes were active and so on. And I said, “Well, I do agent-based modeling,” which is what that was: the cells were agents that interact with each other according to specific rules, and you run a simulation. She pointed me to a famous article called “The Hallmarks of Cancer,” which basically said that what cancer does is overcome the defenses of the host. It overcomes, for example, the host’s mechanism to control growth, so that no cells get totally out of hand. What a tumor does is it has a high rate of mutation, and a cell line eventually finds ways of overcoming each of the host’s defenses. And what this led me to think was that it doesn’t have to be a single cell line. An analogy I came up with years later, I think, makes the point: if two thieves are trying to rob a house, and one knows how to turn the alarm off, and the other one knows how to pick a lock, they don’t both have to know how to overcome all the defenses. As long as they travel together and collectively they can overcome the defenses, that’s sufficient. So, that’s what I thought was going on, perhaps: that there was cooperation here, that cells near each other were doing different things to overcome the host. My brother turns out to be an oncologist, and he helped me look into this. He eventually found that the idea was new, that people hadn’t said it before. And so, I worked with him, David Axelrod, and with an oncologist, Ken Pienta, to develop this idea that you can have cooperation within a tumor. We got two interesting reviews when we submitted this speculation: one said that what we were proposing was impossible, and the other said that everybody already knows what we were proposing.

LEVITT: We certainly know those can’t both be true. So, have people taken that seriously at all?

AXELROD: When we explained ourselves better, we were able to get it published. And, yes, in fact, people have found that you could have two cell lines that each don’t do very well on their own, but you put them together and they do very well. And so, you’re basically getting a lab demonstration that you can have this cooperation among tumor cells. And this led me to do a lot more work with Ken Pienta over the years on cancer.

LEVITT: You’ve done some cyber stuff too. What’s caught your attention within the cyber world? 

AXELROD: I’ve long been interested in computer science and international security. As cyber weapons became feasible and powerful, I realized that they could be very destabilizing. They’re very different from, say, nuclear weapons, which are stable: if a nuclear weapon is used, you know it’s been used, and you usually know who did it. And the survivor, the victim, can always strike back. The major countries all have secure second strikes, so there’s no incentive to go first. But with cyber, it could be quite different. You might be able to destroy the other side’s command-and-control system, in which case they can’t strike back. And so, you might have this reciprocal fear of surprise attack. And then, I heard about a roundtable involving six or seven different countries, including the United States, Russia, and China, on the theme of military aspects of cyber stability, which is just what I was concerned about. I’ve been part of that for about five years. It has been somewhat helpful in getting us to understand each other’s language: What do they mean when they say they’ve been attacked? What counts as an attack? Some understanding of what their sensitivities are. For example, if the United States supported the secessionist movement in China, we might regard that as just promoting free speech, and they might regard it as a threat to the regime, even though that would be almost paranoid. But this particular roundtable included some government employees — professors at the national defense universities, which are in charge of training the people who are going to become the highest-ranking military officers, and which are also typically involved in the development of doctrine. And that’s important in cyber.

LEVITT: It’s interesting. You are actually sitting down at the table with simultaneous translation and talking with Chinese academics and policy makers about cyberattacks. I’m surprised that kind of dialogue actually exists. 

AXELROD: Well, it does. And it’s in part because there’s a mutual recognition that there could be misunderstandings, misperceptions. And that’s especially dangerous in the cyber world, where you might feel that you’ve been attacked when you haven’t been. It was just a power outage. And therefore, there is a desire to avoid unnecessary conflict.

LEVITT: So, Robert, one of my past guests was a guy named Yul Kwon, who was a winner on the T.V. show Survivor. And when I asked him what his strategic approach was, he immediately went to tit-for-tat and referenced your work. Did you know that you have been responsible for a victory on Survivor?

AXELROD: No, but I’ll give you a nice example. At a political science convention, a colleague came up to me and said, “Your book really helped with my divorce.” And I said, “Well, I hope it saved your marriage.” And she said, “No, no, I didn’t want to save my marriage. It helped with the settlement. I realized I’d been a sucker all this time, and I didn’t have to be. And so, your book was an inspiration — I got a much better settlement than I otherwise would.” 

LEVITT: That’s hilarious. People must come up to you all the time. Are there other examples of where people have come up to you and said how your books changed their lives?

AXELROD: One example was a soldier from Iraq, who said that he realized that you shouldn’t be the first to defect. Often, they would be approaching a village, and they didn’t know if the village was hostile or not. And the villagers didn’t know how hostile the soldiers might be. So, what he actually did is he said, “I want my soldiers to put their rifles behind their necks as we walk toward this village. They can see then that we’re not intending to start any trouble, and maybe they won’t either. And if it doesn’t go well, we’ll obviously have our rifles at hand.” I thought that was really quite a striking application.

LEVITT: I’m not sure whether I should be embarrassed that the moral code that guides my daily life is essentially tit-for-tat. I have read a lot of great philosophers and my fair share of self-help books, but the truth is tit-for-tat is one that speaks to me. My first life principle is to be nice. My second life principle is to be provokable. When someone tries to take advantage of me, I’m willing to fight. My third life principle is to be forgiving. I believe in second and third chances, although I still have some work to do to reach the unlimited forgiveness embodied by tit-for-tat. All in all, these principles seem like a pretty good framework to build a life — not bad for a strategy that only requires two lines of code.

Hey, one last thing, I’m part of an organization called datascience4everyone.org and we’re trying to crowdsource some great ideas about how to build data science into the math and science curriculum at the K-12 level. So we’re running a little contest. If you’re a teacher or just someone with great ideas, we want to hear them. Our website is datascience4everyone.org. Where “for” is the number 4. Datascience4everyone.org and you can see the contest there. And spread the word. We’re trying to gather as many great ideas as possible. Thanks again.

*      *      *

People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and Freakonomics M.D. This show is produced by Stitcher and Renbud Radio. Morgan Levey is our producer and Jasmin Klinger is our engineer. Our staff also includes Alison Craiglow, Greg Rippin, Joel Meyer, Tricia Bobeda, Emma Tyrrell, Lyric Bowditch, Jacob Clemente, and Stephen Dubner. Theme music composed by Luis Guerra. To listen ad-free, subscribe to Stitcher Premium. We can be reached at pima@freakonomics.com. Thanks for listening.

AXELROD: Wait a minute. I thought he asked me to withhold his name because it was so dumb. 


Sources

  • Robert Axelrod, professor of political science and public policy at the University of Michigan.
