Episode Transcript

My guest today, William MacAskill, is an associate professor of philosophy at Oxford University. He’s been a pioneer in the effective altruism movement. And his most recent work argues that humanity should be thinking long term as we make choices today. What’s long term in MacAskill’s view? A hundred years? Nope. He means a thousand years, a hundred thousand years, or even a million-year perspective.

MACASKILL: If I can prevent a genocide for certain in 1,000 years’ time or 11,000 years’ time, well, both are equally bad if they’re going to inflict the same amount of harm. The mere location in time does not matter.

*          *          *

Welcome to People I (Mostly) Admire, with Steve Levitt.

*          *          *

I’ve had more than 80 guests on this podcast, but not a single philosopher. And that’s not entirely by accident. I generally don’t have much in common with philosophers. They tend to focus on big, complicated questions. I like little questions that I can wrap my head around. They view the world through a lens of morality, whereas I’m much more comfortable thinking like an economist where the focus is efficiency. And they often use big words that I only half know the meaning of. But from what I know about Will MacAskill, he’s not that kind of philosopher. So, let’s see how it goes.

LEVITT: So, Will, do you remember how I came to blurb your book Doing Good Better?

MACASKILL: I cold emailed you. And you responded saying, “I never blurb books that have been cold emailed to me, but I took a look at yours, and it was so exceptional that I had to do it.” And I wanted that as the blurb, actually.

LEVITT: Well, I was not lying, because it is so rare for me to blurb people’s books. That was your first book for a general audience. And it was an early exposition of what’s called effective altruism. For the uninitiated, can you explain what you mean by the phrase “effective altruism”?

MACASKILL: Effective altruism is about using your time and money as effectively as possible to make the world a better place. So, if you’re donating to charity, what are the most effective charities — the ones that take your dollar and do as much good as possible with it? Or if you’re deciding what career should you pursue, what are the things you should think about so that you can pursue a career that has the biggest positive social impact? And we provide a lot of advice on our websites and in person, focusing in particular on donations and career choice.

LEVITT: And there are a set of principles that help guide people’s thinking about how to have the most impact. Can you go into those a little bit?

MACASKILL: Well, one principle is cause prioritization. So, thinking of all the problems that the world faces, what are the ones where I can actually, on average, have the biggest impact? The second is to focus on problems that are very neglected — those that aren’t already getting an enormous amount of attention. And the third is those that are tractable, where additional effort can actually make a really big difference.

LEVITT: Now, when you say those things, you sound very much like an economist. You’re saying you want to take the neglected problems, because those are the ones that aren’t already in the diminishing-returns part of the investment curve, and you want to take the ones that are malleable, ones where an extra dollar will really have a big impact. So, you’re not an economist. You’re a philosopher. Is it accidental that you sound like an economist when you talk about altruism?

MACASKILL: I think it’s very non-accidental. So, I see effective altruism as a happy synergy between moral philosophy and economics. Economics is very good at analyzing optimality. You have a certain amount of resources, and within the economics world there’s a question of how you can use them as efficiently as possible, where perhaps the way you are measuring efficiency is in terms of total dollars generated. Moral philosophy is interested in this question of, well, what’s actually of value? So, if you take both of those things together, this more philosophical understanding of what’s really of value, combined with the economist’s toolkit of how to think about being a rational actor maximizing some value function, then you get something pretty close to effective altruism.

LEVITT: So, a set of institutions have arisen around effective altruism. Tell me a little about those.

MACASKILL: So, GiveWell is an organization that makes recommendations to donors who want to do as much good as possible with their giving, focused in particular on causes and nonprofits that have a lot of evidence behind them, where the impact can be comparatively easily measured. And so, they tend to focus on global health and development, where there are some interventions, such as distributing long-lasting insecticide-treated bed nets, where we can have really pretty reasonable levels of confidence that we’re doing a very substantial amount of good. Something like $5,000 will save a child’s life.

LEVITT: It’s a little bit intimidating because it’s not easy to figure out whether a particular charity is doing something important.

MACASKILL: That’s exactly right. So, to use the analogy of investments: you might want to save some of your money, and you might want to invest some of it. Taking the approach of trying to figure out for yourself which companies on the stock market are most undervalued and putting money into them? We have a lot of data, actually, and that’s a very bad plan if you’re just a kind of day-trader. Instead, the right thing to do is to get expert advice.

LEVITT: No, wait, you’ve got to be careful, because you might not know it, but I am so against financial planners and financial advisors.

MACASKILL: What you could do is put it into an index fund.

LEVITT: Ah, that’s more like it. And it’s actually a really important difference, because there’s no equivalent to just putting it into an index fund. Unlike the stock market, which we think is pretty efficient, the market for charities is clearly very inefficient, right? And it’s exactly the fact that it isn’t a market like the stock market that makes having experts who can point you in the right direction so incredibly valuable in this space, compared to the investing space.

MACASKILL: Absolutely. There’s no equivalent at all within the charitable world. We sometimes say that giving to GiveDirectly, an organization that simply transfers cash to the very poorest people in the world, is like the index fund of giving. But it’s not really like an index fund in any structural way. It’s just meant to suggest that simply giving the poorest people in the world cash is a baseline. Can we do better than that?

LEVITT: So, sticking with the idea of thinking like an economist: one of the things you’ve talked about before, I think you call it “earning to give,” is the idea that if you want to do good, one thing you can do with your career is toil in a nonprofit that’s trying to address some cause. But the other thing you can do is go to Wall Street, try to earn a whole lot of money, and then give that money away. Now, that makes total sense to an economist, but I suspect that a lot of people in the altruism space were pretty angry about you saying that.

MACASKILL: Yeah, in the early days people certainly had a lot of queries about it, we can say. The underlying argument is just: for many people, what’s their comparative advantage? The thing they’re going to be best at is perhaps not working directly for the nonprofit, but doing something that is judged to have lower social value, but where you’re able to earn a lot. And then perhaps you can fund many more people who are actually more qualified than you are to be doing direct nonprofit work. It really varies depending on the cause: how much a cause is constrained by funding versus constrained by labor, by people who actually want to go in there. But this is a path that I think should at least be seriously considered.

LEVITT: Now, my own experience is once you put yourself in an environment like investment banking, it’s really hard to stay true to whatever fundamental beliefs you had when you started. And I guess that’s an empirical question. It would be an interesting study to look into — to see how often people who say they’re going to do good end up doing good and how many get sucked down the vortex.

MACASKILL: I think there’s a small amount of data from effective altruists pursuing this path. And I think they might be a very different sample than the normal person who works in investment banking and claims they’re going to give money away. I’m not sure how sincere those people were at the time. And this is certainly a worry I had. I called it the “corruption worry.” But it seems not to be holding up, at least for the people who are really engaged in effective altruism. Of the people I know, the highest earners are generally giving much more than 50 percent. The biggest success story for earning to give is Sam Bankman-Fried, who’s now the richest person in the world under the age of 35, or at least that was true recently. And he’s publicly stated he’s giving away 99 percent of his wealth or more, and is already ramping up his giving.

LEVITT: And he’s a crypto guy?

MACASKILL: Yeah, he co-founded a crypto exchange called F.T.X. I think part of what helps here is that we’ve built this community. It’s much easier to live up to your ideals if you’ve got a bunch of people around you who will praise you for living up to them, such as by giving. And maybe you’ll feel shunned or less welcome if you’re claiming that you want to do enormous amounts of good and give enormous amounts, but actually aren’t. So, that’s an incentives account of perhaps how we’ve done better than the base rate of people working in investment banking.

LEVITT: It seems hard to argue against effective altruism. But let’s talk about what one needs to believe about the world to conclude that effective altruism is the best strategy for an individual to follow. And the first one is that a dollar is worth a lot more to a really poor person than to a rich person.

MACASKILL: I think it’s absolutely true. If you look at the literature on the relationship between happiness and income, you find that money does make people happier in general. But the returns diminish very quickly.

LEVITT: And the second assumption is that the world is full of poor people. Globally, an annual income of even $10,000 or $15,000 makes you rich, which is surprising. But given that’s true, you don’t have to be a rich person in the United States or Europe to be rich in the world and thus be able to transfer income to the poor and have a big impact.

MACASKILL: That’s absolutely right. So, there are about 800 million people alive today who live on less than $2 per day, where, to be clear, what that means is what $2 could buy in the United States — it’s already adjusted for the fact that money goes further overseas. So, in financial terms, someone with a typical income in a rich country is about 100-times richer than someone living in extreme poverty. And that means that you can increase their wellbeing by a factor of about 100, compared to how much you can increase your own wellbeing. In Doing Good Better, I call this idea the 100-fold multiplier. Imagine you could either buy a beer for yourself or buy a beer for 100 other people. Well, that would seem like a pretty good deal, to do that for 100 other people. And that’s exactly the situation we’re in.
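
(A minimal sketch of the arithmetic behind the 100-fold multiplier, assuming logarithmic utility of income, one common way to model the diminishing returns mentioned above. The income figures are illustrative, not from the episode.)

```python
def marginal_utility(income):
    # With log utility u(x) = log(x), marginal utility is u'(x) = 1/x:
    # each extra dollar matters less the richer you are.
    return 1.0 / income

rich_income = 73_000   # illustrative "typical rich-country" annual income
poor_income = 730      # ~$2 a day at U.S. prices, i.e. roughly 100x poorer

ratio = marginal_utility(poor_income) / marginal_utility(rich_income)
print(f"A marginal dollar does ~{ratio:.0f}x more good for the poorer person")
# -> ~100x: the same dollar buys about 100 times the wellbeing gain
```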

LEVITT: The third assumption that would underlie effective altruism is that the purpose of giving is to help others as much as possible. So, the idea is that I’m giving away money because I want to help other people. But it sure seems to me that for many people it’s about virtue signaling, receiving the praise of friends, being thanked by the beneficiaries — you even mentioned yourself that it’s easier to support effective altruism because there’s a community of other effective altruists who praise you if you do the right thing and bully you if you do the wrong thing. And all of those private returns that people get from giving — I think they’re almost unrelated to maximizing good. And so, I think it would make sense for many people not to maximize their giving, but instead to give in ways that lead to the most positive reinforcement of their behavior. So, how do you respond to that?

MACASKILL: Well, I think it’s true that much giving, perhaps most giving, is not driven by a desire to do as much good as possible. Instead, like you say, it can be for symbolic reasons or, more cynically, to look good in front of your friends. Or it can be because you want to give back, perhaps to a school or a university that you’ve been part of that you want to reward. Or you might do it as part of a religious commitment. These are all very different from the desire to do as much good as possible. But one thing that we’ve learned over the last 12 years of promoting effective altruism is that many people actually do have this desire. And one thing that’s been interesting for me as a moral philosopher is how often people just have that desire, even if they think they’re not morally required to. Many people in effective altruism are nihilists. They think that nothing matters, ultimately, but they just want to do good. But then there’s a second aspect, which is: ought people to want to help others as much as possible? And I’m sure you’re familiar with Peter Singer’s thought experiment: you’re walking past a shallow pond and you see a child drowning in it. Let’s say you’re on the way to a job interview, and you’re wearing this very expensive suit. Maybe it’s even worth thousands of dollars. And if you were then to see that child whose life you could easily save and think, no, I’m not going to ruin my suit, it’s too nice — and you just walk on by. Well, us moral philosophers have a technical term for someone like that. A**hole. It’s actually morally wrong to walk past the child in that way, because the loss of a few thousand dollars — that’s the price of an expensive suit — pales in comparison to the loss of a child’s life. But if you think that about a child drowning in a shallow pond right in front of you, well, what’s the difference between that and a child dying of malaria in Sub-Saharan Africa? And there have been many decades of work by moral philosophers trying to escape that conclusion. And I think they have not succeeded.

LEVITT: I can already hear the chorus of boos from the audience as I follow this track, but I really wonder whether there’s a combination of evolutionary forces and psychological forces pushing strongly against people caring about strangers. So, let’s just start with evolution. I suspect that over millions of years, far predating the emergence of humans, natural selection has favored creatures that care deeply about their own wellbeing and not very much about others. And clearly modern society tries to instill kindness and empathy in people, but I suspect selfishness is just a really powerful force. Do you think that’s a silly argument?

MACASKILL: I don’t think that’s a silly argument at all. I also agree that people, most of the time, are pursuing their own self-interest, at least if that’s fairly broadly construed, which includes the interests of their family and so on, but not all the time. And in an important sense, we’re not merely a species that’s following evolutionary pressures. We’re a cultural species. We’re able to respond to arguments and reasons about what we ought to do. And that means you can just go out there and say, “Look, we ought to be doing something different.” I think this was true for the abolition of slavery. It was not in the self-interest of the slave owners or the British Empire. It was not in the self-interest of male voters to give women the vote. But people can be persuaded on the basis of reasoned argument. And the rise of effective altruism is just one instance of that, certainly not the only one, where people can be convinced to do something that’s bigger than themselves.

LEVITT: Yeah, no doubt these social movements have tremendous impact. Let me toss out a thought experiment. Imagine that, for certain, in return for you, Will, dying tomorrow, 100 strangers who otherwise would die tomorrow would instead live full, normal lives. Now, you would be the only one to know what your sacrifice accomplished. Your friends and family won’t know. And the 100 people you’ve saved won’t have any idea it was due to your generosity. So, do you choose to live or die?

MACASKILL: So, I would choose to live, certainly. And the thing I think this illustrates is that in the world today, the level of sacrifice required to do enormous amounts of good, depending on who exactly you are and your opportunities, potentially saving much more than hundreds of lives, is actually, on the scale of things, comparatively small.

LEVITT: I’m actually shocked that you would say that you would live. Even if you were just pretending — you have a reputation. You’re an effective altruist. I am shocked that you so quickly said, “I would live.”

MACASKILL: Firstly, you asked me what I would do, not what I should do. I am in no way claiming to be a morally perfect agent. In fact, I’m 100-percent certain that I’m not. The key thing I want to communicate is that you can do enormous amounts of good for very small amounts of sacrifice. Maybe large financial sacrifices, but small in wellbeing terms: I give away most of my income, but that really doesn’t change my wellbeing, maybe a little bit, but not all that much. Then, the philosophical question. If you ask what you should do, it gets much more tricky. I think there are strong arguments for impartiality, where it’s at least permissible to treat your own wellbeing the same way as the wellbeing of a stranger you’ll never meet. I do think that when you’re making a moral decision, you should try to hold a number of different moral perspectives in your mind, give some weight to each, and act on the synthesis of those, like the best compromise. That was my second book, but it was for academics, so it’s never going to get read as widely as Doing Good Better. And on some moral views, you have these special relationships to your friends, to your family members, perhaps to yourself as well. And that means you just should give their interests more weight. That’s a very natural view. Perhaps you think it’s debunked, merely the product of evolution, but at least there are strong moral intuitions there.

LEVITT: So, forget about evolution now. Let’s talk about psychology. And I suspect there are also psychological reasons not to care too much about other people. There’s just so much suffering in the world. And if we open our hearts to it, it has the effect of making us miserable. We suffer infinitely if we put even a little bit of weight on the heartbreak of Ukrainian or Syrian children caught in war zones.

MACASKILL: So, it relates to scope insensitivity. “A single death is a tragedy; a million deaths is a statistic,” as the quote attributed to Stalin goes. The truth is that if you have a single, identifiable victim, you can empathize with that person. It really moves you. For a million deaths, you don’t get the same kind of emotional arousal. And in experiments, people are much more likely to give to benefit a single individual than they are when just given facts and figures. And it is true that the amount of suffering in the world is truly enormous. That’s why if you look at a doctor, certainly a doctor in a conflict zone, they develop these defense mechanisms. Doctors can often be a little bit detached. You just can’t empathize with every single patient you’re treating; otherwise, you’d just break down in tears. And I think we need the same sort of attitude if we’re trying to do good in these other ways, too. Because if you’re really empathizing with all the suffering in the world, the only appropriate reaction would be to vomit or to break down in tears or to scream, and none of those things are very productive. So, you need to be able to do this combined activity of having empathy, but then channeling it in the most productive ways. And that can be hard.

You’re listening to People I (Mostly) Admire with Steve Levitt and his conversation with philosopher Will MacAskill. After this short break, they’ll return to talk about Will’s new book What We Owe the Future.

*          *          *

LEVEY: Hey, Steve. So, our listener, Emily, had a question about why we teach so much math with economics in college. Is there a way to study economics as a social science away from the complicated calculus and theoretical math? And if not, should there be?

LEVITT: So, Emily and I are kindred spirits, because I really believe that economics is made up of two components. There’s a central core of amazingly powerful ideas about how to understand the world, which is not really mathematical at all. And that’s the part of economics that I love. And then there’s a bunch of math and technicality, which is important for understanding when those ideas are true and when they’re not, and for working out the answers to particular problems. So, I’ve been befuddled by the amount of math that we put into our undergraduate programs, and I’ve actually tried to do something about it. With my colleague John List, we decided to introduce a new course at the University of Chicago five or six years ago. It was called Economics for Everyone. And it was economics without math, just the big ideas in economics. And we wondered whether there’d be any demand for the course. It turned out that the first time we taught it, there were something like 800 students in the course. And indeed, there’s a whole initiative now in the department of economics at the University of Chicago to think about how we can take the ideas in economics, separate from the math, and bring them to the general public. Back in the 1970s, Milton Friedman went on P.B.S. with a show called “Free to Choose,” about Chicago economics and the ideas of economics without math. It was pretty popular and had a big impact. T.V.’s gotten a lot better, though, in the last 50 years, and so I think it’s now a lot harder to penetrate the public psyche with economics.

LEVEY: Steve, can you give us a little taste of your Economics for Everyone course — a few themes that you cover in it?

LEVITT: You know, a great example is the episode we had on this show with Harold Pollack, where he talked about how everything important about personal finance can be written on a three-by-five index card. So, essentially, one of my lectures — and I talked about it in that episode of the podcast — was my own take on that, separate from Harold’s take. We came to the idea independently that you really can understand personal finance without spending a lot of money on an advisor and without using a lot of math. Just common-sense ideas. So, I think the single best lecture I give in the course is an hour devoted to how you can, with common sense and a basic understanding of the ideas of economics, manage your own personal finances effectively and cheaply for an entire life. I’ll give you another example. The other thing that I talk about in that class, which is really basic but I think profound, is money: the importance of money, and how everything that we do is impossible absent money. Because without money you’ve got barter, and barter is just incredibly inefficient. That’s a simple idea, but it’s one that I didn’t appreciate until I was about 30 years old, long after I’d taken college economics and even gotten my Ph.D. in economics. And that’s the kind of thing that I think is actually useful to know. It doesn’t take any math at all, but it can stick with people long after the class is over.

LEVEY: Emily, great question. Thanks so much for writing. If you have a question, our email address is pima@freakonomics.com. That’s pima@freakonomics.com. It’s an acronym for our show. Steve and I read every email that’s sent, and we look forward to reading yours.

*          *          *

I have to say, I’m still reeling from Will’s answer to my question, where he said he wouldn’t willingly die to save a hundred other lives. Whatever he would actually do in that situation, I was sure he would say he would choose death. This is a guy who not only writes about effective altruism — he devotes most of his income to it. This might sound weird, but my respect for him increased dramatically when he said he wouldn’t give up his own life to save a hundred others. I really admire him giving an honest answer to a hard question, even at the risk of looking selfish. Now, I want to move on to what I think is a far more controversial and radical topic than effective altruism — Will’s arguments that we should be taking the extreme long view in our decision making. I also want to see if he can explain in simple terms some of the cutting-edge problems that the field of philosophy is focused on these days.

LEVITT: So, your new book is entitled What We Owe the Future. And you make the case for what you call long-termism. So, first, what do you mean by long-termism?

MACASKILL: Long-termism is the view that positively influencing the long-term future is a key moral priority of our time. So, that means thinking not just about impact in the here and now, but how might our actions impact the coming centuries, millennia, or even millions or billions of years.

LEVITT: Usually when people talk about short-term versus long-term views, it’s, oh, managers at Fortune 500 companies are worrying about the next quarter instead of two or three years from now. But I want to just emphasize, you really mean long-term when you say long-term.

MACASKILL: I really mean long-term. And you’re right that long-term thinking, in the present day and age, refers to maybe years, decades. Occasionally, we have forecasts of things out to 2100. That’s sometimes done for G.D.P. It’s done for population. But here’s my argument: The first premise is just that future people matter morally. They are people just like you and me. They just don’t exist yet. If I can prevent a genocide for certain in 1,000 years’ time or 11,000 years’ time, well, both are equally bad if they’re going to inflict the same amount of harm. The mere location in time does not matter.

LEVITT: You’re thinking about questions that I’ve never thought about and oftentimes coming up with answers that really surprise me. One argument you make is that relative to all the humans who are likely to ever live, we’re really early in the game. Can you describe your thought process on that one?

MACASKILL: Yeah. So, I actually think we’re possibly right at the beginning of history, and that future generations will see us as the ancients, living in the very distant past, the very cradle of civilization, almost. And why is that? What are some different reference classes we could use? One is just: what’s the typical lifespan of a mammal species? That’s about a million years. How long has Homo sapiens been around? About 300,000 years. So, on that reference class, there’d be 700,000 years still to come. The vast majority of the future would still be ahead of us. But we’re obviously not a typical species. And that goes in two directions. Firstly, we are developing the power to annihilate ourselves. So, it could be that we go extinct in the next few centuries. Then, we’d have a much shorter lifespan than the typical mammal species. But also, if we navigate these risks, we could live for much longer again. So, what’s the lifespan of humanity if we stay on Earth? At least 500 million years, probably close to a billion years, before the sun’s growing luminosity sterilizes the Earth. And if we were one day able to take to the stars, we could continue even after the sun burns out in about 8 billion years. So, if we manage to navigate the risks that we face in the coming centuries or coming thousands of years, the future just could be enormous. And I think we haven’t fully grappled with that fact. We rarely take seriously the implications of the fact that we’re plausibly so early on in civilization.

LEVITT: Relative to the number of people who are alive at this moment, what’s the relative number of future humans that you think will someday exist?

MACASKILL: The very most conservative number would be something like 1,000 to one, but I think real numbers would be trillions to one, maybe trillions of trillions to one.

LEVITT: Well, those are big numbers. We live under this illusion of being way down the chain, but unless we manage to destroy humanity, it’s just not true. And I find it especially interesting because I had M.I.T. physicist Max Tegmark on the podcast a while back. And he kept saying things like, “The human species is only in its adolescence.” And I didn’t truly understand what he meant until I started to read your book and really got the arguments, laid out with data, about how, very plausibly, we are so early in the span of what human life will be.

MACASKILL: Exactly. And humanity is in its adolescence, or I tend to say teenage years, in another way as well: we are still maturing. And certain things that we do over the course of our lifetimes could drastically affect the entire trajectory of the rest of the human lifespan. When I was a teenager, I was making these big decisions: What do I study at university? How does that shape my longer-run career? What sort of values do I want to live by? I was also being very reckless. I was engaging in urban climbing and nearly killed myself doing that. In these cases, I was just coming to terms with the fact that I could make decisions over my own life. And if we look at what the most important decisions were, well, they were the ways in which those decisions would impact not just the weekend I was considering at the time, but the entire course of my life. And that’s also true, I think, for humanity, where we are in a state of what must be historically unprecedented rates of technological advancement and change. That’s creating these enormous threats that could wipe us out, in just the same way that I nearly killed myself climbing buildings as a teenager, where risks from man-made pandemics are the No. 1 example. Or, secondly, we could lock ourselves into some bad state. I think advances in technology could mean that if there were a single global totalitarian state, it really could persist forever, with technology that I think is not very far away.

LEVITT: So, another thing you write about, which I found shocking, relates to economic growth. We’ve gotten so used to economic growth over the last few hundred years that it seems like the natural state of things. Every time a doomsday predictor comes out saying the end of economic growth is around the corner, they’re always wrong. But over the time horizons you’re considering — thousands of years — the math associated with compounding makes it unlikely, impossible really, that economic growth can actually persist. Essentially, the argument you make, which I’ve never really heard before, is that we likely find ourselves right now in a historically narrow window where economic growth is the norm.

MACASKILL: Absolutely. Currently, economic growth is like 2 percent, 3 percent per year. What happens if that continues for just 10,000 more years? Well, 2 percent compounded over such a long time gets to a very large number indeed. It would mean that every atom within 10,000 light years, every accessible atom within that time, would have to produce some enormously large amount of economic output. I think it’s something like 10 to the power of 60. Just think trillions times trillions times trillions of times the output of the entire world economy today. Now, I’m not claiming I’m certain that’s impossible. The world today is magical and fantastic, and would be judged as such from the perspective of people 1,000 years ago. But it seems really unlikely that every single atom is able to produce many trillion times the economic output of the world today. And we can’t do it just by having more stuff. It’s not like we can just be producing more loaves of bread and more steel. We very, very quickly run out of that. And at some point, we’re going to have discovered everything that we’re able to discover. Over the course of only a few hundred years, we’ve gone from having almost no understanding of physics to having a really pretty good understanding of at least the physics of medium-sized objects. At some point, we’ll have figured it out, and then we wouldn’t be able to get economic growth via that means either. And that suggests that when we’re considering timescales of many thousands of years, economic growth just has to slow down, and it has to plateau. And that has quite big implications.
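
(A quick sanity check of the compounding arithmetic: a back-of-the-envelope sketch, not a calculation from the episode, using the 2 percent rate and 10,000-year horizon MacAskill quotes.)

```python
import math

growth_rate = 0.02   # 2% annual growth, as in the episode
years = 10_000       # the hypothetical horizon

# Compound growth factor (1 + r)^t, reported as a power of ten
log10_factor = years * math.log10(1 + growth_rate)
print(f"World output would grow by a factor of ~10^{log10_factor:.0f}")
# -> ~10^86: even spread across every reachable atom, each atom would
#    have to yield vastly more output than today's entire world economy
```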

LEVITT: And if you look backwards, too, for most of human existence economic growth has not been the norm, right? You were more or less expected to do exactly what your parents did, for almost every generation that humans have ever existed. But it’s funny, because as an economist I’ve been so indoctrinated into this idea that economic growth is natural, it’s good, it’s part of life, that something that should have been totally obvious to me wasn’t, because of my indoctrination. All it took was a few lucid arguments from you.

MACASKILL: I should credit that the argument comes originally from an economist, Robin Hanson. And Holden Karnofsky is another person in effective altruism who’s written about this. Another thing that you might’ve been indoctrinated into is the idea that economic growth follows a constant exponential, again at about 2 percent, 3 percent per year. And that’s true if you look at, say, the U.S. over the last 200 years. But on much larger historical time spans, it’s not true. Instead, it’s faster than exponential, because growth rates used to be much, much lower. And for most of history, economic growth came via population growth, not via increased standards of living. Then, over the course of the Industrial Revolution, there was this somewhat slow, and then accelerating, takeoff. So, really, the overall curve looks like an S-curve: it ramps up faster than exponential, and then it’s going to start to plateau at some point in the future. I’m not claiming exactly when.

LEVITT: Now, because you’re focused on the long-term, you’ve thought a lot about the extinction or near extinction of the human species. So, what’s your educated guess as to the thing most likely to lead to our extinction in the next 100 years or 1,000 years?

MACASKILL: The thing that I think is most likely to lead to our extinction, where we aren’t replaced by other sorts of beings such as artificial intelligence, is man-made pandemics. Pandemics, historically, have caused some of the largest death tolls. The Black Death killed something like 10 percent of the world’s population, though it’s hard to be precise about that. But future pandemics could be much, much worse, because in the past we’ve only had natural pandemics, and future pandemics could be designed to have much greater destructive power. You could have the fatality of Ebola and the contagiousness of measles, for example. And in fact, there’s no reason, it seems to me, why you couldn’t start to create viruses that could kill almost everyone in the world, or maybe even everyone in the world.

LEVITT: You’re talking about somebody intentionally creating such a virus.

MACASKILL: Yes. And thankfully, I think such motivations are relatively thin on the ground. What’s more, biological weapons are not very good weapons, because it’s very hard to contain the damage to the opposing side. So, you might think that rational actors would not develop such biological weapons. However, just as it happens, they do. The U.S.S.R. had by far and away the largest bioweapons program: it employed about 60,000 people at its peak and was undetected for decades. And it was really in the business of trying to produce some of the nastiest things it could find. And there might well be underlying rational reasons for doing that. Perhaps you want a stockpile of doomsday viruses so that if you’re the recipient of a nuclear attack, you’ve got this incredible deterrent, because any attack on you will, therefore, destroy the opposing side as well. But the track record of biological weapons programs — and again, it’s not just the U.S.S.R., but also Japan and the U.S., which later called its program off — should just make us worried that in the future the same will happen again, and there will be experiments and the development of really worst-case pathogens that could lead to worst-case pandemics.

LEVITT: What kind of probability would you put on these bioengineered pathogens leading to the extinction or near extinction of humanity in the next 100 years, 1,000 years?

MACASKILL: In the next 100 years, somewhere between 0.1 and 1 percent. So, if I have to pick a point estimate, I’d go for about 0.5 percent.

LEVITT: Okay. That’s high.

MACASKILL: Yeah. It’s high.

LEVITT: One in 200 chance that it’s going to be the end of humanity in the next 100 years, which isn’t very long.

MACASKILL: Yeah. And honestly, I’m lower than other people’s estimates. Many of the people who actually know more about this than me put it at 1 percent. So, I sometimes feel like I’m on the more optimistic end. But honestly, with any of these numbers — imagine you’re getting on a plane and the pilot says, “Hey, we’re actually going to be fine. There’s only a one in 1,000 chance of us crashing and dying.” That’s not very reassuring. I gave this range of 0.1 to 1 percent. Even on the lowest end of that range, 0.1 percent: ensuring that it’s not 0.1 percent but as close as possible to zero should just be one of the big priorities of our time. Whereas at the moment, there’s almost no discussion of it.

LEVITT: Now, once you start thinking about near extinction, then the next logical question arises. Well, what happens after that? Do those 1 percent of the people who survive — do they thrive, or do they slide into a dark age? And are there key determinants that could push that one way or the other?

MACASKILL: My overall conclusion was relatively optimistic about our chances of civilization rebounding, even after a catastrophe so horrific that it killed 99 or even 99.9 percent of the world’s population. I think it’s much more likely than not that we would rebound. There are a few different pathways or reasons for thinking this. One is that when we look at local catastrophes, society just tends to bounce back, even from very bad ones. The Black Death killed somewhere between a quarter and 60 percent of the people in Europe. And yet, it’s not like the course of Europe afterward was permanently derailed. In fact, there are even some arguments that it helped accelerate the move toward the Industrial Revolution.

LEVITT: And you look at places like Hiroshima, and it’s amazing how quickly Hiroshima and Nagasaki bounced back after the atomic bombs.

MACASKILL: Exactly, yeah. I was very ignorant before looking into this. When I pictured Hiroshima, I imagined this wasteland, smoking ruins, even now. But I was just utterly, utterly wrong. The population of Hiroshima bounced back to its prewar level within 13 years of the bombing. Electricity was restored to key areas within two days. Streetcars were running within days or weeks. The Bank of Japan was operational again within the week. It seems that humans, in response to catastrophes, are just remarkably resilient and able to respond in really pretty astonishing ways.

LEVITT: So, you started working on effective altruism really early in your career. You’re still only 35 years old, and you’ve been at it for a long time. What are your philosopher colleagues — what do they think about your work in this area? I suspect it probably aggravates them, no?

MACASKILL: I think it’s a big mix. Many philosophers have gotten on board and now work on topics that are of direct relevance to effective altruism. So, for example, work on population ethics, or work on decision theory: how should you make decisions when you’re comparing the certainty of a given benefit versus a lower probability of a larger benefit? Or: is this long-term perspective correct, or should we be focusing on here-and-now issues instead? And I helped to set up a whole research institute, the Global Priorities Institute, here at Oxford to help promote work on these topics. At the same time, there are also people on the more skeptical end. I think often this is just a misunderstanding, actually. A lot of people see effective altruism as merely applied utilitarianism. Whereas really, if you have any moral view that holds that there is such a thing as the good and an idea of the world being better, and you think that more good is better, so, other things being equal, you should do something that has a better outcome rather than a worse outcome, then you’re already basically on board with effective altruism. You might not think that it ought to be the whole of your life. And you might also think, look, there are constraints; the ends don’t always justify the means. But again, I’m not telling people to go out and kill someone to harvest their organs and save five others. That’s the sort of thing that we philosophers debate in a seminar room. In the world as it is today, you can just do enormous amounts of good without doing harm. Honestly, we created the concept of effective altruism to be really pretty ecumenical across different moral viewpoints. And so, I see the most exciting work as the debate within effective altruism: okay, what is the way of doing the most good? Is it helping animals or humans, or trying to mitigate these global catastrophic risks? Rather than: is effective altruism correct or not?

LEVITT: So, you’re a philosopher. You’re a professor of philosophy at Oxford. So, I can’t let you out of here without you exposing us to a little cutting-edge philosophical thinking. Now, I know there’s something called the repugnant conclusion that a lot of philosophers have been pondering. Is that a topic you can explain to non-philosophers?

MACASKILL: Absolutely. So, the underlying field of moral philosophy that this sits within is population ethics, which is the ethics of creating new people. Can it be good to create a person with a happy life? Is the non-existence of some future person, if that person would be very happy, a moral loss in the same way a death is a moral loss? And once you get into that, you start comparing populations of different sizes. Is it a better world, or a better future, to have a small number of people with very good lives? Or is it better to have a very large number of people whose lives are not very badly off, they’ve got positive lives, but only just positive lives? Or is there something in the middle? I think intuitively we would say, look, we want the small number of extremely well-off lives. However, there are some arguments against this. So, start off with this small population. Let’s say it’s a billion people with extremely good lives, super-wellbeing lives. And let’s represent that as plus 1,000, let’s say. That’s their wellbeing.

LEVITT: Each individual is plus 1,000.

MACASKILL: Each individual. Now, call that world A. Then, we’re going to change that a little bit. We’re going to move to what we’ll call world A-plus, where we take those billion lives and make them even better, just a little bit. So, now they are all at wellbeing plus 1,001. The second thing we do is add another billion people, and they have really pretty good lives too, but not quite as good. So, we’ll say they’re at plus 900 wellbeing. So, the first question is: is this second world better than the first world?

LEVITT: Yeah. So, it seems a lot better, right? Because you’ve made all those first people better off, and everyone you’ve added is above zero, so they have positive lives. So, it seems really hard to argue that A-plus is worse than A.

MACASKILL: Exactly. And that’s, I think, what everyone would think. Okay. Next, we’ll move from A-plus to what we’ll call world B. You take all those same 2 billion people in world A-plus, who were at plus 1,001 and plus 900, and what we’re going to do is just have everyone at the same level of wellbeing. A-plus is a little bit unequal, so we’re going to make it equal. And also, we’re going to improve average and total wellbeing. So, let’s say everyone is now at plus 960. We’ve moved from a somewhat unequal society to a perfectly equal society, but also one where people on average are better off.

LEVITT: Okay. So, you compare A-plus to B. And again, B seems like it’s got to be better than A-plus, because it’s got less inequality and, on average, people are better off. And those are two things that people tend to like.

MACASKILL: Exactly. So, if we think that A-plus is better than A and B is better than A-plus, then we’ve got to conclude that B is better than A. But that means we’re concluding that population B, which is twice as big but with slightly lower average wellbeing, is better than population A, which is half the size but with slightly higher wellbeing. So, remember, population A was a billion people with plus-1,000 wellbeing, and population B is 2 billion people with plus-960 wellbeing.

LEVITT: Okay. And that doesn’t seem so repugnant yet, but I know where you’re going, because you could take this ad infinitum. And by the time you’re done, you’ve created a huge population where everybody’s at a plus one. And by the very same set of arguments that seemed so plausible going from A to A-plus to B, you’ve now said that this world that’s completely overrun with people who are all virtually miserable dominates that nice world you described in A. That’s what they call the repugnant conclusion, right?

MACASKILL: That’s the repugnant conclusion. From what seem like utterly incontrovertible premises, this A, A-plus, B argument that I gave you, repeated over and over again, you get this very implausible-seeming conclusion: that it’s better to have enormous numbers of people whose lives are just barely worth living than the original billion people with absolute-bliss, plus-1,000 lives. And that’s a paradox.
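
(A toy numerical version of the ladder described above, my own sketch using the episode’s figures: each A to A-plus to B step doubles the population and lowers everyone’s wellbeing from 1,000 to 960, a 4 percent drop, while raising total wellbeing.)

```python
# Iterate the A -> A-plus -> B step until lives are only barely positive.
population, wellbeing = 1_000_000_000, 1000.0   # world A from the episode
steps = 0

while wellbeing > 1.0:
    total_before = population * wellbeing
    population *= 2          # add a second, slightly worse-off group...
    wellbeing *= 0.96        # ...then equalize, as in the 1,000 -> 960 move
    # Each step still raises total wellbeing (factor 2 * 0.96 = 1.92 > 1):
    assert population * wellbeing > total_before
    steps += 1

print(f"{steps} steps later: {population:.1e} people at wellbeing {wellbeing:.2f}")
# -> ~170 steps: a vast population whose wellbeing has fallen to ~1,
#    even though every individual step looked like an improvement.
```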

LEVITT: Now, obviously, there are no good answers, right? No one has yet resolved this paradox in a way that makes people happy. Is that true?

MACASKILL: That’s true. In fact, people generally regard population ethics as maybe the hardest area of ethics, because you’re dealing with literal logical inconsistencies between different principles, different premises that each seem incontrovertibly true. And that’s a really tough thing to grapple with. So, what should we do in light of that? I think we should give some amount of weight to a variety of the different answers, lots of those different views, and, again, try to come up with the best compromise. So, if I’m allowed to use the economist’s language again, I think what we should do is maximize expected value, but where we’re uncertain about what values we ought to be following. And if you do that, then I think you end up with the conclusion that, at least for a sufficiently good life, perhaps not a just-eating-potatoes, plus-one kind of life, but a pretty good life, it is actually a good thing in and of itself to bring that life into existence in the world.
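
(A minimal sketch of what maximizing expected value under moral uncertainty could look like. The credences and scores are made up for illustration; nothing here is from MacAskill’s actual framework.)

```python
# Credences: how much weight you give each moral view (they sum to 1).
credences = {"total_utilitarian": 0.5, "average_utilitarian": 0.3, "person_affecting": 0.2}

# Hypothetical scores each view assigns to two futures (illustrative only).
scores = {
    "large_population": {"total_utilitarian": 10, "average_utilitarian": -5, "person_affecting": 0},
    "small_population": {"total_utilitarian": 4, "average_utilitarian": 6, "person_affecting": 0},
}

def expected_value(option):
    # Weight each view's verdict by your credence in that view.
    return sum(credences[view] * scores[option][view] for view in credences)

best = max(scores, key=expected_value)
print(best, {option: expected_value(option) for option in scores})
# -> small_population wins here (3.8 vs 3.5): the compromise across views
```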

LEVITT: Okay. So, that’s interesting to me personally, because I have a lot of kids. And I outraged many of my listeners when I said that I didn’t think I needed to feel guilty about that. Many of my listeners wrote in, angrily, saying that it was irresponsible to our poor planet to keep procreating. But as a philosopher, it seems you would say I’m doing the right thing for humanity, as repugnant as it might seem to my listeners. Is that correct?

MACASKILL: I think this argument against having kids on climate-change grounds is not a terribly good one. One reason is that you’ve got to think about the resources you spend on your kid: what would you have been spending them on anyway? Those things will also have a carbon footprint. Secondly, you can also just offset your kids’ emissions if you want. But more importantly, the climate-related harms are just a very small aspect of the total impact that a person has on the world. People also contribute productively to society. They help move forward innovation and moral progress. They pay taxes. All of these are good contributions to society. And if you think that over time the world has actually been getting better rather than worse, then you should think that the net effect of additional people has been positive. But the final thing is that this argument doesn’t take into account one of the benefits of having kids, which is the benefit to the kids themselves. So, I’ve personally had a good life. I feel glad to have lived. Empirically, that’s true for most people. And if you’re a good parent and you have kids who grow up to have good lives, that’s a benefit you’ve provided to them. Not only that, you can also educate them to be productive members of society who go on to make the world a better place.

Well, Will MacAskill certainly does a better job than I did myself of justifying my decision to have so many kids. But it’s too bad that the moral framework he uses to defend me is called the repugnant conclusion. I don’t agree with everything that Will argues about long-termism, but I find his ideas and his way of defending those ideas fascinating. In two weeks, we’ll be bringing you an episode with the prolific documentarian Ken Burns. Ken’s made epic series about the Civil War, Vietnam, jazz, and baseball, and in September his newest series will premiere. It explores the United States’ role before, during, and in the aftermath of the Holocaust. Is there anything new to say about the Holocaust? I’m interviewing Ken next week, and I can’t wait to find out. Until then, take care.

People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and Freakonomics M.D. All our shows are produced by Stitcher and Renbud Radio. Morgan Levey is our producer and Jasmin Klinger is our engineer. Our staff also includes Neal Carruth, Gabriel Roth, Greg Rippin, Alina Kulman, Rebecca Lee Douglas, Zack Lapinski, Julie Kanfer, Eleanor Osborne, Jeremy Johnston, Ryan Kelley, Emma Tyrrell, Lyric Bowditch, Jacob Clemente, and Stephen Dubner. Our theme music was composed by Luis Guerra. To listen ad-free, subscribe to Stitcher Premium. We can be reached at pima@freakonomics.com, that’s P-I-M-A@freakonomics.com. Thanks for listening.

*          *          *

MACASKILL: That was the time that I nearly died, because I accidentally put my foot through a skylight and fell, and had this very deep wound in my side. And obviously, if you’re going to have a major injury, the top of a building is not the best place to be.
