Episode Transcript
We have a bonus episode for you this week. I just wish there was a happier reason for it. My good friend Daniel Kahneman passed away recently, and like so many others, I’ll miss him dearly. As a small tribute to Danny, we’re replaying an episode I did with him back in 2021. Listening back over it, it does capture so much of what was special and unique about Danny. I’ll warn you in advance that the sound quality is not great. We were just coming out of Covid, and Danny taped the conversation in his apartment while his partner went about her daily routines in the background.
* * *
Danny Kahneman is an intellectual giant. Trained as a psychologist, not an economist, he nonetheless received the 2002 Nobel Prize in economics. But a Nobel Prize doesn’t begin to capture Danny’s influence. Incredibly, of all the research papers ever written in the social sciences, Danny and his co-author Amos Tversky have written not just one but two of the 10 most cited articles of all time. I’ve had a pretty successful career in academics, but Danny has 26 papers that have more citations than my most heavily cited paper. And he proved he could connect with a popular audience as well. His 2011 book, Thinking Fast and Slow, was a blockbuster bestseller.
Welcome to People I (Mostly) Admire, with Steve Levitt.
I met Danny Kahneman for the first time in 2009. I was in New York City shortly after the release of Superfreakonomics, the follow-up to Freakonomics, and that book was proving to be quite controversial because of a chapter we had written on climate change. Anyway, I was eating dinner in a New York City restaurant and, completely by chance, Danny happened to be also dining there. A mutual friend introduced us, and I was shocked when Danny mentioned offhandedly that he had actually just been reading Superfreakonomics. I asked him what he thought of it. He replied, “I think it will change the course of mankind.” I couldn’t believe it — one of the world’s greatest thinkers thought my work was changing the course of mankind? But Danny wasn’t finished. After a brief pause, he continued, “I think your book will change the course of mankind and not for the better.” Well, it turns out Danny didn’t much like our climate change chapter either, but I was still flattered that he thought my book might matter enough that it could cause the downfall of humanity. From that rather inauspicious beginning, somehow, Danny and I eventually grew to be friends and later business partners in a little consulting company we called The Greatest Good. At that company we tried to make the world a little bit better by applying cutting-edge ideas in economics, but we discovered it wasn’t so easy to get people to pay us to do that. So we closed that company a few years back, and since then I’ve seen a lot less of Danny than I’d like.
LEVITT: Danny, it’s so great to talk with you again. It’s been way too long and I miss you.
KAHNEMAN: Mutual.
LEVITT: So it’s been roughly a decade since Thinking Fast and Slow was published. And even though that book turned out to be a massive bestseller, I still can’t believe you wrote another book, because I remember it practically killed you writing the last one. Wasn’t it torture?
KAHNEMAN: Well, Thinking Fast and Slow was worse than this one because it was a lonely endeavor. This time I worked very closely with one of my collaborators — Olivier Sibony, he is in Paris, and we were on Zoom for an hour or two a day for the whole period. Because of Covid — the virus helped us a lot. Previously we had been meeting once a month for several days, and it turned out that Zoom was way more efficient.
LEVITT: One thing that I really admire about you, Danny, is that you are a lifelong learner. So, for instance, I would have fully expected that you would have learned from your last book, from Thinking Fast and Slow, that you needed co-authors and you went out and found co-authors. But much more seriously, what’s striking about your new book, which is called Noise, is that while it builds on your enormous lifelong academic accomplishments, many of the ideas are new and you’ve created these ideas in the last 10 years. And I find that to be really admirable, that you weren’t just rehashing your old work.
KAHNEMAN: Oh, I mean, Thinking Fast and Slow was such an experience that after that I had forgotten everything that was not in the book, so I really needed a new thing. And Noise was quite new, actually.
LEVITT: So could you give us the three-minute version of Noise, just to give listeners a taste of what’s in the book?
KAHNEMAN: Sure. Noise is unreliability. And we speak of system noise when there is a system: an E.R. is a system, an insurance company that sets premiums is a system. And noise in that system is when individual people who are supposed to be interchangeable in terms of their roles actually give very different answers to the same problem. So you have a company, and whoever interacts with the company is facing a lottery: which employee or which member of the organization will you interact with? And that lottery is noise.
LEVITT: Okay, and so just to be totally clear, this is different from bias, this is orthogonal to bias.
KAHNEMAN: This is really the complement of bias. And in terms of the standard way of measuring accuracy and measuring error, bias and noise are completely independent of each other, and there is an equivalence between the amount of error that bias produces and the amount that noise produces. Bias is really the average error, and noise is the variability of error, the standard deviation of errors.
LEVITT: So what you’re talking about is that when you interact with something, whether it’s a company, or the medical profession, or a grocery store, the goal would be that there’d be no bias and the same set of inputs should lead to the same output and the right output. If you do that, then that system is both without bias and without noise.
KAHNEMAN: Perfect, yes.
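To make the distinction concrete, here is a minimal sketch in Python, with made-up numbers, of the error decomposition Danny is describing: bias is the average error, noise is the standard deviation of the errors, and together they account for the overall (mean squared) error.

```python
# Hypothetical example: ten underwriters quote a premium for the same case,
# and we pretend the ideal premium is known so that errors can be computed.
true_value = 10_000
judgments = [9_500, 12_000, 8_800, 11_200, 10_400,
             13_000, 9_000, 10_800, 11_500, 9_700]

n = len(judgments)
errors = [j - true_value for j in judgments]

bias = sum(errors) / n                                        # average error
noise = (sum((e - bias) ** 2 for e in errors) / n) ** 0.5     # std. dev. of errors
mse = sum(e ** 2 for e in errors) / n                         # overall error

print(f"bias             = {bias:10.1f}")
print(f"noise            = {noise:10.1f}")
print(f"MSE              = {mse:10.1f}")
print(f"bias^2 + noise^2 = {bias ** 2 + noise ** 2:10.1f}")   # equals MSE
```

Because both terms enter the overall error squared, a given amount of bias and the same amount of noise hurt accuracy equally, which is the sense in which they are independent but equivalent contributors to error.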
LEVITT: Okay, and so your book is about noise and noise is really the sort of ignored step-sister of bias. I was lucky enough, when we were partners together at a little firm called The Greatest Good, to have a front-row seat watching you begin to work out the ideas that eventually appeared in Noise. And I think the original inspiration came from some work you were doing for a large insurance company. Could you describe that work?
KAHNEMAN: Well, in that insurance company, the topic came up: do their underwriters actually operate in the same way? So we ran an experiment; we called those experiments “noise audits.” The idea was that they produced some cases, presented those cases to several dozen underwriters, and had each underwriter select a premium that was appropriate to that risk. What made the issue interesting was that I also asked the executives in that company how much of a difference they expected. And the question was put as follows: suppose you pick two underwriters at random, you compute the average of the two premiums they set, and you divide the difference by the average. So, in percentages, how large do you expect the difference to be? Now, it turns out, there is a number that seems to come to everybody’s mind as a tolerable amount of noise, and that’s roughly 10 percent. The correct answer at that insurance company was roughly 50 percent, five zero. Five times larger than people had expected.
LEVITT: I think it’s easier maybe to think about payouts. So if someone’s fallen and gotten hurt and now the insurance company has to make a payout, you’re saying you give two claims adjusters the exact same case, all the information about the circumstances, and if a fair value of a payout was $20,000 —
KAHNEMAN: I would expect the median difference between two claims adjusters would be about $10,000. So one of them would say 15 and the other will say 25.
LEVITT: And the executives expected the number to be between $1,000 and $2,000, not the $10,000 that it turned out to be. Okay. And I have to admit, when you started this project, Danny, I also thought the numbers would be small, and I was even more pessimistic because I thought, well, even if the numbers are big, nothing’s going to happen. Firms always find a way to explain away results, to say it was a fluke, it didn’t matter, and to ignore the kind of advice you were giving them. But I have to say, it was very different in this case.
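For anyone who wants to see the arithmetic behind those figures, here is a minimal sketch in Python of the noise-audit measure Danny describes: take two judgments of the same case, divide the absolute difference by their average, and look at the typical size of that ratio. The premium numbers are hypothetical, not the insurer’s data.

```python
import itertools
import statistics

# Hypothetical premiums quoted by five underwriters for the same case.
premiums = [15_000, 25_000, 18_000, 22_000, 20_000]

def relative_difference(a: float, b: float) -> float:
    """Absolute difference between two judgments, as a fraction of their average."""
    return abs(a - b) / ((a + b) / 2)

# The single pair from the conversation: $15,000 vs. $25,000 -> 50 percent.
print(f"15k vs. 25k: {relative_difference(15_000, 25_000):.0%}")

# A noise audit averages that ratio over every pair of judges.
ratios = [relative_difference(a, b) for a, b in itertools.combinations(premiums, 2)]
print(f"mean relative difference across all pairs: {statistics.mean(ratios):.0%}")
```

Executives guessed that number would come out around 10 percent; the audit put it closer to 50.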
KAHNEMAN: Oh, well, they viewed it as their experiment, it was a challenge. They took it on. They created the materials. And they carried out that experiment to find out what they were doing, with our help. And because it was set up at their job, they really accepted the results.
LEVITT: By their own accounts, when they went back and tried to tally up the value of the work you had done, I seem to remember that they internally valued it at something like $2 billion. Am I not correct?
KAHNEMAN: They realized that noise was costing them probably over a billion dollars a year. The cost is large.
LEVITT: So we were part of a little firm called The Greatest Good that consulted with big companies and tried to help them. And honestly, I think you would probably agree, Danny, we weren’t very good at it in the sense that we didn’t actually get very many companies to change very much. But this was the one case in the, say, six or seven years we did this, where there was just massive change.
KAHNEMAN: Setting the wrong number for a claims adjuster is actually a risky thing for the company. You set it too high, you’re going to be too lax and you’ll pay too much. You set it too low, very likely there’ll be litigation, and so getting it right is really quite important. And when there is that amount of variability, that means that most of the time they don’t get it quite right.
LEVITT: One of the things that I found really refreshing about that project was the solutions actually turned out to be relatively straightforward and intuitive. They could, with relatively little effort, completely change their decision process to get rid of a large chunk of the noise.
KAHNEMAN: They did implement some ideas, and indeed, the ways of cutting down on noise are pretty obvious. You want to have a process that will make people think in the same way, and at the same time, you want that process not to be too bureaucratic, because you don’t want people to fill out forms.
LEVITT: So one of the things that helps with what you call decision hygiene, which I think is a really good name for the process of thinking sensibly about outcomes, is that you want to break a problem down into small pieces, to not look at a big file and just blurt out a number, but rather to take a number of small steps that are pieces of the overall decision, independently, and put them together.
KAHNEMAN: Yeah. And that is very general about large judgment problems. The intuitive way of going about that is to assimilate a lot of information and then to trust your intuitive system to come up with a solution, and this is clearly not optimal. We know that people do better than that if they have a plan, if they break up the problem, if they evaluate each part of the problem separately and independently from the other parts, and if they postpone their intuition until they have enough information.
LEVITT: When I was a student at MIT, I took a one-day course on guesstimation. The professor who taught it was from the sciences, and I guess he had grown up on a farm. He and his brother each had to drive a tractor all day, and they would get bored, so their job at the beginning of the morning was to come up with some number that the other one would have to guesstimate, and they’d come back at lunchtime and see how close they had gotten to it. And he gave essentially the exact same advice you’re giving, which is that you should divide the problem into as many independent pieces as possible. And he said when you make a mistake on one of those little pieces, it’s often offset by another piece. So, for instance, if you try to imagine the total mass of all the trees growing on the planet, and you start by asking, “Well, how many trees are there? And how much mass does each tree have?” the people who think of big trees tend to guess that there are fewer trees out there, but that they have more mass per tree, and so those two pieces offset. The other thing he said, which isn’t mentioned by you, is that as soon as you’ve gone through and gotten one set of estimates, “Stop and start over with a completely different approach to the problem.” He said in guesstimating anything, you should never be off by an order of magnitude, even if you have no idea about the problem, which turns out to be true: you literally can guesstimate anything if you use that process.
KAHNEMAN: That’s amazing. You know, it was the physicist Fermi, I think, who was a master at breaking up problems like this. And Philip Tetlock, who does superforecasting, teaches his superforecasters to Fermi-ize, that is, to break up a problem and estimate the different parts. So, yeah, there is a procedure.
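As a small illustration of the Fermi-izing idea, and nothing more than that, here is a sketch in Python with entirely made-up numbers: break the question into pieces, multiply rough guesses for each piece, then repeat with a completely different decomposition and average the two answers on a log scale.

```python
import math

# Approach 1: total tree mass = (number of trees) x (average mass per tree).
trees_on_earth = 3e12            # rough guess: a few trillion trees
avg_mass_per_tree_kg = 1e3       # rough guess: about a tonne per tree
estimate_1 = trees_on_earth * avg_mass_per_tree_kg

# Approach 2: total tree mass = (forested area) x (tree mass per square meter).
forest_area_m2 = 4e13            # rough guess: tens of millions of km^2 of forest
tree_mass_per_m2_kg = 50         # rough guess: average standing biomass per m^2
estimate_2 = forest_area_m2 * tree_mass_per_m2_kg

# Combine the two independent approaches by averaging on a log scale
# (i.e., take the geometric mean), which works in orders of magnitude.
combined = math.sqrt(estimate_1 * estimate_2)

for name, value in [("approach 1", estimate_1),
                    ("approach 2", estimate_2),
                    ("combined", combined)]:
    print(f"{name:>10}: ~10^{math.log10(value):.1f} kg")
```

The numbers above are placeholders; the point is the structure. Errors in the individual pieces tend to partially offset, and two independent decompositions that land within an order of magnitude of each other are a good sign that the estimate is in the right ballpark.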
LEVITT: And another thing that seems quite important is if you’re going to have multiple people assessing a problem, the importance of them doing it independently rather than in discussion with one another, for instance.
KAHNEMAN: That is really vital, and this is something that people really do not like. In the book we tell the story that we heard from Nathan Gonzalez, a psychologist who was consulting with universities on the process of admitting students. They were grading candidates’ essays. He noticed that one person would read an essay, put a grade on it, and pass it on to a colleague with the grade on the front page of the essay, and he said, “Look, I mean, this isn’t the optimal way of doing it. You should put the grade on the back of the essay, so the next person will not see it.” And they told him, “Oh, we used to do it that way, but there was so much disagreement.” Now that is actually what happens with noise — people don’t want to detect that there is noise.
LEVITT: It’s a challenge in that the problem you’re trying to solve in the book, really is how do you get rid of noise? And yet if that’s not the objective of the system you’re working with, then it’s impossible to ever accomplish that objective. I mean, that’s something you’ve run into over and over, right? It’s so often typical economists decide what people should do before they actually talk to real people.
KAHNEMAN: Well, this is really what we’re hoping for in terms of the fate of the book: for people to become convinced that there is a problem that deserves looking at. And our advice is to measure the problem. Do a noise audit. Find out if in your organization there is as much noise as has been found in other organizations, and then try to estimate how costly it is. And actually one way of estimating it is by comparing it to bias, how costly a bias of the same magnitude would be. And then maybe people will recognize that there is a problem and will want to do something about it.
LEVITT: So I really loved this book, and being totally honest, I didn’t expect to. I think, like many people, I carry this feeling inside of me that noise just isn’t that important. And ultimately you convinced me in many of the settings — but I have one criticism of the book that I want to ask you about, because I think whether noise matters or not depends critically on what the consequences are to the people who face the noise. So, for instance, in the insurance example that you talked about, the insurance company loses when they’re too optimistic and they lose when they’re too pessimistic, because there is an adversary who will sue them and litigate, so they don’t benefit from it. But one of your leading examples is about noise in the criminal justice system. And you make the correct observation that which judge you get matters a lot for your prison term, and it even matters maybe what time of day you see the judge or whether the judge’s favorite football team won or lost on the previous Sunday. Okay. But here’s what I would say: in a system where I’m supposed to get five years in prison and sometimes I get three and sometimes I get seven, well, I’m just about as much happier when I get three years as I am unhappy when I get seven. So if there’s no bias, as a criminal I wouldn’t care that much about noise, and I wouldn’t pay that much to shift to a system that didn’t have noise. How do you respond to that? Do you see why I don’t like that example nearly as well as I like the example about insurance?
KAHNEMAN: Your attitude is unusual, I would say. For most people it would be an essential aspect of justice that similarly situated people will be treated identically, so that’s a principle of fairness. You want two people who have committed the same crime to get the same sentence, so you surprise me.
LEVITT: So I think people are confused. One, I think they’re thinking about bias, and so if they hear that one person got seven years and one got three, they’re very concerned that maybe it happened because the one who got seven was African-American and the one who got three was Caucasian, okay? And obviously that’s a huge, huge concern and pervades criminal justice. So I think that many people who are responding to the noise in the criminal justice system are actually fearful that the noise is something other than noise.
KAHNEMAN: Well, suppose that the sentencing system went as follows: you get evaluated by a judge, he sets a sentence, and then a lottery is run that adds or subtracts one or two years in prison from the sentence that the judge set. Would you think that that system is desirable or acceptable or tolerable? I mean, life is unfair, but the justice system should not be unfair.
LEVITT: In this setting, bias is really worrisome. I think bias in criminal justice undermines our societal values. But noise — I mean, I’m just actually thinking about myself as, like, imagine myself being sentenced, how upset would I be if I knew that I would get two years longer because I was male or because I was over the age of 50? I would be furious about that. But if you said, “Hey, we’re going to give you either exactly five years or an average of some draw between three and seven,” I don’t know — personally, I just wouldn’t care that much about that kind of noise.
KAHNEMAN: I think few people would take that bet, but I’ll say something else: Noise is produced by biases. That is, the individual judge that you are encountering has a way of thinking that is different from the way of thinking of other judges, and in that sense, you can speak of each judge as biased in a particular way.
LEVITT: Maybe I’m being way too much of an economist. Let me give you another example. Let’s say there was a particular store where, instead of giving me back my exact change, they had a randomizing device that either rounded up or rounded down to the dollar, so that I never got back exactly the right change. I always got more or less. But it was fair, they didn’t cheat me — like, it wouldn’t drive me away from that store. It wouldn’t bother me at all. It might be kind of fun to have that element of shopping. I think that’s another case, a little bit like the criminal justice system, where if I benefit as much from the good stuff as I’m hurt by the bad stuff, then I tend not to worry about noise.
KAHNEMAN: In your particular example, if the store does this regularly, then it’s a repetitive game, and that’s completely different. But imagine a world in which some people are selected completely at random at red lights and either given a thousand dollars or fined a thousand dollars for no reason whatsoever. You wouldn’t want such a world. My sense is that the sentiment that I represent is more common, and that a noisy justice system where the sentence you get could be plus or minus four years is really intolerable. But, interestingly enough, it’s the system that judges like; they like the noisy system. When there was an effort to impose uniformity in sentencing, judges hated it. So, I don’t think I agree with you on the morality of it, but the fact is that we all live with that system, which is extraordinarily noisy.
LEVITT: Yeah, I think another psychological reason why getting rid of noise is valuable, is that the people who get lucky, who get three years instead of seven years, they don’t actually appreciate that they got lucky. They think it was because they deserved it, and the people who get seven are furious, they’re outraged, they have newspaper articles written about them. And it has the sense of undermining how society functions.
You’re listening to People I (Mostly) Admire with Steve Levitt, and his conversation with Nobel laureate Daniel Kahneman. After this short break, they’ll return to talk about Daniel’s work with Amos Tversky that resulted in the creation of behavioral economics.
* * *
Before we continue, I think it’d be helpful to hear a bit about Danny’s background. Danny Kahneman was born Jewish in Europe in 1934, and he spent much of his childhood fleeing the Nazi regime. His family then moved to Palestine shortly before the creation of the state of Israel, where he studied psychology and eventually became a professor at Hebrew University. It is there that he first met his frequent co-author, Amos Tversky. Danny describes working with Amos as “magical” right from the beginning. Now, behavioral economics is all the rage these days. I can’t tell you how often someone, whether it’s a CEO, a government official, or an economics graduate student, tells me that behavioral economics is a solution to whatever problem they have. But without Danny and his co-author Tversky, who died in 1996, behavioral economics likely wouldn’t even exist. The work they did was so creative and unconventional, it just transformed the way people think about the world. I’m so curious to hear about how they got started on that path, where their ideas came from, and also to ask Danny how he felt about Michael Lewis’s book The Undoing Project, which tells the story of Kahneman and Tversky’s partnership in a very intimate and personal way. You’ll notice there’s some background noise in the next part of our conversation.
KAHNEMAN: Oh excuse me…
Danny actually now lives with, of all people, Amos Tversky’s widow Barbara.
KAHNEMAN: …Barbara, you’re making noises. Sorry about that. I think there was some banging in the kitchen.
LEVITT: So I would love to go back and talk a little bit about history, Danny, so you and Amos Tversky more or less created what became the field of behavioral economics from scratch in the 1970s. For those who aren’t familiar with behavioral economics, in your own words, what is that field?
KAHNEMAN: Well, we were important in a field that is called the psychological study of judgment and decision-making. It became behavioral economics when economists became interested in it. And now behavioral economics is populated by economists who know some psychology and psychologists who have been teaching themselves some economics — mainly it’s really applied social science. And it’s the study of human characteristics. In order to interact with humans, and this is what governments have to do, what organizations have to do — we’ve got to understand them, and behavioral economics is really an attempt to understand humans, so that you can interact with them better.
LEVITT: So old school economists also thought what they had were models of humans, but I think the old rational models of humans, the point that you and Amos, and later others have made is that those models are actually terrible predictors of how people behave in the real world.
KAHNEMAN: Well, yes. Really economics is a logic of decision making and then you have the assumption that people behave logically and people don’t. So when you do the psychology of how people actually make judgments and actually make decisions, they don’t follow logic. They’re not stupid and they’re not irrational, I hate the word irrationality. People are quite reasonable, but they are not logical.
LEVITT: And I don’t think, Danny, that I appreciated you enough when we were spending a lot of time together. I knew you were brilliant because I observed it, and I knew about behavioral economics, but it was only more recently that I really went back and looked at the body of knowledge that you were creating in the 1970s. You and Amos had a gift that I really cannot understand, in that over and over and over, you figured out how to take an incredibly complex situation, ask simple questions on which people would make very predictable mistakes, and change the way academics and eventually the general public thought about the problem. How did you do it, over and over and over?
KAHNEMAN: Well, I mean, basically, we were studying ourselves. We were spending hours every day together, and we were inventing problems where, although we knew the correct answer, we would be tempted to give the wrong answer. And we were looking for problems, for a single question that would tell a story, so that when people make that mistake, you know something about the way they think. Actually, it was that feature of our work, the fact that we had very simple questions, that gave it some impact across disciplines, because I don’t think you could get economists interested in psychological experiments. But when it had the character of a riddle, then everybody finds riddles interesting.
LEVITT: Yeah. Essentially, you were academic storytellers in a way that’s very unusual.
KAHNEMAN: Yeah. I mean, we asked some people, how much would you pay — that was in the period where there was acid rain that was polluting the lakes, so, it’s about 40 years ago. This was in Canada and we asked people, how much would you pay to clean one lake from acid rain pollution? And we asked other people, how much would you pay to clean up all lakes in Ontario from acid rain pollution? And people gave roughly the same number. And that’s caused difficulties for some interpretations of people’s attitude to public goods.
LEVITT: So let me ask about that, Danny, because why in the world would you think to ask that question? Why did you think that people would be so bad at distinguishing between a single lake and an entire province? Because to me, that’s such a far-fetched result I couldn’t have imagined asking in the first place.
KAHNEMAN: Well, actually it’s a fairly straightforward psychological prediction, psychologists think that we think in terms of prototypes. So when there is a category, we have a prototype in mind, and so the prototype for cleaning pollution is cleaning one lake. That’s the prototype, and you have an emotional reaction to the prototype, and when I tell you, “think about cleaning all lakes in Ontario,” you make that into a prototype. So those two questions are much more similar than they appear. And if you’re a trained cognitive psychologist, that’s not a surprising result.
LEVITT: Well, it’s a good thing we have psychologists and economists, because an economist in a million years couldn’t have made that leap. So I see the huge influence of psychology on economics. Has economics likewise had an enormous impact on psychology?
KAHNEMAN: No.
LEVITT: Why do you think that is?
KAHNEMAN: Well, because economics depends on psychological assumptions. There are assumptions about who the economic agent is, and you need those assumptions in order to make economic predictions about how markets work. So you need to make assumptions about people. Psychology does not depend on economic assumptions. So clearly, psychology is more of a foundational discipline for economics than vice versa. Where economists are clearly far better than psychologists is in their methods. And I think that there is some influence of economics on psychology in terms of rigor, but I think, in the background, the way that economists do things has been quite important.
LEVITT: So, you know, there’s come to be a tremendous focus on using behavioral economics to create behavior change, to get either yourself to do something you don’t want to do or to get someone else to do something they don’t want to do, whether it’s the U.K. Nudge Unit or Angela Duckworth and Katie Milkman’s Behavior Change for Good initiative. I would say I am approached by a company once a week that says, “Hey, we would like to use behavioral economics to try to make our customers, our clients, do something different.” But by and large, it seems to me empirically that the expectation about what behavioral economics can do for behavior change has outpaced the reality. Is that your view as well?
KAHNEMAN: Absolutely. The successes of behavioral economics are small, and typically what you can accomplish with behavioral economics is a small change that costs virtually nothing. Changing behavior is extremely difficult, and there are many optimists in psychology, but in behavioral economics, I think people are fairly reasonable about what they’re expecting, and they are not expecting to be able to make big changes quickly.
LEVITT: Yeah, I really think it’s a corporate mindset. I think because of books like Thinking Fast and Slow, you’ve managed to convince lay people that behavioral economics is the most powerful tool they’ve ever encountered. I mean, persuasion is such a hard thing to do, but you’ve been very persuasive.
KAHNEMAN: I mean, you’re absolutely correct if you’re implying that we’ve been too persuasive. There are particular areas where you get very substantial effects, but in many areas where you want to achieve change, it’s extremely difficult.
LEVITT: So the author, Michael Lewis, wrote a book published in 2016 called The Undoing Project, which is the story of your remarkable collaboration with Amos Tversky. Did you read the book?
KAHNEMAN: Of course.
LEVITT: How did it feel? It’s a very intimate book describing the, I would say, the surprisingly complex relationship that you and Tversky had.
KAHNEMAN: It’s a true story, and there is an element of fiction, in that when somebody writes a story, you dramatize it. There is more contrast, there is more conflict. Especially, he made Amos and me more different than we were. I mean, I collaborated on this book willingly. It’s true, it was complex; it was the most significant relationship in my life and it changed my life completely, but there were some difficult times.
LEVITT: I know whenever I read a story written about me or when I’ve written about other people, there’s a tendency to react very strongly to small details that others wouldn’t notice. So, for instance, Dubner and I wrote about an economist who sold bagels and donuts, and we wrote a long, very flattering profile. And when it came out, he was outraged because Dubner had noted in the piece that as we drove down the highway, he was going 72 miles an hour. And he was furious, “I never drive more than 70 miles an hour.” When nobody else would care about that at all. Did you have any of those moments maybe in this book?
KAHNEMAN: Yes, of course.
LEVITT: Any you remember you want to talk about? Any particular things that really set you off?
KAHNEMAN: No.
LEVITT: I know, I don’t like to talk about the ones that embarrass me. Was it an easy decision to cooperate with Michael Lewis? I could imagine reservations.
KAHNEMAN: Well, no, it wasn’t easy. Barbara Tversky was Amos’s widow, and she and I are living together now. So she was the one banging in the kitchen. And the basic story, which Michael describes, was that when we came to the United States, Amos got a disproportionate amount of credit for the work we did, and then he died in 1996. And from that time, I have gotten a disproportionate amount of credit for the work that we did, by a lot. And my understanding was that, you know, there had to be some redress, and I felt that I was duty-bound to tell the story. I didn’t control the story in detail, but it brought Amos back into the picture in a way that the Nobel Prize, which I had gotten alone because he was no longer alive, did not. All of that had created a distortion that needed correcting.
LEVITT: That’s interesting, I hadn’t thought about that, but I do think that it is a wonderful piece of history for people to understand. And for me, it was really eye opening. I mean, among other things, Danny, we spent so much time together and I had never done the math to think about the fact that you must have spent your youth as a Jew in Europe during World War II, and I feel incredibly bad about the fact that I never asked you about that. It must have been awful.
KAHNEMAN: You know, I survived. And relative to many others, I had an easy war. It wasn’t actually easy. But, I survived. And I don’t attribute anything of what I’ve done to the difficulties of my childhood.
LEVITT: I get the sense you don’t like to talk that much about the past and the difficulties, which is why it’s interesting that so much of it comes out in the Michael Lewis book.
KAHNEMAN: I mean, you know, the collaboration with Amos Tversky was a fascinating chapter in my life. You find a person who is not exactly a soul-mate but a mind-mate, and with whom you have an enormous amount of fun, and you are doing creative work, and you know you’re doing good work and you’re laughing all the time. It was an exceptional collaboration. It was the luckiest thing that ever happened to me, and I was happy to talk about it.
LEVITT: So I’m curious, why did you write Noise, Danny? I understand why you wrote Thinking Fast and Slow, because that was an unbelievable tome that really collected so much knowledge into one place that might have been hard for people to find otherwise. But you’re not as young as you used to be; you could have done so many things with your time. What drove you to want to create this book?
KAHNEMAN: Actually, you know, it was a collaborative effort. So Olivier Sibony and I and another friend, Dan Lovallo, we started out thinking about what could be done to prevent noise. And they were consulting for McKinsey. We met, we did something together, and the book really started from that. It started from the idea of how would you advise organizations to cut down on noise and — here my age played a role. If I had been younger, I would have started to study noise, I would have run experiments. But I was too old to do that. So the only thing we could do, really, was to write a book.
LEVITT: What motivates you, Danny? I’ve never really been sure.
KAHNEMAN: Curiosity, really. And what mistakes have I been making? That I’m very curious about. I like changing my mind, and I have plenty of occasions to change it.
LEVITT: Why do you think you like to change your mind when virtually everyone else fights desperately to cling to what they believed yesterday?
KAHNEMAN: I mean, it’s interesting that for me, changing my mind is the pure experience of having learned something. That’s when I’m sure that I’ve learned something. Yesterday I was stupid, and now I’ve seen the light. And so that’s the experience of changing one’s mind, and if you view it that way, it’s quite pleasant.
LEVITT: So you’re 86 years old, Danny, and still, obviously to anyone who is listening to this conversation, as sharp as you’ve ever been. Do you have any advice for people who aspire to stay mentally fit as they age?
KAHNEMAN: You know, this is really a case of use it or lose it, and that was one of my reasons for wanting to write the book: that it would use my mind. And it’s been very good for me. When you keep thinking, you deteriorate more slowly.
LEVITT: So what, if anything, would you tell a young person that might help them to make choices that would lead to a life worth living?
KAHNEMAN: You have to follow what you are inclined to do, and, if you’re a scientist or a researcher, you have to be willing to discard ideas that don’t work. And if you find yourself very obstinately sticking to ideas that don’t work, you’re in the wrong profession. That’s one piece of advice that I would give, but otherwise I don’t believe in giving advice.
I’m no Danny Kahneman, but I am curious and I’m definitely still using my brain and I don’t hesitate to abandon bad ideas. Hopefully that means I’ll still be going strong at the age of 87.
* * *
What you just heard was an encore presentation of my conversation with Daniel Kahneman from back in 2021. Next week, we’ve got a brand new episode featuring Monica Bertagnolli. She’s the head of the National Institutes of Health and a cancer researcher. As always, thanks for listening and we’ll see you back soon.
* * *
People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. This episode was produced by Morgan Levey and mixed by Dan Dzula. Our theme music was composed by Luis Guerra. We can be reached at pima@freakonomics.com, that’s P-I-M-A@freakonomics.com. Thanks for listening.
LEVITT: Wait, there was something that wasn’t in the book, Danny? I thought everything was in that book.
KAHNEMAN: Well, there were a few things, but certainly I cannot remember any of them.
Sources
- Daniel Kahneman, professor emeritus of psychology and public affairs at Princeton University.
Resources
- Noise: A Flaw in Human Judgment, by Olivier Sibony, Daniel Kahneman, and Cass R. Sunstein (2021).
- Thinking, Fast and Slow, by Daniel Kahneman (2011).
Extras
- “What’s the Secret to Making a Great Prediction?” by No Stupid Questions (2021).
- “The Men Who Started a Thinking Revolution,” by Freakonomics Radio (2017).
- “How to Be Less Terrible at Predicting the Future,” by Freakonomics Radio (2016).