Angela DUCKWORTH: Hi, Stephen.
And Katy Milkman …
Katy MILKMAN: Hi!
They wanted to solve a problem …
MILKMAN in a previous Freakonomics Radio episode: A problem that, if we fixed it, could truly solve every social problem we could think of.
The problem is: ourselves.
DUCKWORTH in a previous Freakonomics Radio episode: In other words, the problem with human beings is that they’re human beings and that they repeatedly make decisions that undermine their own long-term well-being.
So Duckworth and Milkman started putting together a project …
DUCKWORTH: We’re calling it Behavior Change For Good, with a deliberate double entendre: “for good” as in “permanent,” but also as in “good for you.”
The idea was hatched in response to a competition from the MacArthur Foundation. The prize: a $100-million research grant. But the Behavior Change for Good project didn’t even advance to the final round.
DUCKWORTH: I was not only devastated, but I was surprised. Arrogance. I don’t know — narcissism. I was shocked to hear that we were not advanced.
But by then, Duckworth and Milkman were already recruiting a dream team of fellow researchers. They’d signed on a bunch of corporate partners. And they’d fallen in love with their project. So: they found some other money — not $100 million, but enough to get it going. Their mission: to determine the best behavior-change practices in three realms. Number one: health.
MILKMAN in a previous Freakonomics Radio episode: Think smoking cessation, healthy eating, increasing exercise, reducing alcohol consumption.
Number two: education.
MILKMAN in a previous Freakonomics Radio episode: Can we get kids to have better outcomes in school and stick to school?
And finally: personal finance.
MILKMAN in a previous Freakonomics Radio episode: Can people make better financial decisions on a daily basis so they’ll have better financial outcomes?
After months of planning, Duckworth and Milkman convened the first summit of their academic dream team.
MILKMAN: Hi, everybody. I’m super excited…
They met in an airy conference room in a sleek building in the University of Pennsylvania medical center. There were lots of psychologists, several economists, a few computer scientists and M.D.’s, some business and marketing professors, an education scholar — and us. So, coming up today: an inside look at how to put together a massive research project whose ambitions are even larger.
DUCKWORTH: We think — because this is like the Hall of Justice, with all the superpowers in one place — that we might have a shot at doing something that hasn’t been done before.
You’ll hear some of the problems they’ve identified.
David LAIBSON: We’re spending hundreds of billions of dollars in colleges and, I think, we’re not getting much value for our money.
You’ll hear some of the more obvious challenges:
Adam GRANT: Most behavior change is actually not desirable, and that’s one of the major things that stands in our way.
And: the risks of such a high-stakes enterprise.
Danny KAHNEMAN: If they fail, that’s going to be quite costly for a long time.
That’s coming up, right after this:
* * *
This project, Behavior Change for Good, is noteworthy — to me at least — because it represents the next logical step in a revolution that’s been brewing for a few decades. It began with the research of Danny Kahneman and Amos Tversky, a pair of Israeli psychologists who changed the way we think about thinking and decision-making. The revolution was furthered by the economist Richard Thaler, who believed that his field should acknowledge that people rarely behave as rationally as economic models predict. That people, as Angela Duckworth says, “repeatedly make decisions that undermine their own long-term well-being.”
DUCKWORTH: … repeatedly make decisions that undermine their own long-term well-being.
And, therefore, that it might be wise to help people make better decisions — for themselves and for society. Maybe it’ll take a nudge. Maybe it’ll mean expanding a choice set — or shrinking it. Maybe it’ll mean redesigning how the incentives in a given situation are set up, whether through smart algorithms or old-fashioned human touch. Essentially, it’s about helping people get the satisfaction they need in the short term and the outcomes they’ll want in the long term.
This is the revolution that’s been happening, a behavior-change revolution. It began in academia, where it has come to be greatly valued. Kahneman won a Nobel Prize in economics in 2002; Thaler just landed the Nobel a few weeks ago. The revolution has been creeping into government policy shops and commercial firms. We’ve talked about this in previous episodes like “Big Returns From Thinking Small,” and “The White House Gets Into the Nudge Business,” and “The Maddest Men of All.”
So, yeah, the revolution is real, but it’s hardly mainstream yet. And that is what Duckworth and Milkman want to change. This will not be easy. Institutional and societal change, when it happens at all, usually happens slowly and with a lot of pushback. Also, behavior change is inherently a big ask, especially in the realms they’re going after. It’s a lot more fun to cut class and spend $10 on a cheeseburger today than to go to class, skip the cheeseburger, and invest the $10 for the future. But Duckworth, when she first addressed her fellow researchers at Penn, projected nothing but confidence:
DUCKWORTH: This is actually, I think, the best scientific problem and the most pressing social problem that anybody could be working on. We’re honored that you decided to show up and that you want to work on it with us.
She did acknowledge that thinkers from previous eras — from Aristotle to Freud — had wrestled with the problem of self-destructive behavior.
DUCKWORTH: But we’re working in the 21st century, so technology is something that was not an affordance for Freud. If you take any rough metric — any back-of-the-envelope calculation of how much does behavior play a role in urgent social problems — it’s huge. It suggests to us that if we can make a tiny dent in this problem, there’s the possibility of helping millions of people in truly material ways.
Katy Milkman was up next.
MILKMAN: Here’s the vision: there are lots of people out there running around in the world who would love to change their behavior. Maybe they have a problem with savings or they just can’t get themselves to take their medications or exercise or eat right or study hard. Whatever it might be. They’re out there. It turns out there are lots of gigantic organizations who are already serving these people — many of whom are our partners on this project.
Partners like Bank of America with 47 million customers, 24 Hour Fitness with four million customers, CVS Caremark. We just have unbelievable reach.
So, the idea is simple: the researchers gathered here today will partner with those organizations — and others — to run real-world experiments on millions of people that will reveal the best way to accomplish lasting behavior change. The main tool will be a custom-built digital platform. On the front end, it’s an interface between a bank or a fitness or pharmacy chain and their customers. On the back end, the platform is a powerful piece of research software for the academics.
MILKMAN: These organizational partners will market it and hopefully these people will come. They’ll sign up for our program. They’ll consent to be part of our research studies. We’ll get data about program participants’ daily decisions. We’ll be able to see what’s actually working. We’ll get that data as a pipe-stream forever. Then, of course, the end goal is to create a solution to Behavior Change for Good with lots and lots of A/B testing.
All the ideas you can come up with, we are going to have room for them. The goal is that eventually this platform will have the capacity to test anything we can dream up, with plenty of power. That’s the vision.
For some people, that vision may be frightening. You might ask, “Why is my bank or gym or drugstore turning me into a guinea pig? And what about my privacy?” This concern may strike others as a bit quaint, given we’re living in a time when billions of people willingly share their innermost preferences with Google and Facebook and Amazon. But still, it’s a concern.
Here’s another concern for some people. When Richard Thaler was beginning to popularize the behavior-change nudges that he got famous for, he called the idea “libertarian paternalism.” To some people, that might seem like a delicious oxymoron. But others might say, “Yuck! I like to make my own decisions, thank you very much, so I don’t need your paternalism.” Or they might just think: “Libertarians are right-wing kooks! Or left-wing kooks! Or something kooky!”
In any case, Duckworth and Milkman believe that the benefits of their behavior-change project will far outweigh the costs. And so, to get their summit rolling, they opened up the floor to a discussion about the digital platform they’re building, and how it would affect study design. Things got pretty nerdy real fast.
LAIBSON: How are you thinking about differential attrition?
Bridget TERRY LONG: Can we partition the data?
David ASCH: There would be some studies that might have more specific eligibility criteria.
MILKMAN: Yeah, that’s a fantastic question.
During the first break, I caught up with Duckworth and Milkman.
Stephen J. DUBNER: Briefly describe what just happened, your opening session.
DUCKWORTH: We grathered the scientists that we have been —
DUBNER: You grathered them?
DUCKWORTH: I grathered them. Yeah, we grathered them. Sorry.
DUBNER: Is that an academic term I’m not familiar with?
DUCKWORTH: Let’s try again. We gathered the scientists on our team to meet together for the first time. We presented to them an overview of what we hoped, both in the poetic terms of the sublime dream of solving behavior change and then also in the very practical terms of the digital platform that we’re building.
DUBNER: And Katy, talk about what the next 48 hours is meant to really accomplish.
MILKMAN: Well, we’re trying to get into the innards of this platform we’re building and make sure that it’s flexible enough to allow the scientists to test everything they might want to test, that it’s flexible enough to recruit any population they might dream up, and to make sure we’re not making, honestly, statistical mistakes that would be insurmountable and prevent us from accurately examining the evidence we collect.
DUBNER: Typically, this kind of a conversation [is] about features that should be different or that aren’t there or concerns, et cetera. Is that about what you’re expecting at this stage?
DUCKWORTH: Yeah. People had questions about the features, but they also had questions about the study design. I would say half of them were ones that we thought about and the other half were ones we hadn’t thought about.
DUBNER: That’s really valuable, then.
MILKMAN: That was gold. That was the most incredibly valuable 60 minutes we’ve had since we started this project.
DUBNER: Give me one that stood out. Either of you.
MILKMAN: We had thought about it, but I really like the point that some people may want to zoom in on a particular type of participant.
LONG: With education, you might be trying to target a particular kind of student, for example, students who struggle in math. Can we use partner data to target recruitment?
MILKMAN: That’s exactly something we have to be flexible and allow on our platform. We haven’t solved it yet.
DUCKWORTH: One of the points that David Laibson, the economist, brought up —
LAIBSON: As the incentives get absolutely de minimis if we pay them so little, some of them are gonna —
MILKMAN: Be de-motivated.
LAIBSON: Feel manipulated.
DUCKWORTH: When you give people incentives, all you think about is increasing their motivation. But when the incentives are paltry, you can actually have a backfiring. People look at this tiny amount of money and they’re now going to be less motivated than they were if you had given them nothing.
Milkman pointed out one more feature of the opening session.
MILKMAN: Lots of people in this room have actually never met before. Many of them don’t know one another’s work because this is such a cross-disciplinary group. This is going to be the time when the minds meet and actually hear from one another about their decades of research and insights. Hopefully, it’ll spark some amazing collaborations.
Back to the conference room now for a series of speed talks, where the researchers would give a thumbnail view of themselves and their specialties. Among the first: Adam Grant, the organizational psychologist and author from Penn’s Wharton School of Business.
GRANT: Good morning, everyone. It’s always a treat to come together with a great group of underachievers.
Grant has already done a lot of work on behavior change.
GRANT: Most behavior change is actually not desirable. That’s one of the major things that stands in our way. The thing that we’re trying to convince ourselves or others to do is not actually something that we want to do or that they want to do. That got me wondering: instead of highlighting all the benefits of changing for the self, what if we focus more on the benefits to others?
Instead of personal benefits, what if we highlighted prosocial benefits of behavior change?
GRANT: I’m curious about whether, if we educate people about behavior change and the underlying processes that drive it, is it then easier to change their attitudes? It’s an open question, but I hope this group is able to figure it out.
Wendy Wood, a professor of psychology and business at the University of Southern California, studies habit formation.
Wendy WOOD: One of the things that initiated this conference is that the scientific field is really good at some things and not so good at others. The thing that we’re really good at right now is changing behavior in the short term. We’re also really good, I think, at changing people’s knowledge and beliefs. We’re not so good at changing long-term behavior.
How about an example?
WOOD: OK. One is the five-a-day fruits and veggies. Anyone remember this? This was really successful in one way. It was a tremendously large-scale intervention. It was successful at changing our knowledge. We now know that we should eat more fruits and vegetables. It had no effect on behavior. In fact, consumption has gone down since the program started.
Yikes! The challenge of changing behavior long-term was echoed by Todd Rogers, a behavioral scientist at Harvard.
Todd ROGERS: Most treatment effects don’t persist. Sometimes they do and when they do we have no idea why. It’s hard to predict which will persist and which won’t.
The economist David Laibson, also at Harvard, used his speed talk to cover a particular problem he’s identified in his classroom: the use and abuse of laptop computers.
LAIBSON: There’s a huge negative externality for other students in the class. You’re sitting there and the person next to you is clattering away. You’re distracted by the sound, you’re occasionally looking at their screen, and then it makes you want to look at Facebook too. There’s all sorts of problems like that.
It’s a classic short-term versus long-term dilemma:
LAIBSON: The web offers instant gratification that undermines our very good intentions to get the most out of class, and that’s all about present bias. We go into the classroom and we are convinced, “I am going to be a good student.” Suddenly, other things become very appealing and very tempting. We’re distracted by those other very gratifying opportunities. Suddenly, we’ve lost 45 minutes of the 50-minute lecture.
Laibson sees this as a small-ish problem with potentially huge ramifications.
LAIBSON: We’re spending hundreds of billions of dollars in colleges and we’re not getting much value for our money.
So, what are the possible solutions to the laptop dilemma?
LAIBSON: We could have a laissez-faire policy. Students are adults when they reach college age. Let them decide. We could have an educational intervention. We could explain all these issues. We could ban laptops. I’ve thought about all these. I don’t really love any of these options. Let me offer a different alternative, one that we could actually as a group test or think about testing.
And what is this alternative?
LAIBSON: In my class, at Harvard, we have an opt-in laptop policy.
I caught up with Laibson afterward to hear some more about this.
DUBNER: David, you were talking about a project of yours, which prompted, for me, many questions. I have to say, I recorded said questions on my laptop, so I proved the value of my laptop right there. But can you talk about what you were describing and then where you want to go with that?
LAIBSON: I love the point that the laptop was good for you. For a lot of people in a lecture hall, it’s actually a distraction. For some people in a lecture hall, it’s exactly what they need to take notes, to look up related information. It complements their experience rather than destroying it. The problem is, how do we separate the wheat from the chaff?
DUBNER: Could you give like a 60-second summary of your pilot study?
LAIBSON: We have two sections in class. One section is for people who don’t want to use a laptop and don’t want to be around others using a laptop. Then we have another section which is the laptop section. Our view is that different students should choose one or the other section. Our concern is that if we just let students, in real time, make the choice — sitting down, “What do you want to do right now?” — a lot of people would flip open their laptop because the temptation is overwhelming.
What we do is tell our students at the start of this semester, “It’s up to you. Tell us if you want to be in the laptop section and we’ll assign you to that section. There’s a deadline for making that decision and once you make the decision, it’s final.” For everyone else who doesn’t opt into the laptop section, they’re defaulted into this no-laptop section. What we find is that about 80 percent of our students stay with the default of being in a non-laptop section.
When we survey our students at the end of the year and ask, “Did this policy of having these two sections facilitate your learning?”, the average rating on a zero-to-ten scale is a little over 8. I think it’s about letting people choose for themselves, but letting them choose in a deliberative, thoughtful, careful way at the start of the semester. Then, once they’ve committed to one path or the other, letting that decision have its consequences.
DUBNER: You want to replicate or enlarge this exact study, yes?
LAIBSON: Yeah. Right now, we’ve got an anecdote. It should be replicated across dozens of courses and there should be much more careful efforts to actually measure whether it’s affecting learning, whether students value this, or whether students feel that this is inappropriate paternalistic behavior on our part.
And so it went for the rest of Day One of Behavior Change for Good. Lots of spitballing about methodology, potential research ideas, and more. A few quick observations: I heard more than I would have thought I’d hear about shifting the theoretical framework of decision-making — and less than I would’ve thought about basic incentives like gamification. But, let’s be realistic: these are a bunch of top-tier academic researchers; theoretical frameworks are how they got to where they are!
I also heard less than I would’ve thought about one of the inherent challenges in all behavior-change research: that the people most responsive to behavioral nudges are often the ones who already have a pretty decent track record with self-discipline and delayed gratification. It’s sort of the behavioral equivalent of pharmaceutical trials using the least-sick patients they can find.
And one more thing: every conference I’ve ever been to gets behind schedule. It’s just the way it is. Somehow, I thought this one would be different. I thought that a bunch of people trying to teach the rest of us how to, say, use our time wisely, that they might have some magical time-management tricks. But they didn’t. Which proves, if nothing else, that these behavioralist wizards are, like us, human.
Coming up after the break: day two of the conference, drilling down into some experimental ideas, and a visit from Nobel laureate Danny Kahneman, who offers encouragement, caution, and some inside tricks.
* * *
Day Two of the Behavior Change for Good conference began with breakout sessions. Researchers from different fields talking about designing smart experiments to help people stay fit, eat better, get out of debt, and so on.
DUCKWORTH: My team focused on getting high school students to study more for the SAT. We went from the mundane to the sublime.
Here is Duckworth and David Yeager, a psychologist at the University of Texas, Austin.
David YEAGER: Maybe if you capitalize on some wave of motivation to study, like, right after the PSAT. Get it at a time —
DUCKWORTH: When they’re not conflicted between —
YEAGER: When there’s not an academic conflict.
YEAGER: One thing I’m seeing emerging is to not fight the tendency to be performance-oriented in SAT prep but invite people to be mastery-oriented in their preparation skills.
DUCKWORTH: Is it possible to do a random assignment experiment delivered through Qualtrics and through texting to half of the kids and not the other half?
YEAGER: That is possible here and then that would give you a model for thinking about the content of the text messages, right?
I caught up with Duckworth afterwards.
DUCKWORTH: We spent a lot of time figuring out, “How many kids? Are they kids from disadvantage…” It was very tactical. But we ended with the sublime, which is, “What is the rite of passage to adulthood in America?” There really isn’t one. Could we use this challenge as a way of reframing the transition from high school to college in a way that would actually give kids an “on ramp” to that? As opposed to dropping them into this thing called young adulthood on a college campus somewhere where they have no support.
DUBNER: OK. What happens next?
DUCKWORTH: The next step for our group is that we will prepare a random-assignment study for high-school seniors who are taking the SAT this October.
And what was Katy Milkman’s breakout session about?
MILKMAN: My team was diving into an actual study design for a random-assignment trial, trying to help people exercise more regularly.
Lauren ESKREIS-WINKLER: Ayelet, I know you have some studies where you make it fun by adding music. Why are they finding fun by doing an exercise? Couldn’t they just bring music or —
MILKMAN: Yeah, that’s a good point. We could give them strategies for making workouts fun.
ESKREIS-WINKLER: Other ways to make it fun.
Ayelet FISHBACH: They have earbuds, right?
MILKMAN: I’m trying to think with a treatment, but you could imagine we tell them to try these different exercises. We could tell them to try doing them while listening to music or audiobooks or watching TV shows.
FISHBACH: We could, yes. The thing that we will find there is that you can watch a TV show more easily, maybe, while biking than running.
MILKMAN: Right. Probably also not when swimming.
FISHBACH: Not a TV show, but the music.
Edward CHANG: You can listen to music while swimming.
MILKMAN: You can?
And I chatted with Milkman after her session.
DUBNER: What about making the activity itself more fun, or less intimidating? I think one big barrier for people who don’t know their way around a gym is like, “Oh, these people with all these machines. They know how to move them, I don’t…” So, how do you make it more fun for someone who’s not acclimated?
MILKMAN: We actually decided that our design should have two elements: one element is focusing people either on the fun or on what’s most effective. That would be one thing we’d test, A versus B. The second element would be rewarding people for exploring different activities at the gym and then reporting back to us and seeing if having them go through that coaching process could help them zoom in on what’s right for them in a way that would lead them to form a more sustainable exercise habit.
DUBNER: Great. What happens next?
MILKMAN: What happens next is we go back and forth a lot of times on the draft (which we literally put together in the last hour and a half) of materials. Then, we do a heck of a lot of piloting to make sure that our questions all make sense. Then, we do more piloting, and then we launch.
DUBNER: You[‘ll] launch with how large a population?
MILKMAN: We’ll launch with thousands of people and our aimed-for launch date is January of 2018, when people will be benefiting from fresh-start feelings and New Year’s resolutions and eager to sign up for a program that helps them exercise more.
DUBNER: The partner on this is who?
MILKMAN: We have two partners on this: 24 Hour Fitness and Blink Fitness.
As a result of the work on this day, and in the weeks to follow, the Behavior Change for Good project would line up a great number of pilot studies that would kick off in early 2018. They would also come up with a few million more dollars to fund their project, and they’d add some other big names to their roster of research scientists. For this first summit, meanwhile, the highlight was undeniable: it was a long chat and Q&A with the psychologist Danny Kahneman.
DUCKWORTH: Danny Kahneman is not only the elder statesman of behavioral science, but he’s also our Beyoncé. We were just overwhelmed with joy and gratitude when he said, “Yeah, I will come and close out this session.”
Kahneman’s moderator was Max Bazerman, himself a most distinguished scholar. He teaches business administration at Harvard and has written many landmark pieces on decision-making, ethics, and negotiation.
Max BAZERMAN: By the way, I’m nervous. Somehow, interviewing Danny makes me nervous.
KAHNEMAN: Now you’re making me nervous.
Bazerman began with a brief overview of Kahneman’s research, most of it done with his late collaborator Amos Tversky. The two of them were the subject of Michael Lewis’s book The Undoing Project, which Lewis discussed on this program in an episode called “The Men Who Started a Thinking Revolution.”
LEWIS in a previous Freakonomics Radio episode: It is incredible to me how many different spheres of human existence these guys’ work has touched and influenced.
Now, Max Bazerman again:
BAZERMAN: I realize that most of the people in this room don’t remember the 1970s, when —
KAHNEMAN: At least not clearly.
BAZERMAN: At least not clearly. [Laughter] But there was a time, after your ’74 paper, when psychologists became acutely aware of your work. Economists weren’t paying too much attention. Then, eventually, the behavioral economics movement started. But throughout the last millennium, this was more of an academic literature.
In this millennium, we’ve seen this robust movement into the real world by groups that do research like the members here, by the Behavioral Insight teams. How do you explain the shift from academic to intense real-world interest?
KAHNEMAN: Behavioral economics, as it currently exists, started in a bar.
That’s right — behavioral economics started in a bar. Danny Kahneman and Amos Tversky were having drinks with Eric Wanner, future president of the Russell Sage Foundation.
KAHNEMAN: And Eric said that he wanted to bring psychology and economics closer together. He wanted our advice as to how he should go about it. I remember telling him, “You shouldn’t spend any money on psychologists who want to influence economics. You should look for economists who might be interested in what psychology has to say.” Now, there was such an economist, and his name is Richard Thaler.
You’ve heard Richard Thaler before on this program too.
Richard THALER in a previous Freakonomics Radio episode: I’m a professor of economics and behavioral science at the University of Chicago Booth School of Business. I’ve never had a real job.
And it was Thaler, Kahneman says, who put the economics in behavioral economics.
KAHNEMAN: The very first grant that Eric Wanner gave — when he became president of the Russell Sage Foundation — was for Dick Thaler to spend a year with me in Vancouver. I was at the University of British Columbia at the time.
KAHNEMAN: Dick published “Anomalies” for years. They cast doubt on the basic rational-agent model systematically, without preaching. Just the facts. That had a huge impact. When you ask, “How did behavioral economics happen?” It’s an accident. Like all accidents. There was that meeting in a bar, then there was that year in Vancouver, and then there was Joe Stiglitz having an idea about “Anomalies.”
Now the conversation turned to the Behavior Change for Good project.
BAZERMAN: What most of the folks in this room have been talking about is how to get behavior change to actually stick and last over time. Give us wisdom on this topic.
KAHNEMAN: I won’t give you wisdom. But I’ll cite the idea that, for me, is the best idea I ever heard in psychology. I heard it as an undergraduate. It’s the story of how you induce people to change their behavior, as taught by Kurt Lewin. Now, he is my intellectual grandfather.
Kurt Lewin was a German-American psychologist who in the early 20th century developed several ideas that became central to modern psychology. Among them: that people’s behavior is strongly driven by two main external forces.
KAHNEMAN: There are driving forces that drive you in a particular direction. There are restraining forces, which are preventing you from going there. The notion that Lewin offers is that behavior is an equilibrium between the driving and the restraining forces. You can see that the speed at which you drive, for example, is an equilibrium. When you are rushing some place, you feel tired, or you’re worried about police. There is an equilibrium speed. A lot of things can be described as an equilibrium between driving and restraining forces. Lewin’s insight was that if you want to achieve change in behavior, there is one good way to do it and one bad way to do it. The good way to do it is by diminishing the restraining forces, not by increasing the driving forces. That turns out to be profoundly non-intuitive.
In most cases, Kahneman explained, we try to change people’s behavior with a mish-mash of arguments, incentives, and threats.
KAHNEMAN: Diminishing the restraining forces is a completely different kind of activity, because instead of asking, “How can I get him or her to do it?” it starts with a question of, “Why isn’t she doing it already?” Very different question. “Why not?” Then you go one by one systematically, and you ask, “What can I do to make it easier for that person to move?”
It turns out that the way to make things easier is almost always by controlling the individual’s environment, broadly speaking. By just making it easier. Is there an incentive that works against it? Let’s change the incentives. If there is social pressure, or if there is somebody who is against it: say I want to influence B, but there is A in the background, and it’s actually A who is a restraining force on B. Let’s work on A, not on B. I have never heard a psychological idea that impressed me quite as much as this one, perhaps because I was at an impressionable age.
The floor had by now opened up to questions for Kahneman, and I took my shot.
DUBNER: This is kind of a primordial question, but is it just part of human instinct — the assumption that driving works better than restraining? Is it a little dictator complex that we all have — not only about ourselves, but others?
KAHNEMAN: Well, it seems to be that it’s a natural thing to do. That is, when you want to move an object, you move it. When you want to move somebody, you try to move them. But the idea of looking at the situation from that individual’s point of view, which is the only way that you can find restraining forces, that is really not very natural. It is primordial. It is very basic that when we want things to move, we move them.
MILKMAN: One of the things I was thinking about is the risk of over-promising. Every few years, I hear people worrying about, “Have we over-promised what psychology can contribute to policy and are people expecting too much from things like The Nudge Unit relative to what they can deliver?” I’m curious, as we embark on this massive adventure, how you think about those risks and managing expectations while doing this in a very public way?
KAHNEMAN: There is a real social problem that if you realistically present to people what can be achieved in solving a problem, they will find that completely uninteresting. You have to over-promise in order to get anything done. That really is part of it. You take a problem like poverty. President Johnson was going to solve the problem. If he had aimed at the realistic objective — to reduce this by 12 percent and to increase that by 5 percent, and so on — people would’ve said, “That’s trivial. We want to solve the problem.” Over-promising is part of the game, you know? You can’t get anywhere without some degree of over-promising.
LAIBSON: David Laibson.
KAHNEMAN: I know you.
LAIBSON: I agree that over-promising has the virtue that it accelerates the initial effort. But it has the cost that it undermines the ongoing effort. In light of all the work you’ve done explaining the psychological biases — like the planning fallacy and other biases that lead us to over-promise, not because we’re doing it as a rational, sophisticated strategy, but rather as a psychological error — I’m surprised to hear you saying today that over-promising is a wise strategy. I would have thought —
KAHNEMAN: I never used the word “wise strategy.”
LAIBSON: Ah. But you said it was necessary.
KAHNEMAN: I was saying it’s very unlikely to happen otherwise. When you look at big successes, the people who carried out those big successes were unreasonably optimistic, typically.
LAIBSON: But are you recommending that we over-promise? Or are you saying it just happens to be a coincident —
KAHNEMAN: I’m just saying you are probably going to over-promise, for a lot of good reasons.
LAIBSON: Okay, that I agree with.
KAHNEMAN: I wouldn’t fight you on this. That’s not the worst thing that can happen, I think, because it may be necessary to get the resources. It may be necessary to get the initial enthusiasm that is needed to do anything at all. There is so much inertia that realistic promises are at a major disadvantage. They’re at that disadvantage because everybody else is over-promising.
After Kahneman’s talk, Freakonomics Radio producer Greg Rosalsky and I caught up with Kahneman in the hallway.
ROSALSKY: Obviously, the people who are organizing this conference, they’re very dedicated. They’re disciplined. A lot of the experiments they’re designing are on the general public. Do you have any lessons when thinking about incentives, how to design incentives or experiments or interventions, in general, when there’s a mismatch between the people who are designing incentives and the people who the incentives are for?
KAHNEMAN: Well, I think that people who cannot identify with their subjects have no business doing interventions or experiments. I have very little sympathy for those. You’re using the word ‘incentives’ more often than I would. Incentives are really only part of the story.
ROSALSKY: ‘Intervention,’ maybe, would be a better [term].
KAHNEMAN: Yeah. What we have to get used to is that we’ll design the interventions as powerful as we can make them and then they will have small effects. In some ways, people [who] do this should be aware ahead of time that, “Yes, we hope it’s going to have a practically significant effect and it will not solve the problem.”
ROSALSKY: Early on in your career you were doing a lot of real-world applications of your theories, in the Israeli military and in other places. In that environment versus doing it cloistered away in academia, do you think there are any lessons that could be learned for this project?
KAHNEMAN: Well, I was always interested in the real world. I never saw a real difference. All the effects that I studied, certainly in the domain of judgment and decision-making, were real effects that I expected would replicate in the real world. They were based on personal experiences. So, I find this completely natural.
DUBNER: Let me just ask one last question: I don’t know how much you know about Angela and Katy’s project. It’s extremely ambitious. They have partners from big banks to big educational institutions, fitness firms. They have access to thousands, maybe millions of customers. They’re bringing together all these researchers to try to come up with interventions. It’s hugely ambitious. There are many layers to get it to success. Talk just for a minute about what you think are the odds that it will work or what dimensions it may work on or where they might be frustrated?
KAHNEMAN: What you can hope for is what is called practically significant improvement, which is usually a few percent. If you get a few percent at relatively low cost, that’s a success. But, naturally, you have to want more and you have to settle for what you get. The fact that they are working on a large scale is hugely important. That’s a new departure and that very fact, especially if they’re successful — if they fail, that’s going to be quite costly for a long time. But I think that they will have at least partial success. The ideas are good. They are good. Something good will happen.
“Something good will happen”? Maybe. On the other hand, he also says: “If they fail, that’s going to be quite costly for a long time.” I asked Angela Duckworth whether Danny Kahneman’s assessment scared her off. After all, she’s devoting most of her professional life for the next couple years to this project.
DUCKWORTH: My favorite poem is Rudyard Kipling’s “If.” The second line is, “If you can trust yourself when all men doubt you, but make allowance for their doubting too.” In other words, “Do what you’re going to do. But when someone says there are 19 problems you haven’t thought of, write them all down and then solve them.”
DUBNER: Thank you for giving us an ending to the episode. That was very nice of you, Angela.
We will be keeping tabs on the Behavior Change for Good project, and we’ll let you hear about it in future episodes. Meanwhile, coming up next time on Freakonomics Radio: I’m guessing you’ve heard about the concern that companies like Google, Facebook, and Amazon have gotten too powerful for the public good:
Barry LYNN: They have developed the capacity to manipulate us, to control us, to control the information that is delivered to us.
We’ll get into that primary issue, but also the secondary issue of how firms like these are shaping public opinion by spending lavishly on think tanks and foundations.
Franklin FOER: That’s become relevant because they fired a vociferous critic of Google from the foundation.
With a controversy like this, there are bound to be differing views:
Anne-Marie SLAUGHTER: We do not pay to play. We take funding and we do our work, and those two things are separate.
How philanthropic is this kind of philanthropy?
Robert REICH: I don’t think philanthropists deserve that amount of charity. Power deserves scrutiny in a democratic society, not gratitude.
The hidden side of corporate philanthropy. That’s next time, on Freakonomics Radio.
* * *
Freakonomics Radio is produced by WNYC Studios and Dubner Productions. This episode was produced by Greg Rosalsky with help from Harry Huggins. Our staff also includes Alison Hockenberry, Merritt Jacob, Stephanie Tam, Eliza Lambert, Emma Morgenstern, and Brian Gutierrez; we had help this week from Sam Bair. Special thanks to Laura Zarrow, Kelly Hughes, Valorie Nash, Octavian Busuioc, Vivian William, and Tanya Gulati for helping us with the conference. The music you hear throughout the episode was composed by Luis Guerra. You can subscribe to Freakonomics Radio on Apple Podcasts, Stitcher, or wherever you get your podcasts. You can also find us on Twitter, Facebook, or via email at firstname.lastname@example.org.
- Daniel Kahneman, professor of psychology at Princeton University.
- Angela Duckworth, professor of psychology at the University of Pennsylvania; founder and CEO of Character Lab.
- Katherine Milkman, associate professor of operations, information and decisions at the Wharton School of the University of Pennsylvania.
- Max Bazerman, professor of business administration at Harvard Business School.
- Adam Grant, professor of management and psychology at the University of Pennsylvania.
- David Laibson, professor of economics at Harvard University.
- David Asch, professor of medical ethics and health policy at the Wharton School of the University of Pennsylvania.
- Todd Rogers, associate professor of public policy at Harvard University.
- Wendy Wood, professor of psychology and business at the University of Southern California.
- David Yeager, associate professor of psychology at The University of Texas at Austin.
- Bridget Terry Long, professor of education and economics at Harvard University.
- Edward Chang, doctoral candidate at the Wharton School of the University of Pennsylvania.
- Lauren Eskreis-Winkler, postdoctoral fellow of psychology at the University of Pennsylvania.
- “Could Solving This One Problem Solve All the Others?” Freakonomics Radio (April 5, 2017).
- “Big Returns from Thinking Small,” Freakonomics Radio (March 29, 2017).
- “The Men Who Started a Thinking Revolution,” Freakonomics Radio (January 4, 2017).
- “Should We Really Behave Like Economists Say We Do?” Freakonomics Radio (June 4, 2015).
- The Undoing Project by Michael Lewis (W. W. Norton & Company, 2016).
- “The White House Gets Into the Nudge Business,” Freakonomics Radio (November 2, 2016).