How Goes the Behavior-Change Revolution? (Ep. 382)


What do college students, prisoners, and geniuses have in common? Every group reports they often fail to do something they’d like to do because they’re worried about being judged. (Photo: Pxhere)

An all-star team of behavioral scientists discovers that humans are stubborn (and lazy, and sometimes dumber than dogs). We also hear about binge drinking, humblebragging, and regrets. Recorded live in Philadelphia with guests including Richard Thaler, Angela Duckworth, Katy Milkman, and Tom Gilovich.

Listen and subscribe to our podcast at Apple Podcasts, Stitcher, or elsewhere. Below is a transcript of the episode, edited for readability.

For more information on the people and ideas in the episode, see the links at the bottom of this post.

*     *     *

ANNOUNCER: Ladies and gentlemen. Please welcome the host of Freakonomics Radio, Stephen Dubner.

Stephen J. DUBNER: Thank you so much. This is a very special episode of Freakonomics Radio. It’s about one of my favorite topics. And based on the feedback we’ve gotten, it’s one of your favorites too. It’s about behavior change. So a couple years ago we first interviewed two researchers from the University of Pennsylvania, Angela Duckworth and Katy Milkman. They had launched an audacious new project called Behavior Change for Good, gathering together a dream team of behavioral scientists from all over the world. It’s their attempt to advance the science of behavior change and help more people make good decisions about personal finance, health, and education.

Tonight we are recording live at the Merriam Theater in Philadelphia, just down the street from the University of Pennsylvania. We’ll be hearing brief presentations from four behavioral-science researchers about their latest work. Later on we’ll hear from a Nobel laureate who helped create this field. But let’s start at the beginning by getting caught up on the Behavior Change for Good project with its founders. Would you please join me in welcoming Angela Duckworth and Katy Milkman. Angela, Katy, so nice to have you here.

Angela DUCKWORTH: Hi.

Katy MILKMAN: Hi.

DUBNER: So it’s been a few years now since you started this project. At the time, Katy, here’s what you told us: “We both thought the biggest problem in the world that needed solving was figuring out how to make behavior change stick.” So my first question is: have you solved that problem yet?

MILKMAN: Well, we learned a ton in the last three years but we have not solved this problem. Today we had a really fabulous gathering, where we shared the results of some of our first ambitious studies to try to make a major dent in this. And I would say the hashtag from the day was “Science is hard.” We ran a massive randomized controlled trial, so big old experiment. Sixty-three thousand members of 24-Hour Fitness gyms, which is one of the biggest gym chains in the U.S., signed up to be part of a really cool behavior-change program that we offered them for free. And it was designed by a team of brilliant scientists who we had brought together.

DUBNER: Now just to be clear: you’re recruiting people who’ve already gone to the trouble, and the commitment, of joining a gym, yes?

MILKMAN: Exactly. So you’re a member of 24-Hour Fitness, and you hear that all these cool scientists built a program you can sign up for, for free, and it’ll help you exercise more.

DUBNER: And what exactly are you trying to get them to do?

MILKMAN: We tell them it’s a 28-day program, and the goal is to get you to build a lasting exercise habit, ideally forever. That was our goal. Let’s make all these habits stick.

DUBNER: So the idea is you get people to sign up, you give them encouragement and incentives? Or there’s some cash rewards or—?

MILKMAN: Yes. There was cash promised and delivered. So we were paying, order of magnitude, like a quarter for every gym visit. Better than nothing, but not a lot.

DUBNER: Not really, but ok.

MILKMAN: And we also said we’ll give you different kinds of messaging and reinforcement.

DUBNER: Okay. So how amazingly, beautifully, perfectly well did it work?

MILKMAN: So you want the good news or the bad news first?

DUBNER: Let’s start with an overview. Would you call it a failure or an abysmal failure?

MILKMAN: I’m going failure rather than abysmal failure. We learned a lot. The good news: of the 53 things we tested, 52 improved gym attendance. One of our 53 experimental programs was deliberately nothing. People signed up, and we were like, “Thanks for signing up. Good luck with your life.” That was our comparison set. In the other 52 conditions, everybody went to the gym more.

DUBNER: That sounds nothing like a failure to me.

MILKMAN: Okay. Here comes the failure. We were actually trying to test new scientific insights, and all of the programs were built on top of a baseline that we thought would work: reminding people to go to the gym, paying them a little bit to go, and having them make a plan for the dates and times when they wanted to go. Then the reminders come at those times.

We were hoping to improve upon the performance of that. And nothing did. So basically what we found is that a set of ingredients we were already quite confident would work, they did. And then when we layered new stuff on, that we thought, “This is a sexy, new idea, it’s going to beat the best practice,” we got nowhere.

DUBNER: Okay, I seem to recall that part of this project was asking all your fellow researchers when they design experiments to make a prediction of how well their experiment would work. And these are some of the best and brightest minds in behavioral sciences, so presumably their predictions are not terrible. Were they terrible?

DUCKWORTH: So what we learned was that our scientists are quite optimistic about behavior change. On average they thought, “Oh, about a 40 percent likelihood that my experiment will work.” Whereas when the data came in, it was close to zero.

DUBNER: It strikes me, knowing nothing about anything, that what you were trying to do as your first big project — getting people to go to the gym more on a lasting basis — is the opposite of low-hanging fruit.

MILKMAN: Oh, that’s interesting. Okay. I do think it’s worth mentioning again, we actually didn’t fail at getting people to go to the gym. During the 28-day program, most of the different versions of the program did create behavior change — so, 50 to 75 percent created significant boosts in exercise for 28 days. It’s just that we didn’t do very well creating lasting change.

So after our 28-day program, pretty much we saw nothing in terms of behavior change. All 53 versions of the program, pretty much nothing sticks. And that was the ultimate goal. So that was a major failure.

DUBNER: So I know both of you fairly well by now and I know that neither of you are short on enthusiasm. So I don’t see you packing up and quitting, and disbanding the Behavior Change for Good project. What are your next steps?

MILKMAN: Okay, a couple of things. First of all, we’re doing more with this gym data. We’re going to swing a bat at it instead of the feather approach next time. We’re also going to do medication-adherence work. We think we can make a dent there given some of the science that’s preceded us. We’re going to do some work on childhood obesity in the U.K., which we’re really excited about.

DUBNER: Here’s something that you, Angela, said when this project was starting: “The one problem that really confronts humanity in the 21st century is humanity itself.” The idea being that we do a lot of things that are not so good for us: nutrition, smoking, not saving enough for retirement, etc.

After that episode, one listener wrote in to say this, “This was the most depressing episode ever. People are a mess. That’s what makes humanity beautiful. Taking away our spontaneity, our whimsy, our impulses, and replacing them with only logical thinking is truly a dastardly idea. Some of the greatest things mankind has ever done weren’t for our overall well-being. They were done just because they were fun. You are killing the fun.”

I don’t have a question. I just wanted to read that into the record. No, I do have a question. So for the sake of argument, what makes you think that behavioral scientists like yourselves should be nudging or even shoving people to change their behavior when they might like their behavior just fine?

DUCKWORTH: If you think of the most self-controlled person you know, you might think, “Wow, they have no fun. They never go out to, say, Freakonomics Radio Live. They only drink water. They work all day and they have no play. And that’s no way to live.” But in fact there is research on the extremes of self-control, and no data show that really self-controlled people are any less happy.

Self-control is the ability to align your behavior with what you want. If what you want is a life of spontaneity and ice cream cones, then that’s the behavior you have to align to. That’s the goal. But the kinds of problems that Behavior Change for Good is working on — exercise; for teenagers, studying; for people who have had a heart attack, taking their medications — these are things that most people actually value as goals. Other things simply get in the way: hanging out on Snapchat all day, skipping the gym to binge-watch Game of Thrones at home on the couch. These are all temptations that are more pleasing in the moment, but that we later regret. So you can write back to that cranky listener: they’re misunderstanding what it really means to have a lot of self-control.

DUBNER: Well, I will say this: despite your struggles so far, I know that you two are super-gritty people and that you’re going to keep at it. And I really look forward to hearing the results down the road. So can we say thank you so much to Angela Duckworth and Katy Milkman. Now it’s time to hear from four members of the dream team of behavioral scientists that Katy and Angela have assembled for Behavior Change for Good; they’re all doing work that somehow relates to decision-making or cognition or human fallibility. First up is a Ph.D. psychologist who teaches at the Harvard Business School. Would you please welcome Mike Norton.

So, Mike, I understand you’ve been doing research on how people split the check when they go out for dinner and what that may say about our behaviors. Can you tell us about that, please?

Mike NORTON: Can I ask you? So, out with a bunch of friends, drinks, and appetizers, salads, meal, dessert, check comes. What do you do? Do you say, “Let’s just split it and all put in our credit cards?” Or are you the guy who takes the check and calculates everything and says, “Well, I only had six croutons so let me — I’m just going to pay this much”?

DUBNER: You are asking me what I do?

NORTON: I am.

DUBNER: Personally?

NORTON: If you’re comfortable admitting it.

DUBNER: Sure, yes. So, I am definitely not a counter, so I wouldn’t do that. But I will say this. If I’m going to a dinner where I think it’s a split dinner, where we’re all contributing, I will not skimp. Let me put it that way. Because I figure if I’m getting an eighth of it, I want my steak and I want my ice cream sundae. I’m actually getting it at a little bit of a discount because I figure some other people aren’t. So I’m getting 20 percent off the steak. So what does that make me?

NORTON: It feels like it’s working for you, but if we asked your friends and family they might— So we actually find that there’s sort of two kinds of people. A lot of people either say “When the check comes, maybe you had more, maybe I had more, let’s just split it.” And then there’s another group of people. It’s typically 30 percent of people actually, who no matter what— I mean it could be a $3.08 meal, and they’ll still take the check and figure out who had what and make sure that they split it exactly.

DUBNER: Okay. So I want to know about this research — how you do it and who the people are.

NORTON: So we can do really, really simple experiments where we can say, “Look at this person’s Venmo account and see the payments they made.”

DUBNER: And Venmo is a payment app, we should say, correct?

NORTON: A payment app. And what it does, which is brilliant, is it automatically splits things for you. So it’s great. It means if we go out for dinner, and it’s 20 dollars and two cents, it actually will make each of us pay ten dollars and one cent. And we can just show you, for example, one person made a payment of $10.01 to some friend, and another person made a payment of $9.99 to another friend.

And then, in the other version, you see someone who paid ten dollars to one friend and ten dollars to another friend. If I did that right they both added up to $20. So it’s not a different amount of money. So, everything’s the same. Your friend paid you back. It’s $20. And we said, “Who do you like? How do you feel about this person?” And—
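The even-split arithmetic described here can be sketched in a few lines of Python. This is a hypothetical illustration, not Venmo’s actual code; the function name and the rule for distributing leftover cents are assumptions for the example.

```python
def split_bill(total_cents: int, n_people: int) -> list[int]:
    """Split a bill as evenly as possible, working in whole cents.
    Any leftover cents land on the first few payers (an assumed rule)."""
    base, remainder = divmod(total_cents, n_people)
    return [base + 1 if i < remainder else base for i in range(n_people)]

# A $20.02 dinner for two splits cleanly: $10.01 each.
print(split_bill(2002, 2))  # [1001, 1001]
# A $20.01 dinner cannot split evenly, so one friend pays the extra cent.
print(split_bill(2001, 2))  # [1001, 1000]
```

Exact-to-the-cent splits like these are what produce the $10.01 and $9.99 payments that observers in the study reacted to.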

DUBNER: Sorry, how do you, a disinterested observer—

NORTON: Here are two people. You don’t even know them; it’s not even a friend of yours, it’s just these two people. How do you feel about them? The $10/$10 person, they say, “Yeah, seems like a nice guy.” And the $10.01 person and the $9.99 person, they say, “I don’t like them.”

DUBNER: Either or one of them?

NORTON: Yeah, yeah. Don’t like them.

DUBNER: Is there more dislike for the one that does the $9.99 or no?

NORTON: Only slightly. One thing we tried to compare it to is generosity. So think: who do you like better, someone who pays you back $10 or someone who pays you back $10.03? Technically the $10.03 person is more generous. But they’re also really weird about money, and really petty. In fact, that’s how much we dislike this behavior: we like the person who paid us less, as long as they weren’t petty about it.

DUBNER: Also pennies are a pain in the neck. Let’s be honest.

NORTON: I’ve never seen them, I don’t know.

DUBNER: Okay. So what have you identified in the wild? Is it pettiness? Is that what you’re studying?

NORTON: Yeah, so pettiness is attention to trivial details. That’s the way to think about it. So it can happen with time, it can happen with all sorts of currencies where there’s these people in our lives who really seem very interested in the little, tiny minutiae of life and they tend to drive us crazy. And again they’re not wrong. They’re doing the math correctly. There’s no problem with it on one level, but for many of us they really, really drive us crazy.

DUBNER: Do you know anything about how pettiness works in let’s say a romantic relationship?

NORTON: Personally you mean? Or just from the research?

DUBNER: I didn’t mean to imply it, but I see that I did. So —

NORTON: So we ask people in relationships to rate their partner on all kinds of things — how generous they are, and also how petty they are, by asking, “Is your partner the kind of person who splits things randomly, or do they really care about dollars and cents?” The answers to that question predict not only dissatisfaction in the relationship. We also asked, “How upset would you be if your relationship ended?” And people with a petty partner are less upset when they think about their relationship ending.

DUBNER: I see that you’ve written about what you call two different kinds of relationships: “exchange” relationships and “communal” relationships. Is that the idea?

NORTON: Exactly. So, classic exchange relationship is with our bank. So we’re not offended at all if our bank gets things down to the cent. In fact we’re really upset if they don’t. Because the whole point of a bank is they’re supposed to be really good at dollars and cents. If your bank said, “We’ll just round it up.” “What are you talking about, it’s my money!” So you’re not supposed to do it over there. And in fact that’s why we get so upset in communal relationships, because our friends are treating us like a bank. They’re treating us like we’re a merchant and we owe them money.

DUBNER: All right. So let’s say I find this pettiness effect interesting, and I do. Though perhaps not all that surprising. Beyond the handful of people involved in one of these dollars-and-cents transactions, what are the larger ramifications here?

NORTON: What technology does — and again, it’s more efficient, it’s better, it’s an improvement — is start to default all of us into the dollars-and-cents world. There’s nothing wrong with that, but it does mean it can be eroding social capital. It’s actually good if I take you out for lunch and treat you, because then later you might take me out for lunch and treat me. And now we have an ongoing relationship.

DUBNER: I understand, Mike, that you’ve also done research on humblebragging. Is that true?

NORTON: Yes.

DUBNER: I mean you may not want to admit it but — can you tell us in a nutshell what a humblebrag is and when it’s good and when it’s bad?

NORTON: Katy and Angela tend to study things that are making the world a better place. And I tend to study things that I find annoying. And in that way I’m changing the world as well.

There are two kinds, actually: complaintbragging and humblebragging. So complaintbragging: whenever someone online says, “Ugh,” right after that it’s going to be a complaintbrag. Just wait for it. It’s always a complaintbrag. So they say, “Ugh, wearing sweatpants and everyone’s still hitting on me.” One of my favorite ones ever was, “My hand is so sore from signing so many autographs.”

So humblebragging — usually people recycle from Wayne’s World for some reason: “Not worthy.” Whenever you see that, it means here comes a humblebrag. “Not worthy,” and then, “So honored to be onstage with Katy Milkman and Angela Duckworth.” What I’m really doing is saying, “I’m onstage with really important people, but I’m acting all humble about it.”

The reason that people do these things, we can show in the research, is they’re feeling insecure. So I want to brag, always, because I want everyone to think I’m awesome. But I have the theory that if I brag, people won’t like me, because nobody likes a braggart. So we think what we can do is if we’re humble about it, then people will say, “Oh, what a nice guy. And also I learned that he knows celebrities.” And instead what people think is, “What a jerk.” So in fact we like braggarts, just straight-up braggarts, which is just saying, “I met a famous person.” We like them more than people who do this little strategy where they try to humblebrag.

DUBNER: Interesting. Mike Norton, thank you so much for joining us tonight. Would you please welcome our next guest. She is a professor of psychology and head of Silliman College at Yale, and she recently designed and taught the most popular course in Yale’s history, called “Psychology and the Good Life.” Would you please welcome Laurie Santos.

I understand that you — rather than wasting time working with humans, as all these other people have been doing — have been doing behavioral research with, and this makes my heart pitter-patter so hard, dogs. Yes?

Laurie SANTOS: That’s right. They’re just more fun than people.

DUBNER: So, I know you used to do, or maybe still do, some research with capuchin monkeys as well. Which makes me curious why, as a psychologist, you find it so compelling to work with animals.

SANTOS: Yeah, it’s a niche field, the whole dog-cognition, monkey-cognition thing. But I’m actually very interested in human behavior. Which is why I got interested in animals. Humans are so weird. There’s no other species that has a live radio show talking about their own species’ behavior, using technology like this, and human language. And on the one hand that’s sort of goofy, but on the other hand it raises this deep question, which is, what is it that makes us so special?

DUBNER: And if you ask 20 scientists, let’s say, from across a broad range of fields, you’ll get 20 answers about what makes humans unique, yes? What do you believe is the thing?

SANTOS: Yeah, I mean the top 10 are things like language, the fact that we can perspective-take, the fact that we can think about the future. We took a different take, though: all those answers tend to be stuff that makes us smart — we’re special because we’re so smart. I actually think a deeper thing might be that we have to worry not about the smart stuff but about some of the dumb stuff. We might be uniquely dumb in certain ways, or uniquely biased in certain ways. And we have to understand that if we really want to understand how human cognition works.

DUBNER: Is it possible that we are “wrong” so often as humans because we are so smart though? Because we think too much, think our way out of an obvious solution?

SANTOS: Yeah, that’s one possibility, is that some of the smart capacities we have might not be giving us the best answers all of the time. Take our future thinking, right? We get to think about all these other hypotheses and all these counterfactuals. And that gets us out of the present moment. That means we’re thinking about different kinds of things than we would be if we were just a monkey that was just taking it all in, in the moment. It’s sometimes our smarter capacities that end up making us look incredibly dumb.

DUBNER: Okay, so I want you to start by telling us how you do the dog experiments.

SANTOS: Yeah. We started with dogs in part because we built them to be like us. We, over this process of domestication, took a wolf, this wild canid, and said let me take a creature that can hang out with me, and therefore has cognitive abilities that can get along in human culture. And that means that we have a creature that’s ready to soak up our culture in lots of different ways. So if there’s anybody that’s going to be like us, any species that is likely to show our biases, dogs might really be one of those. That’s why we focus on them.

DUBNER: Are they test dogs? Are they regular dogs that you recruit?

SANTOS: Just like human subjects, we recruit them in the same way. So we put posters up and we say, “Do you want to bring your dog in for a study?”

DUBNER: What are you trying to get the dogs to think or do? And how does that compare to humans?

SANTOS: In one study we focused on a particular phenomenon that researchers call “overimitation,” which as you might guess is imitating too much. Here’s the phenomenon in humans. Imagine I show you some crazy puzzle box, you don’t know how it works. And I say, “I’m going to explain to you how it works.” I’m going to tap this thing on the top. I’m going to do all these steps and I open the puzzle box and I give it to you. If it was some hard-to-figure-out puzzle box, you might just copy me.

But imagine I give you a really easy puzzle box, just a completely transparent box. Nothing on it. It just had a door that you could open to get food out. But you watch me do all these crazy steps, I tap on the side, I spin it around a few times, I do all these things. You might hope that humans are smart enough to say, “That was a really dumb way to open the box. Give it to me, I’m going to open the door.”

But it turns out that’s not what humans do. Humans will follow slavishly all these dumb steps that they see someone else do, just in case. And we thought the same dumb copying behaviors that we see humans do, we should probably see in dogs as well.

Here’s how we set it up. We made a dog-friendly puzzle box, easy enough for the dogs to understand. So it was a transparent box with a lid that was really obvious, and if you flip the lid up you could get inside and get a piece of food. But we added this extraneous lever on the side of the box, and we showed dogs, “Hey here’s how you open it.” You have to move the lever back and forth, it takes a really long time, lever, lever, lever, lever, and then at that point you can open the box.

Now in theory, if we did this with a human, they would say, “I don’t really understand,” and then lever, lever, lever, lever, lever, lever, open the box. That’s actually what human four-year-olds do; there are some wonderful videos online where you can see this. And what do the dogs do? They ran over, lifted the lid, and got the food. What this is telling us is that we’ve created this species that learns from us a ton. They follow our cues all the time. But they’re actually smarter at learning from us than we are at learning from ourselves.

DUBNER: You mentioned a four-year-old human. Are you comparing the dogs to children or to adult humans?

SANTOS: Yeah, so this study we did was in direct comparison with a study that Frank Keil and Derek Lyons did at Yale University. They did this with four-year-old kids. And what they find is that four-year-old kids will slavishly imitate what they see even when you make the box so simple that a four-year-old could figure it out.

DUBNER: So you’re not saying that dogs are “smarter than humans.” You’re saying dogs are “smarter than four-year-old humans.”

SANTOS: The cutest version of the study is the four-year-old study. But you can make the box slightly more complicated and find that adult humans overimitate just as much. And if you don’t believe me, have one of the pieces of technology on your TV go out and have someone come in and say, “Well, you’ve got to move this wire to the HDMI thing,” and whatever. You will have no causal understanding of it, but my guess is you will copy exactly what that person does.

DUBNER: How would you then characterize: dogs are more ____ than humans in this regard? Is it more rational? Is it less susceptible to bad advice?

SANTOS: It’s that dogs are more careful about the social behavior they pay attention to. We just automatically soak up what other individuals are doing, often without realizing it. And dogs can learn from us if they need to, but they don’t have to follow us. In some ways they’re more rational in terms of the social information that they pay attention to.

DUBNER: Okay, so let’s flip it. Rather than critique ourselves, which may be a singularly human trait as well for all I know, let’s see what can we take from your research insight, and apply it to this general notion of making behavior change happen.

SANTOS: Yeah. What we get from this is that we have to be really careful in domains where we’re watching the behavior of other people. And this is something that we’ve known in behavior change for a long time. Behavior change researchers have a phenomenon known as “social proof.” When you see other people doing it, you think it’s a good idea. Most of the time we think of social proof, we think of good things. But there are all these domains in which it seems to go awry.

Classic work in the field of social psychology by Bob Cialdini found that if you hear that a bunch of other people are doing a dastardly thing, you become more likely, without realizing it, to do that dastardly thing too. What we’re realizing is that that’s not necessarily that old a strategy; this might be something that’s human-unique. And that leads us to ask, “Okay, why is our species using that strategy?” Maybe it’s good for something in some contexts.

DUBNER: Laurie Santos, thank you so much for being on the show. Great job.

*     *     *

DUBNER: Welcome back to Freakonomics Radio, recording live tonight in Philadelphia, where we’re learning about the science of behavior change. Would you please welcome our next guest. He’s an economist at the University of Wisconsin School of Business. His research specialties include risk and decision-making and insurance markets. Tell me that doesn’t get you all giddy with excitement. Would you please welcome Justin Sydnor.

Okay. Justin, I understand that you’ve done some interesting research on employers’ healthcare-plan options. Yes, is that about right?

Justin SYDNOR: Yes. We’ll see whether the audience agrees it’s interesting. So the backdrop here is that many of us now have choices to make about health insurance plans. And we’re all used to these horrible terms, like deductibles and co-pays and coinsurance. So I did a research project with a couple of co-authors, Saurabh Bhargava and George Loewenstein from Carnegie Mellon. And we got access to a company, a really big company, who decided to do something interesting. They embraced this idea that people should have control over their insurance. You should be able to decide, do you want a lot of insurance or not a lot of insurance? And there’s going to be different premiums tied to that. So they gave people an opportunity to select one of four deductible levels.

DUBNER: And what was the stated intention? Was the company saying to its employees, “We want to give you more options because you’re paying for the insurance, whether you know it or not.” It’s coming out of payroll essentially, right? Or was it the company essentially trying to profit-maximize?

SYDNOR: In this case they had a genuine belief that different employees would care more or less about how much insurance they had, and they were paying part of it through a premium share. They thought why should we dictate whether you have a high-deductible plan with lower premium or a low-deductible plan with a higher premium? So they can choose between different deductible levels, different co-pays, coinsurance, maximum out-of-pocket. So they can pull all these levers and they end up with 48 different possible combinations they could choose from.

DUBNER: Okay. And you have the real data, so you can see what people really choose. And can you also see what they actually spend in the coming year?

SYDNOR: Yes. And we can calculate how much would they have spent with a different plan.

DUBNER: Now, to be fair, it’s a little bit of a gamble right? When you buy insurance, you don’t know how much of this you’re going to need to consume. How do you factor that in?

SYDNOR: Well, this is the truly fascinating thing about this case. You’re right. Most of the time, I couldn’t tell you whether you made a good choice or not in your health insurance because it’s going to depend. You might get lucky and it turns out you didn’t really need much insurance. But if you bought insurance I wouldn’t have said that was a mistake. But in this case it actually turned out that most of the plans were a deal that no economist should take. So most of the plans were such that you were going to pay more for sure for the year if you chose that plan. Doesn’t matter if you turn out to be healthy or unhealthy. You’re going to pay more.

DUBNER: How so? It’s just a higher premium and down-the-road, the payments are worse?

SYDNOR: So it’s really the higher premium part that matters. What happens is the plans that had a lower deductible — say I wanted $500 instead of $1,000 — to get that plan I had to pay more than $600 extra in premium for the year. So best-case scenario, I might save $500. I get more insurance, but for sure I already paid over $600 for that.

DUBNER: And you’re talking equivalent benefits in those two cases.

SYDNOR: Yep. You can go to the same doctors, everything’s covered, all the prices are the same. So it’s really an interesting laboratory, where we can label something that, at least from our classic models, looks like a financial mistake.
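The arithmetic that makes a plan a deal “no economist should take” can be sketched as a dominance check: compare each plan’s total yearly cost (premium plus out-of-pocket spending) at every possible level of medical spending. The numbers and function names below are illustrative assumptions, not the company’s actual prices, and the cost model deliberately ignores co-pays and coinsurance.

```python
def total_cost(premium: int, deductible: int, spending: int) -> int:
    """Yearly cost of a simplified plan: premium, plus out-of-pocket
    spending up to the deductible (co-pays and coinsurance ignored)."""
    return premium + min(spending, deductible)

def is_dominated(plan_a, plan_b, max_spending=20_000):
    """True if plan_a costs at least as much as plan_b at every level
    of medical spending, and strictly more at some level."""
    pairs = [(total_cost(*plan_a, s), total_cost(*plan_b, s))
             for s in range(0, max_spending + 1, 100)]
    return all(a >= b for a, b in pairs) and any(a > b for a, b in pairs)

# Illustrative version of the trade-off described above: paying $600
# extra in premium for a deductible that is only $500 lower can never
# pay off, no matter how much care you end up needing.
low_deductible = (1600, 500)     # (premium, deductible)
high_deductible = (1000, 1000)
print(is_dominated(low_deductible, high_deductible))  # True
```

At zero spending the low-deductible plan costs $600 more; even past the deductible it still costs $100 more, so it loses at every spending level.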

DUBNER: Now, most people don’t like insurance for a number of reasons, including the fact that it’s a little confusing and intimidating. How much of this mistake, as you seem to be labeling it, is just a function of the fact that it’s hard to figure out?

SYDNOR: So one possibility is that this is just choice overload, and if we gave them fewer options, they’d be able to select more rationally. Another possibility is that insurance is just really hard, and even if you’re looking at just a couple of options, it’s going to be very hard to tell the difference between them. And the third possibility is that maybe people genuinely are willing to pay more to avoid the shock of a high deductible, even knowing for sure that it costs them more.

DUBNER: What about affordability? Because especially for a low-income employee, a smaller amount upfront is attractive. Cash flow is an issue.

SYDNOR: Yep, so in many ways, they were sort of making the reverse option. So what was happening is that they were opting into paying higher premiums, for sure, every month. Now they were potentially protecting themselves a little bit at the very beginning of the year. But over the course of the year they were going to end up paying more money.

The first thing we did is we wanted to figure out, okay, is this the choice overload? Is it the weird thing of 48 plans? And we ran some online choice experiments where we tried to replicate this sort of thing. And what we found very quickly there is you get exactly the same patterns if you just give people four plans or two plans. So it’s really not about choice overload. It’s fundamentally that when people look at insurance, they can’t combine the premium and these out-of-pocket costs and make what looks like the rational math calculation.

DUBNER: Do you think that long ago some insurance company made the very sneaky, wise choice of calling the payment a premium, which sounds like a great thing?

SYDNOR: My general sense from studying insurance is that in the history of the insurance market, few people have made really wise choices. As evidenced by the fact that when you say you hate insurance, everyone in the room nods along.

DUBNER: Okay. So here’s what I’ve learned from you. We’re bad at buying insurance. We’re bad at buying insurance in part because the way it’s described makes it easy for us to be bad at it. Maybe some of the fault lies there. So the big question is, again, let’s flip it, what’s the good news here? How can you take this research insight and apply it to this notion of helping more people make better choices, whether it’s more people on an individual level or societally?

SYDNOR: So the good news is there are ways of making it way easier. I can add it up; I can show people. And we’ve run some little experiments, and it looks like if you make it easier to compare the plans, you can really easily inform people and improve these choices.

But maybe the bigger implication is that we should just stop giving people choices about this. The only really good reason to give people choices is that we think they might want to sort into plans that are good for them, plans that match their risk aversion.

DUBNER: But we’re really like four-year-olds with a box that’s really hard to open, and we should just bring in the dogs and let them choose our insurance?

SYDNOR: Exactly. Let the dog choose your insurance.

DUBNER: Justin Sydnor, thank you so much. Great to have you here tonight. Let’s welcome our next guest. He is one of the most revered and prolific scholars in modern psychology. He helped identify all sorts of cognitive biases and illusions. He’s also the author of one of my favorite books ever in the world called How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life. Would you please welcome, from Cornell University, Tom Gilovich.

Tom Gilovich, I understand some of your latest research is on regret, which I’d love to hear more about. And really how it fits into your body of work.

Tom GILOVICH: Sure. There are two types of mistakes we can make in life: mistakes of action and mistakes of inaction. And therefore, two types of regrets. The question is, what do people regret more: mistakes of action or mistakes of inaction?

An example that everyone in the audience can relate to: go back to your days as a student. You’re taking a multiple-choice test, question number 20, you check B. You’re going on, and then at question 24 you say, “Wait a minute. Back up. Go back to question 20. I don’t think it’s B. I think it’s C.” Now you have a dilemma. Do you switch to C? You could make a mistake in doing that. Or you could stay with B. You could make a mistake doing that. Which mistake hurts more? We all recognize that if you switch from the right answer to the wrong answer, you’re going to regret that more.

DUBNER: So all the topics that we’ve been hearing about tonight, whether it’s going to the gym, buying insurance, the way we behave with other people in a social setting, you can imagine scenarios by the billion where you make a choice and regret it. By looking at regret as you have, have you started to learn anything yet about how to just think about optimizing our decisions right here and now?

GILOVICH: Sure. The other side of the regret story is that you only sometimes regret action more than inaction. If you ask people what their biggest regrets in life are, they tend to report regrets of inaction. How do you reconcile those two? The reconciliation is that you feel more immediate pain over a regret of action but, partly because it’s so painful, you do things about it. You think of it differently, and one of the ways you can come to grips with an action is to say, “Well, it was a mistake, but I learned so much.” It’s hard to learn so much by not doing something new.

Over time, these painful regrets of action give way to more painful regrets of inaction. And what kinds of inactions do people have? When we interviewed people (and we’ve interviewed college students, prisoners in a state prison, a sample of geniuses), in group after group after group, a very frequent regret was not doing something because of a fear of social consequences. “What will people think?”

DUBNER: And that calls to mind some of your early research about the spotlight effect, right? Which is we tend to think that people really care about us much more than they do. Yeah?

GILOVICH: Yes. In fact, we did that research right on the heels of the research on regret. As David Foster Wallace put it, “You won’t mind so much how people judge you when you recognize how little they do.” And people often don’t do things that are in their interest because they’re afraid it would be embarrassing. I don’t want to go to the gym because I’d get on a treadmill next to someone who’s going a mile a minute and I can’t keep up with that, or I can’t lift those weights.

DUBNER: But let me ask you this: when you mentioned interviewing prisoners, what my mind jumps to is the obvious regret: “I regret doing the thing that turned me into a prisoner.”

GILOVICH: Yeah. They have slightly fewer regrets of inaction than the general population, but still the majority of theirs are regrets of inaction. Now, it’s not that they don’t regret the things that got them into prison, but the way they talk about them often focuses on an inaction. “If only I’d done this, I wouldn’t have gone down that path.” “If only I’d convinced the lookout to be on his toes.” So even they tend to focus on things that they didn’t do.

DUBNER: So Tom, I’m just curious — we’re on the subject — I really admire your work. I’ve admired it for years. I want to know your biggest regret.

GILOVICH: Okay, it’s easy. It’s a regret of inaction. I didn’t think of this until five years after I got married, when I recognized, “I have a solution to the naming problem.” What name do you take? We live in a world where the custom is sexist: a woman takes the man’s name. In other cultures they combine them, but that only works for one generation. You can’t have a multiplying name. So what to do?

My regret is I didn’t think of it on the eve of my wedding. What I would like to have done, without telling anyone about it: the ceremony goes, and then at the very end I say, “Wait a minute. There’s one more thing. We’re going to flip a coin to decide what the last name is.” Because it’s fair. But what I like even more about it is that we don’t have any cultural institutions that celebrate chance, and chance is a huge part of our life. I think it’s my best idea. And unfortunately, it came five years too late.

DUBNER: Tom Gilovich, thank you so much for being on the show tonight.

GILOVICH: Thank you.

DUBNER: So what have we learned tonight? We have learned that humans are regretful, although not necessarily in the right direction. We’re also not very good at buying insurance. We are dumber than dogs, and that’s not a humblebrag. That’s an actual thing. And we are really petty. To make sense of all this and maybe to give us a little hope, I’d like to introduce you to our final guest. He is a recent Nobel Prize recipient. Not the Peace Prize, I’m afraid. Not even the literature prize. It’s just the prize in economics. I’m sorry. It’s the best we could do. So would you please welcome the University of Chicago economist Richard Thaler.

Richard Thaler, any day I get to talk to you is a great day. Thanks for being on the show.

Richard THALER: My pleasure.

DUBNER: So you are best known as a primary architect of what’s come to be called behavioral economics, also as co-author of the wonderful book Nudge: Improving Decisions About Health, Wealth, and Happiness and the resultant Nudge Movement. So let’s start with that: how would you describe a nudge?

THALER: So a nudge is some small — possibly small — feature of the environment that influences our choices but still allows us to do anything we want.

DUBNER: Okay. So I would argue that the most successful nudge, and the greatest triumph to date of behavioral economics, has been your work, done with Shlomo Benartzi a couple decades ago, in the realm of retirement savings. You argued that rather than relying on people to opt in to their 401(k), and fill out the 8,000 pages of paperwork and choose from a million investment options that confuse and intimidate people, that it’s better to just automatically enroll them. This has resulted in millions of people saving billions of dollars for their retirement.

So: congratulations, and thank you. But: what does it say about the field of behavioral economics, and behavior change generally, that this largest victory took place a couple decades ago? Where are all the other victories?

THALER: The retirement-saving initiative has been a success because we’ve been able to convince firms that organize retirement plans to make them much simpler. So the choice architecture is simpler. As you mentioned, people are automatically enrolled, so they don’t have to fill out any forms. Then their rates are automatically escalated, and they’re given a default investment fund. So it’s all easy.

It’s no accident that that was a success, because the fix was easy. Give me a problem where I can arrange things so that, by doing nothing, people make the right choice; that’s an easy problem.

DUBNER: Well, one feature, in this case at least, is that it’s a one-time fix. When Angela and Katy were talking earlier about their efforts to get people to go to the gym during the treatment period and then to keep going afterwards, that’s having to win the battle every single day. Do you think that too many potential fixes are aimed at essentially unfixable behavior?

THALER: No. Because Katy and Angela have infinite energy, unlike me. And if they can solve the problem of getting people to go to the gym or eat less or take their medicines, I’m all for it. When they started this project, my reaction was, “Ooh, this is hard.” And the simple things may not work.

DUBNER: So this Behavior Change for Good project includes a lot of psychologists. What do economists know or have to offer that psychologists don’t? And if your economist ego allows you to say so, vice versa as well?

THALER: Oh, psychologists know a lot more about that than we do. Economists don’t know much about how people form habits or when they stick and when they break.

Let me give you an example that relates to what Mike Norton was talking about. I was in London, invited to a meeting where they were trying to reduce binge drinking. And they asked me if I had any nudge-like ideas. You’re smiling because you think this is a matter of personal importance, but I suggest neither of us go there. In England there’s a hallowed tradition of buying rounds. The way it’s done at the pub is you go with your mates, and I buy the first round, you buy the second round, and so on until we’ve each bought a round.

Now this has obvious problems if the number of people in the group is more than, say, three. What I suggested was that pubs institute a new policy, which is, for groups of more than three, they run a tab.

Well, this was supposed to be a private meeting, but it leaked to the press and I got hate mail. People would say, “I would never dream of leaving the pub without buying a round for my friends.” And they come with a group of eight. And this is nothing but trouble. I can think up that change in the choice architecture, but how would you get it to change?

Well, I made no progress. We’re human. We have self-control problems. We’re absent-minded. We get distracted. And those things aren’t going to go away. Technology is likely the best answer. Self-driving cars will drive better than us very soon.

DUBNER: I’ve heard you talk about the opposite of a nudge as sludge. Can you describe what sludge is and give an example?

THALER: Nudges typically work by making something easy, like automatically signing you up for the retirement plan. Sludge is the gunk that comes out as a byproduct. I’m using it for stuff that slows you down in ways that make you worse off. So, for example, suppose there’s a subscription that renews automatically, but to unsubscribe you have to call.

And I had this experience. The first review of my book Misbehaving: The Making of Behavioral Economics came out in the Times of London. My editor sent me an email excitedly telling me this and sending me the link. And I log on and there’s this paywall. And I said, “Oh, I can’t read it.” But there’s a trial subscription for one pound for a month. And I said, “Oh well, I’m willing to pay a pound to read the first review of my book.” But then I start reading the fine print and in order to quit, you have to call London, during London business hours, not on a toll-free line, and you have to give them two weeks’ notice. That is sludge.

DUBNER: So you’re still reading the Times of London, I assume.

THALER: I called my editor and told him that he should buy the subscription and then send me a PDF.

DUBNER: All right. I have a final question for you. You mentioned habit formation, which to me is at the root of just about everything we’ve been talking about tonight. And some habits get formed intentionally, others not. Some habits are good. Some are not. I’m really curious to know, what’s a habit that you never acquired that you really wish you had?

THALER: Doing my homework.

DUBNER: In school, you were not a homework doer?

THALER: I was not a great student.

DUBNER: Yeah, so how does this happen that a guy who is an admittedly not-very-good student, who apparently didn’t do homework, gets a Nobel Prize?

THALER: Well, listening to Tom, maybe it was that I was less fearful of embarrassment than my colleagues. Much of my career was similar to the kid who points out that the emperor is naked. Few of my economist colleagues were willing to say that, and I was willing to be ridiculed.

DUBNER: Where do you think that lack of embarrassment came from?

THALER: Possibly stupidity.

*     *     *

Freakonomics Radio is produced by Stitcher and Dubner Productions. This episode was created in partnership with WHYY and was produced by Zack Lapinski, Alison Craiglow, Greg Rippin, Harry Huggins, and Corinne Wallace; our staff also includes Matt Hickey, and our intern is Daphne Chen. Our theme song is “Mr. Fortune,” by the Hitchhikers; all the other music was composed by Luis Guerra. You can subscribe to Freakonomics Radio on Apple Podcasts, Stitcher, or wherever you get your podcasts.

Here’s where you can learn more about the people and ideas in this episode:

SOURCES

  • Angela Duckworth, University of Pennsylvania psychologist and author of Grit.
  • Katherine Milkman, professor of operations, information, and decisions at the University of Pennsylvania.
  • Michael Norton, psychologist and professor at the Harvard Business School.
  • Laurie Santos, professor of psychology at Yale University.
  • Justin Sydnor, professor in risk management and insurance at the Wisconsin School of Business.
  • Thomas Gilovich, professor of psychology at Cornell University.
  • Richard Thaler, Nobel laureate and professor of behavioral science and economics at the University of Chicago Booth School of Business.

RESOURCES

EXTRA