Episode Transcript

Stephen DUBNER: Thank you so much, and welcome to this special episode of Freakonomics Radio. We are recording live tonight in London, where nearly 10 years ago a quiet revolution began. It was headquartered at the very center of the U.K.’s central government, and it promoted something that you wouldn’t think would necessarily need promoting, which was government policy-making based on actual empirical evidence. Wouldn’t it make sense for governments to design policy based on such research rather than on opinion polls or personal whim or worse yet, the highest bidder?

This was the revolutionary idea behind the establishment, in 2010, of the Behavioral Insights Team, or as it’s more commonly called — sometimes with affection, sometimes ridicule — the Nudge Unit. Their mission was to translate social science research into simple and inexpensive policy ideas that would help collect taxes more efficiently, get the unemployed back to work faster, perhaps even increase happiness and well-being. And so tonight we have come to the epicenter of this revolution to learn its history, its progress and failures, and its future. Now, we should say the nudge movement has not been without its critics. It’s been accused of overreach, of arrogance, of naiveté. The entire enterprise calls for a certain amount of skepticism. And so we have asked a professional skeptic to join us tonight. He’s also the host of The Bugle podcast and a professional cricket commentator. Would you please welcome the great Andy Zaltzman.

Andy ZALTZMAN: Hello. Thanks for having me.

DUBNER: Have you — do you have any thoughts on this nudge movement-slash-revolution?

ZALTZMAN: So, I don’t understand economics, and I sincerely hope I die that way. I’ll consider that a life very well lived if I get to the end of my life still unable to understand this mysterious witchcraft. Fundamentally, it’s witchcraft in a pinstriped suit, economics — it’s the art of telling people exactly what’s going to happen and then explaining why it didn’t. But anyway, nudge theory was developed, of course, by the mafia — one of the world’s most enduringly successful franchises.

Now, nudging had a resurgence after a scheme that put pictures of insects in urinals in a Dutch airport, because men are notoriously easily distracted as a species. And it was found that giving them a target to aim at provided a little incentive to stop spraying the entire room with unstoppable jets of waz. So it was highly successful. That is all about gentle prods to change behavior rather than extreme threats. Because one of the flaws in us as a species — the human race — is that we don’t really care about massive existential threats. The big nudge of the increasing prospect of a relatively imminent and high-grade Armageddon doesn’t seem to be motivating enough. It’s too big and it’s too vague. So perhaps smaller nudges might work. Maybe all plastic bags should be forced to be shaped like a dolphin, which might make us think more carefully about them. And maybe cabin crew on short-haul flights should be penguins, trained to say very pointedly, “I’m afraid we have run out of ice.”

DUBNER: Andy Zaltzman, your perspective is most appreciated. Let’s hear now what the actual nudge practitioners have to say. Our first guest tonight is a psychologist and political scientist by training, whose career has toggled between government service and academia. It was he who launched the Nudge Unit under Conservative Party prime minister David Cameron. Years earlier he worked in the Prime Minister’s Strategy Unit under Labour Party P.M. Tony Blair. Would you please welcome the chief executive of the Behavioral Insights Team, David Halpern.

So David, as I understand it, you created the Nudge Unit with a two-year sunset clause in case it wasn’t working. At the beginning, what probability would you have assigned to the likelihood that it would turn out the way it actually has?

David HALPERN: I think certainly no more than 50/50. That was why we set it up with a sunset clause. Governments across the world are full of units that people set up that seemed a good idea, and they never get around to shutting down. And of course, that’s true for a lot of government policy. You set things up. We don’t know whether they work, and we carry on doing them nonetheless.

DUBNER: So of all the projects that BIT, the Behavioral Insights Team, has rolled out — regarding energy savings and tax compliance and I guess the biggest one is using automatic enrollment to increase pension savings — if you could put a number on it, what would you say is the median rate of improvement overall?

HALPERN: So, one of the key points is, most things don’t work. And that’s actually quite a difficult truth for people to come to terms with. There’s probably half a dozen which are sort of billion-dollar-impact plus. And then you’ve got quite a lot of things which are quite small impacts, and lots that don’t work.

DUBNER: Name a billion-plus project, please?

HALPERN: Top ones would be things like pensions. It’s a famous one. Getting people to pay their tax on time, now very widely used and replicated across the world. Less well-known: interventions on e-cigarettes, we think, are definitely a billion-dollar-plus in terms of their impact.

DUBNER: So e-cigarettes in the States, as I’m guessing you know well, are going toward the direction of what looks like it may be — not a total ban but very significant regulation. Can you tell the story of e-cigarettes here?

HALPERN: So we took a view, back in 2010-11, when the first reaction of a lot of folks in the health community was, “we should ban these things.” And we felt, on the basis of addictive behavior — you’re trying to introduce a substitute, it’s likely to be pretty effective. That was very controversial at the time. More recent reviews by Public Health England estimate that circa 40,000 extra smokers a year quit. So that is an absolutely enormous effect size if you work it out. Even if you discount it in the future, that is easily a billion-dollars-a-year impact.

DUBNER: There’s a lot of advocacy against what’s called “harm reduction,” right? People don’t want to introduce something that’s a half-step. So in this case, the use of nicotine but not delivered through a cigarette. Who were the biggest opponents of the suggestion that e-cigarettes be allowed?

HALPERN: Yes, there’s a whole industry, remember — understandably — who spent 30 years working against tobacco, and then someone comes along with a thing that looks a lot like a cigarette. You can understand the deep suspicion. So you’ve got a whole set of institutional and professional first instincts which are against.

DUBNER: Now the U.S. is moving probably toward a much stricter regulation than here. Have you had conversations with public health officials there, or what would you say to them to suggest that perhaps your route is, long-term, a better route?

HALPERN: Yeah well, we did take the view that we should regulate them. So for example, we really didn’t want them sold to kids. We didn’t want them to recruit new folks. There are regulations around: are they safe? Will they literally blow up? Regulations around dosage. There are actually quite a lot of regulations, so it’s not that we left them unregulated. In fact, that was one of the key points, that it really matters that they do deliver enough nicotine to actually be effective. That remains, as it happens, an open issue, because the regulation caps the strength. So it’s not that it’s not regulated.

E-cigarettes in the U.K. have been handled so differently than in the U.S. — where, as you’ve likely heard, there have been more than 30 deaths and many serious injuries from vaping — that our episode next week will be devoted to this issue. Because if you’ve only been reading the headlines about vaping in the U.S., then you’re missing out on a lot of the story. Okay, back to our interview with the founder of the Behavioral Insights Team, David Halpern.

DUBNER: So let’s talk about the tax revenues produced by better letter-writing. Walk us through: what was the problem, what were the proposed and trialed solutions, and what turned out to be most effective?

HALPERN: So if you think about it, most revenue services are in the business of trying to get people to do something which they may or may not be super happy to do, which is to pay their taxes. On the other hand, you definitely don’t want to pay your neighbors’ taxes if they’re not paying theirs. But for example, Stephen, in the unlikely event you were late paying your tax, you’d get a letter saying, “C’mon.” And they would send the same letter to everyone, which is some form of threat or whatever.

And then, famously, the early trial was, “Why don’t we just tell people something which is true?” Which is, “most people pay their tax on time.” And adding that one line — “9 out of 10 people pay that tax on time” — would that lead to people then just paying up without further prompt? With no further action? And the answer is yes, it did, and indeed we tried multiple variations. Back then, that was unbelievably controversial. I just can’t tell you how —

DUBNER: Because why?

HALPERN: Well it was felt — lots of reasons why — “What, you want to start experimenting on people?” And, “the system wasn’t built to do it.” There were even questions about, “How would you analyze it?”
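For readers curious how a two-arm letter trial like the one Halpern describes can be analyzed, here is a minimal sketch, assuming the outcome is simply whether each recipient paid without a further prompt. The counts below are hypothetical and the two-proportion test is an illustrative choice, not a description of BIT’s actual analysis.

```python
# A minimal sketch of evaluating a two-arm letter trial: compare the share of
# recipients who paid without further prompting in each arm. All numbers are
# hypothetical, for illustration only.
from statsmodels.stats.proportion import proportions_ztest

paid = [2_000, 2_300]        # payers: [standard letter, social-norm letter]
recipients = [5_000, 5_000]  # letters sent in each arm

z_stat, p_value = proportions_ztest(count=paid, nobs=recipients)
lift = paid[1] / recipients[1] - paid[0] / recipients[0]

print(f"payment-rate lift: {lift:.1%}, z = {z_stat:.2f}, p = {p_value:.4f}")
```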

DUBNER: Andy Zaltzman, have you ever gotten one of these lovely tax letters?

ZALTZMAN: Not that I’ve noticed, but I don’t often open my post. But tax is essentially the original form of crowdfunding, but it’s got a bad reputation over the years. Really it needs to be done more positively. So do it like an appeal, almost like a charity, get a leaflet saying, “Sponsor a new chemistry lab in your local school.” Well that’s basically what tax is, isn’t it? Or, “Get Deirdre a new hip,” instead of just saying, “You’ve got to pay your tax.” Make it more positive. And also, let’s have some gratitude. Let’s have a nice thank-you letter. “Dear Mr. Zaltzman, thank you very much for your tax. People have been so generous this year,” and then telling you what they spent it on.

HALPERN: Well, you may be joking but I think half those ideas at least are worth testing.

ZALTZMAN: We can build a better world!

HALPERN: There is some evidence that giving people some say in — “What would you want to pay your tax on” — “I’ll say no to the nuclear warheads but I’ll say okay to the, you know” — “If there was a marginal extra pound or dollar, what would you prioritize?” — that actually people do feel better about it. And one of the quite deep questions even buried in there is that people do feel quite good when they give money to charities, right? In fact, as you’ll know from the behavioral literature, better than they think that they’ll feel. Why not when they pay their taxes? Why wouldn’t you feel good that you’re supporting schools and hospitals and so on? So one of the objectives of a tax authority, in my view, should not only be to collect the revenue, but actually help people feel okay about it. Why not?

DUBNER: It’s lovely to hear about all these successes, and impressive. I would like to hear a spectacular failure, please.

HALPERN: You want to hear some of the things that didn’t work?

DUBNER: All of them.

HALPERN: So, infrared images of homes showing how much heat they’re losing. Putting them on a request to get your home insulated turns out to make people significantly less likely to get their home insulated. There was good lab work suggesting it was a good idea, but it looks like people were like, “Oh, that looks warm. That’s cozy. I’ll keep it.” Getting people who go to major airports to switch to public transport — really big effort — did absolutely diddly-squat.

Grit: we’ve done a number of interventions with 16- and 17-year-olds. Some worked incredibly well, but we found that a grit-based intervention, at least in the U.K., did increase attendance rates but didn’t increase pass rates. Another was getting managers to be more sympathetic towards, basically, senior female staff. We spent a long time with a lot of academics designing this perfect intervention. It had the exact reverse effect. Getting male managers to be attentive in this particular way made them, ultimately, less sympathetic.

DUBNER: And did it produce a bunch of sexual-harassment lawsuits in the process?

HALPERN: No, it didn’t work. But it does matter. Essentially, the failures are much less memorable. Kind of logically: you don’t want to remember all the terrible recipes that didn’t work, right? You want to lose them. You want to remember the ones that did work. But actually, for this field it is really important. It’s quite a big lesson.

And a quite serious point is we work with a lot of governments, a lot of departments across the world. Everybody wants to celebrate and talk about the examples that work. The fact is, you should expect the majority of things you try, if they’re innovative and quirky, will not work. Right? That’s really important. And how do we get that information circulating? We need to be open about that. And actually, we need that list. We need that list available so that we don’t keep trying the same things and repeating the failures.

DUBNER: Very good. David, we’d like to bring you back toward the end of the show to talk about the future of nudging. Let’s say goodbye for now. Ladies and gentlemen, that was David Halpern. It is time now to hear from some of the people who’ve been putting these nudging policies into practice. A lot of the behavioral science research we’ve talked about on earlier shows is about improving long-term health and welfare, both individually and societally. Tonight, we’re going to focus on some more immediate, high-stakes scenarios that you might not typically associate with nudging. Specifically: medical care, policing, and firefighting. So, our next guest is chief fire officer of West Sussex Fire and Rescue Services. She also holds a Ph.D. in psychology and she’s the author of a book called The Heat of the Moment: Life and Death Decision-Making from a Firefighter. Would you please welcome Sabrina Cohen-Hatton.

Sabrina COHEN-HATTON: Lovely to be here. Thank you.

DUBNER: So firefighting wouldn’t seem — at least to me — to be an area where behavioral science would have that much to say, or maybe it’s just a failure of my imagination. So tell me where I’m wrong and the kind of work you’ve done.

COHEN-HATTON: Oh, now you are wrong. It is exactly the place for behavioral science. So I’ve done a lot of research over the last decade looking at how incident commanders make decisions in very high-pressure, high-stakes situations. And the reason for that is that a huge amount of the firefighter injuries we have are caused by human error. Eighty percent of accidents — across all industries, by the way, not just fire — are caused by human error. Not a problem with a piece of equipment, not a problem with a policy or a procedure, but a human mistake. The wrong choice in the wrong place at the wrong time.

DUBNER: So when you talk about bad decision making under stressful environments like that, I’m curious to know whether the primary driver is the uncertainty of what may happen or the physical pressure of time and danger.

COHEN-HATTON: Well, it depends how you quantify “bad,” because sometimes you can make the best decision, but the outcome is still going to be a terrible one. And one of the things that we find in the fire service is something called decision inertia, where you are paralyzed by all of the thoughts about what could happen and the repercussions and the accountability on yourself. And in some situations, you literally have to choose the least worst option. So we developed some techniques called decision controls, because essentially what we wanted was for them to give commanders back control.

DUBNER: Andy, how would you feel about having something called a decision control attached to your own daily work?

ZALTZMAN: I think I could very much do with it. I get decision inertia in restaurants. So yep —

COHEN-HATTON: So the decision-control process is essentially a rapid mental check where commanders ask themselves: Why am I doing this? What’s my goal? How does this link up with what I’m trying to achieve? And they then ask themselves: What do I expect to happen? And then finally: How does the benefit justify the risk? When commanders were using this process, we measured the latency between making a decision and actioning it, and there was no increase. So it didn’t slow down decision making. So that was really good. But what we also found was it increased their levels of situational awareness quite significantly.

DUBNER: And how is the training done then? Is it classroom training? Is it a simulation of a fire?

COHEN-HATTON: A bit of everything actually. They obviously need some input first of all in a classroom, but you need to be able to use it practically. So a lot of the work that we do is simulation-based. And to make it more realistic, we get derelict buildings and before they’re knocked down, we just set fire to them.

DUBNER: Yeah. Andy I could see you loving that.

ZALTZMAN: Well, maybe — the Houses of Parliament is basically derelict. Have a go at that.

DUBNER: Has your research led to changes in policy across the country?

COHEN-HATTON: Yes.

DUBNER: And what is the evidence that it is successful, and how do you seek out that evidence?

COHEN-HATTON: So, we developed these techniques and we tested them in a range of contexts. So we were using virtual reality. Then we repeated the same stuff on a training ground, and then we were burning buildings down and testing it in that environment. And when we analyzed the data afterwards, what we found is that the commanders that were using that process were significantly more goal directed, which was great. But you’re not going to collect everything by observing, right? Especially in that kind of dynamic, risky environment. So we strapped GoPro cameras to commanders’ heads, so we recorded all of that information, all of that data. And, critically, what we got from that is all of the information that they didn’t attend to, as well as what they did.

DUBNER: Do you have them watch the tape later and tell you what they were thinking and what they actually — yeah?

COHEN-HATTON: So we developed something called a cue-recall debrief, where we essentially were playing back the video to them, which provided a cue to recall their memory and got them to talk us through their thought process at the specific point in time, precisely in relation to what they were seeing.

DUBNER: Can you give an example of a resulting action then that would have been better than before?

COHEN-HATTON: So, what we found with the initial research is that 80 percent of the time, the decisions that were made were very intuitive. So for example, I’m going to put water on that fire, perfectly reasonable. You see and you do. But actually if you’re putting water on that fire but you haven’t joined up with the rest of the information that you’ve got about what’s on fire, which might be oil or petrol, where water is not actually going to help you in that circumstance, then you’re going to end up with the wrong outcome.

DUBNER: You respond to a lot of things these days that are not fires, correct? I mean, on your calls, what share of those are actually fires?

COHEN-HATTON: Actually, the number of fires that we go to has reduced over time, and we go to a much broader range of incidents. But interestingly, the number of incidents that we go to overall has reduced in the last 10 years by about 50 percent. And we do a lot more preventative work, there’s been some changes in legislation, which have assisted. But what that means for firefighters —

DUBNER: Is you’re going out of business.

COHEN-HATTON: Well no, because there’s always the risk, and we have to resource our fire services to risk, not to demand. But for our firefighters, it means you’re having less experience of the kind of things that you’re expected to deal with. So we’ve got to be much smarter now at making sure that we train in a realistic way, so people don’t just have the skills, but they have the confidence to deal with those situations.

DUBNER: Sabrina Cohen-Hatton, thank you so much for joining us tonight. Our next guest has been a policeman for more than 20 years. He holds a master’s degree in criminology from Cambridge. Ten years ago, he started the Society of Evidence-Based Policing, which now has more than 3,000 members around the world. He spent most of his career with the West Midlands Police but recently became commander for specialist crime with the Metropolitan Police here in London. Would you please welcome Alex Murray. Alex, great to have you here. So the Metropolitan Police is essentially the London equivalent of the New York Police Department. Yes?

Alex MURRAY: Yeah. Absolutely. 30,000 officers, 10,000 staff. It’s a big outfit.

DUBNER: And I’m guessing that policing is full of conventional wisdom that is not, in fact, very wise. Is that a correct assumption?

MURRAY: There’s a very famous criminologist called Ken Pease who said, “You can have a career in policing, or you can have a year in policing just lived over and over and over again.” And senior officers have done one year on the front line, and then come out and they still refer back to their first six weeks when they came out of college. So from an evidence-based policing point of view, we try and meld the evidence of what is effective with that really good experience that people have — put them together.

DUBNER: Now when I think of, again, kind of like with firefighting, when I think of behavioral science and policing, I really can’t conjure up many images for how it works. Do you run randomized controlled trials? What do you do?

MURRAY: Yeah, we run randomized controlled trials, we run lots of things. Most things, or a lot of things, just aren’t effective, and we’ve got to work out, particularly in an austere time, what is effective and what isn’t.

DUBNER: All right. Give us an example of a success and how it came about.

MURRAY: Okay. So, the whole point of policing is around changing behavior. So how do you stop people committing crime? So if you are subject to a burglary, sadly, you’re at increased risk of being subject to a burglary again. And in the very near future. But not only that. Your neighbors are, and your neighbors’ neighbors, up to 400 yards either side of the house.

DUBNER: The reason being simply that it worked once, and it will work again or is there —?

MURRAY: Yeah. So, if you are a burglar you go, “It worked last time. I’ll go back. I know how the house is set up. I wasn’t caught last time.” There’s a sort of logic there. So in a big city in the U.K., we split the city along political boundaries into pairs, matched on socio-demographic characteristics, and in each pair one area got the treatment and one was the control. And in the treatment area, if you were one of those houses that was at a high risk of burglary, or near-repeat burglary, i.e., you were a neighbor, within 24 hours we sent an officer around who had a big sticker of an Alsatian and they stuck it on the door.

DUBNER: An Alsatian is a dog?

MURRAY: It’s a big German shepherd dog. So if any burglar looks at my window, they’ll think, “Well, there’s a dog in there.”

DUBNER: I could have a dog-shaped sticker without an actual dog inside, though. Correct?

MURRAY: Yeah, dog stickers on windows. They also had some window alarms and things like that. In the control areas, they got business-as-usual policing. And people could use their initiative — the police officer in charge of that area could do what they wanted. We call it target hardening; it’s not unique. And we tested the survivability of those houses. So over a period of 700 days, what was the chance of them being re-victimized compared to the control?

DUBNER: Yeah. And how’d you do?

MURRAY: From a metric point of view, there was less repeat victimization in the test area compared to the control area. In the lower-crime areas, actually, we did see statistically significant reductions in repeat offending, in repeat victimization.
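To make the “survivability” comparison Murray describes concrete, here is a minimal sketch using simulated, hypothetical data: each house gets a time-to-re-burglary within a 700-day follow-up window, and treatment is compared with control via Kaplan-Meier curves and a log-rank test. The hazard rates and the use of the lifelines library are illustrative assumptions, not the force’s actual analysis.

```python
# Hypothetical sketch of a 700-day "survivability" comparison between
# treated houses (dog sticker, window alarms) and control houses.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
FOLLOW_UP = 700  # days of follow-up

def simulate(n_houses, daily_hazard):
    """Simulated days until re-burglary, censored at the end of follow-up."""
    days = rng.geometric(daily_hazard, size=n_houses)
    observed = days <= FOLLOW_UP          # True if re-victimized in the window
    return np.minimum(days, FOLLOW_UP), observed

t_treat, e_treat = simulate(1_000, daily_hazard=1 / 2_500)  # assumed lower risk
t_ctrl,  e_ctrl  = simulate(1_000, daily_hazard=1 / 1_800)

km = KaplanMeierFitter()
km.fit(t_treat, event_observed=e_treat, label="treatment")
print(km.survival_function_.tail(1))      # share still un-victimized at day 700

result = logrank_test(t_treat, t_ctrl,
                      event_observed_A=e_treat, event_observed_B=e_ctrl)
print(f"log-rank p-value: {result.p_value:.4f}")
```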

DUBNER: Andy, when you burgle a home do you typically return to the same home or neighborhood or move on?

ZALTZMAN: That was supposed to be our special secret, Stephen. No, I admit I — when I break into people’s homes, I like to do something positive, I like to leave something there rather than take something. Leave a nice antique or a bottle of cordial or a jar of pickled onions. Let’s get that positivity back into crime.

DUBNER: Alex, I understand you ran another experiment having to do with collecting money from speeding tickets?

MURRAY: Yeah, in the U.K., you get speeding fines normally from automatic cameras. They spot you and they send you a letter, and it says, “Were you the person driving this car?” And you can then elect to have points on your license or go to a driver-improvement course. Now I have to confess I’ve had one, and it was written in legal jargonese, probably fulfilling all our legal requirements — so hitting the target, missing the point.

You know, policing is full of police officers and quite a lot of lawyers. And we speak and we write in a certain fashion. We don’t have this concept of person-centered design. And we experimented: for one week we just sent out that letter. For the next week we sent out a different letter. And that letter had a picture of a lamp post with some teddy bears around it, and data on the number of children killed by speeding motorists in the area. And the next week, back to the normal letter; the next week, back to that letter. And we did that on and off. And that sort of experiment is being replicated around the world because it’s really important —

DUBNER: So, but you didn’t tell us, did the teddy bears work? Did the shrine to accident victims increase payment?

MURRAY: We are working through that data at the moment. It’s been replicated here in London and we didn’t find any statistically significant results. So, as we speak now, we’ve got analysts working through the data.

DUBNER: There was a research project that I believe you were behind, having to do with messages being written on a jail cell wall? That was you. Correct?

MURRAY: Yeah, this is my favorite experiment of all time. So if you go into a police cell in the U.K. — I don’t know, you’ve probably both experienced it — where you sit and you look at a blank wall for anything up to 24 hours, there might be a stencil on the wall that says, “If you want to get off drugs, phone this number.” But pretty much nothing happens. So we thought — well, captive audience. And we put a load of graffiti on loads of cell walls, and this was growth-mindset graffiti. Positive messaging purporting to be from an offender who’d previously been in there and graffitied on the walls.

DUBNER: Things like, “Next time, try to burgle a home in a different neighborhood.”

MURRAY: Would you like me to read you some of the graffiti?

DUBNER: Love it.

MURRAY: “People think that what they do makes them who they are. It doesn’t. We all do stuff because we got angry, because we felt good, or we didn’t think. I was pretty good at blaming others. But when I was here, I realized that this time it’s on me. What I do is my choice and I chose something else. When I left, I did things differently and it took effort. I won’t lie. But it paid off. Think, what’s the one thing you can do to make sure you don’t end up back here? Remember, and when a door opens, do it. It’s never too late.”

DUBNER: Now, just to be clear, these are fake sentiments, correct?

MURRAY: No, they’re not. We actually spoke to a reformed offender and we very much got the principles from him. So there was real integrity in it. And then we stuck it on the walls.

DUBNER: Now we know of this research because it was featured several years ago in a Freakonomics Radio episode. It sounded like a brilliant idea. How well did it work?

MURRAY: So we looked at all the people who’d been through all the cells with the graffiti, and all the people who’d been through the cells without the graffiti, and it had no effect whatsoever.

DUBNER: Ah! So I certainly hope we didn’t jinx it. Why do you think it didn’t work when you had such strong expectations?

MURRAY: It might be that we got the messaging wrong. It might be that growth mindset on a cell wall just doesn’t work. I’m not replicating it, but someone else is going to replicate it with different messages, so we’ll see if it has an effect.

DUBNER: Alex Murray of the Metropolitan Police, thanks so much for joining us tonight.

*      *      *

DUBNER: Our next guest is a physician as well as a professor of medicine and health care management at the University of Pennsylvania. He’s also the director of the Penn Medicine Nudge Unit. He’s done many studies and interventions around patient compliance, physician behavior, and systems operations in health care. Would you please welcome Mitesh Patel. Mitesh, nice to see you.

Mitesh PATEL: Thank you for having me on.

DUBNER: How common is a nudge unit inside a hospital system?

PATEL: Not that common yet, but we hope to change that.

DUBNER: Were you the first?

PATEL: We were the first behavioral design team embedded within the operations of a health system in the world.

DUBNER: No offense, but in my experience doctors, in particular, are not very fond of being told how to do what they do. So I’m really curious how you pulled it off strategically.

PATEL: Yeah, it was a significant challenge. Many doctors, including myself, have been through a decade of training, and medicine has become more specialized and so people are experts in their fields. I think there are two key things that helped. One is to reveal to clinicians that they themselves are being nudged and they’re just not even aware of it. The design of the electronic health record is pushing you in a direction and sometimes you’re taking a lot of extra steps to do that. And the other is to engage them. The way we started off our Nudge Unit was actually to host a crowdsourcing challenge where clinicians and other stakeholders could submit ideas. And we actually got 225 ideas in two weeks from clinicians.

DUBNER: So give us a very quick rundown — we’ll drill down — of some of the interventions you’ve done.

PATEL: You know, one of the first things that preceded the Nudge Unit — and was actually the impetus for building it — was changing generic prescribing rates. And we were able to move the needle significantly, from 75 percent to 98 percent, almost overnight.

DUBNER: Just by switching the default?

PATEL: What happened was a rogue IT person who was implementing something else around prescribing actually noticed this and said, “I’m just going to put a checkbox here, and if they don’t check that box, the prescription is going to go to the pharmacy as generic.” And a week or so later, the health system got a phone call from our largest insurer, who said, “You just went from last place to first place in generic prescribing. Instead of penalizing you, we’re going to give you a bonus.” And the first thing everyone said was, “This is not possible. We’ve been last for years.” And then we realized what had happened: one hour of work resulted in $32 million of savings over the course of two years.
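As an illustration of the opt-out default Patel describes, here is a minimal, hypothetical sketch: unless the prescriber actively ticks a “dispense as written” box, the order goes to the pharmacy as the generic equivalent. The type and field names are invented for this example, not the health system’s actual EHR schema.

```python
# Hypothetical sketch of a generic-by-default prescribing rule.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrescriptionOrder:
    brand_name: str
    generic_equivalent: Optional[str]     # None if no generic exists
    dispense_as_written: bool = False     # the checkbox: unticked by default

def drug_to_dispense(order: PrescriptionOrder) -> str:
    # Default path: substitute the generic unless the prescriber opted out
    # or there is no generic equivalent.
    if order.dispense_as_written or order.generic_equivalent is None:
        return order.brand_name
    return order.generic_equivalent

print(drug_to_dispense(PrescriptionOrder("Lipitor", "atorvastatin")))   # atorvastatin
print(drug_to_dispense(PrescriptionOrder("Lipitor", "atorvastatin",
                                         dispense_as_written=True)))    # Lipitor
```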

DUBNER: Unbelievable. So if there was that much money to be saved, so easily, why hadn’t this been done before?

PATEL: In the U.S., and I think in other countries around the world, the way that doctors make decisions changed in the last decade. It used to be all of this was done on prescription pads and over the phone, and so we didn’t have insight into what was going on, nor could we change it. But now 90 percent of doctors use electronic health records. And so most of the effort has been to get the electronic health record system set up, to get doctors using them, and there hasn’t been much testing.

So there are lots of good ideas that are locked into one department or one hospital and don’t get spread to other places. And there are lots of bad ideas that are implemented and never taken away. Our approach is to take a systematic way and test these things so we can scale the ones that work and turn off the ones that don’t.

DUBNER: I understand you also changed the default on the number of opioids that are typically issued after surgery, let’s say, yes?

PATEL: Yes, so when you come into the emergency department and you have an ankle sprain or you just got a tooth pulled or another injury, there’s good evidence to show that the larger the number of pills you get, the more likely you are to become addicted. And so we found that just by changing the default from 30 pills to 10 pills, it cut unnecessary opiate prescribing in half.

DUBNER: And did you find that 10 is actually an optimal number? Should it perhaps be even lower? Do you know?

PATEL: So there are actually guidelines around this that recommend that you should get three to five days of opioids, which is about 10 pills. But the great thing about using defaults is it doesn’t force you to make a decision. Clinicians can override that.

DUBNER: Mitesh, I understand there’s also an intervention you’ve done on cardiac aftercare, yes? Let’s say I come in. I have a heart attack. I’m treated. I’m alive and relatively well. I leave the hospital. Then what happens typically?

PATEL: So typically when someone comes in and has a heart attack, we know that exercise is good for them. There’s actually a structured program called cardiac rehabilitation, or cardiac rehab. It’s a 12-week program. You go in for two, three, or more sessions, and you do exercise and get advice from a cardiologist. It’s like having a free gym membership with a cardiologist available for consultation. There’s essentially no harm to it. Everyone should get it. Our cardiac rehab referral rate was 15 percent. Meaning, if 100 patients come into the hospital each week with a heart attack, 85 of them go out the door without ever being told that this exists, let alone that insurance covers it.

And so we worked with them to redesign this. And of course, we had to figure out what the problem was. And we found out it was a manual process. And the burden was put on the cardiologist. On a busy day of rounds, they had to identify who is eligible for cardiac rehab and fill out a form with 15 different fields: name, date of birth, medical record number — things that already exist in the electronic health record.

So we spent some time talking to cardiologists and testing things for three months. And what we did is we used the electronic health record to automatically identify patients who had a heart attack. Turns out that’s easy. They’ve had a stent placed or they’re on certain medications. We notify other care-team members, not the clinician on rounds. And when they arrive at that patient’s room, this form is automatically signed. And then we close the loop with the patient, which had never been done before.

DUBNER: Okay. You’re still dealing with the fact that they have to want to participate in that physical activity. Do you have any nudges to help with that?

PATEL: Yes, so this alone — just referring them — increased the referral rate from 15 percent to 85 percent, and the attendance rate from 33 percent to 55 percent. So, huge lift there. We have a bunch of interventions around getting people to be more physically active, and we are actually testing them in combination with these referral patterns. Most recently we’ve been working on gamification, and found that it increased physical activity.

ZALTZMAN: And I think you can take it further. We’ve seen the NFL basically damages people’s health. You can set up a league of something that improves people’s health — a National Cardiac Recovery League, and have a draft, and all the teams trying to sign up the illest patients and things. Look, I think it could be the next breakthrough sport for America.

PATEL: Well, I like where you’re going. We’ll have to test it in different settings and see how it works out.

DUBNER: One problem that the medical community has come to recognize lately, or acknowledge, is the issue of too much medical care in the form of tests and procedures and medications. Are you doing anything about that?

PATEL: Yes. So we have a great example from palliative cancer patients. These are patients who are at the end of life. They may have days to weeks left to live. And oftentimes they can get radiation therapy to shrink the tumor. Sometimes it’s pinching on a nerve. Other times it’s in an uncomfortable place and radiation therapy can make the end of their lives easier. In order to make sure that the radiation hits the tumor correctly, we’ll often do C.T. scans or X-rays.

Now, when we know we’re going to cure the patient, or we’re trying to cure the patient — the patient’s going to live many years — we need to do an X-ray or C.T. scan possibly every day, because we don’t want to hit normal healthy tissue. But there’s a lot of evidence — even national guidelines — saying that, at the end of life, we shouldn’t be exposing patients to unnecessary imaging. It costs them a lot of money and it doesn’t lead to any benefit, because the patients are going to die in a few weeks or months. What we found in our health system is that 70 percent of these patients at the end of life were getting daily imaging. Meaning, if you had 14 doses of radiation, you got 14 X-rays or C.T. scans. And oftentimes insurers were no longer covering this, and patients may get hit with some of the bill for that.

DUBNER: So let me ask you this. Medical history is full of stories about successful treatments being discovered, with strong evidence, and then not enacted for months, years, decades. And I’m curious what the rate of adoption is like for these interventions that to me sound sensible, doable, cheap, executable, etcetera. Are hospitals around the country rushing to — if not emulate you by setting up their own Nudge Units — at least reading these papers and trying to do things like switch defaults for generics and so on?

PATEL: Yeah. Our goal is to hopefully spread this around the country. And so we’re doing two things to really scale this. One is we host an annual nudges-in-health-care symposium, bringing together health systems across the world who want to implement nudges or nudge units. And the second is we’ve launched a Nudge Collaborative. It’s an IT platform where people can share insights from what’s worked and what hasn’t. We’ve worked on more than 50 projects now. So we have a bunch of successes but also failures and we don’t want those to be replicated. We want the good ones to be replicated. But it also provides a management tool. We’ve learned a lot from how we manage all the crowdsourcing ideas that come in and what moves forward and what doesn’t. And this is a platform that will help health systems who want to do this do it by hitting the ground running.

DUBNER: I have to say, I’m so glad you’re out there doing this work — it gives me hope and it’s exciting. Thank you so much for joining us tonight, Mitesh Patel.

PATEL: Thank you.

DUBNER: And now before we finish, if he hasn’t run off, I’d like to bring back the nudger-in-chief, David Halpern of the Behavioral Insights Team.

So David, we’ve heard about a lot of successful nudges tonight. Some failures as well. But I’ve got to say, many of them seem, in retrospect, quite simple. Even predictable, at least after the fact. Sort of low-hanging fruit. And I’m curious to know if you have larger aspirations. Maybe to consider how behavioral science can address core economic issues like income inequality and market failures, or issues like social mobility. Do you think about how to nudge the nudgers, the people in position to make bigger, broader applications?

HALPERN: So yes, to all those things. On economic policy especially, one of the deep ironies of behavioral economics is that it hasn’t been applied much to economics. I mean, imagine a labor market where you actually knew where there’s a really good place to work — not just what you’ll be paid, but how good the boss is, what your progression opportunities are, and so on. Most individuals applying don’t know the answers to those questions. And in the classical model, you sort of assume it’ll work its way out.

So we think there are lots of issues like that where the economy doesn’t work, and it takes you to a lot of policy tools. How can you help policymakers see their own failures? Behavioral government we call it. And one of the things is often the more senior you become, the more overconfident you become. And we’ve long thought that that should be part of the story.

DUBNER: In your book, Inside the Nudge Unit, you mentioned a few ideas that foundered on political grounds. One of these concerned illegal immigration — as you put it, “breaking the implicit collusion between rogue employers and illegal employees.” Can you tell us a bit about that idea?

HALPERN: Yeah, I mean that’s come up in a number of countries, not only the U.K., where you get actually quite nasty, abusive practices. You know, if you’re an employer, you’re supposed to do loads of checks. That information often should exist. But actually, when someone — they have what’s called a national insurance number, it’d be similar in many countries — why can’t you make it easy and basically do the check for the employer? So we were pursuing that, to deal with what is otherwise quite a nasty cycle, if you think about it. Because you get someone who doesn’t actually have a right to work, and they are then quite abused in the labor market. So how do you break out of it? So yes, we certainly looked at some of these policy issues.

DUBNER: So considering that immigration policy was a strong driver of Brexit ideology, I am curious, David, if you stay up nights thinking about what might have been had those immigration ideas of yours been given a shot?

HALPERN: No, I think, is your answer. I think there were a lot of other issues going on around Brexit. My view — there are some quite profound issues about what makes a nation hold together, in relation to its cohesion: how you feel, your sentiment towards your fellow citizens, beyond the immediate day-to-day of it. That is quite a consequential thing.

DUBNER: There is a topic you’ve written about in the past — social trust. Does nudge theory, if we want to call it that — is it able to be successfully applied to building social trust, do you think?

HALPERN: That’s one of the great questions. In fact, one of the big political areas in the early days, at the same time as the Behavioral Insights Team was being created, was then known as Big Society. How do you stimulate, how do you build, essentially, social capital, social support? Now, it fell by the wayside on the politics for various reasons, but the fundamentals of it remain absolutely key. So yes, if we take that simple trust question, “Do you think other people can be trusted?” It’s a better predictor of national economic growth rates than levels of human capital. It’s phenomenally important, how you feel about your fellow citizen. So beyond the shouting and the drama of what’s happening in Parliament, do you look over your shoulder if you don’t trust your neighbor?

DUBNER: But do you feel that you have tools in your arsenal that have been earned over the last ten years that can apply to this?

HALPERN: I mean, there’s an intervention we’ve done linked to a big program called National Citizen Service, which gives young people a brief experience of volunteering and community service — loosely based on an AmeriCorps-type model. Within it, we did an intervention testing different icebreakers, and one of the nice results came from essentially getting young people to talk about the ways in which they are similar. Because it’s designed to mix people from different social backgrounds.

It doesn’t really seem to move the dial for kids from more middle-class backgrounds, but for the kids who come from quite often disadvantaged backgrounds — whose life experience may well be that they actually can’t trust people — that icebreaker helps to move social trust. But it’s an area we’re definitely very active on. What else can you do in order to build that kind of connection?

An example in the U.K., and probably to some extent in the U.S., is that people who go to university end up with advantages: they earn more. But one of the things is they also seem to end up with much more social trust and social capital. One of the factors is that you go away to university. It actually matters that you break your social bonds from home and mix with a whole load of other people from different backgrounds, and you find you generally are able to trust them. You learn the habits of trust, which you then carry with you for a lifetime. It is an incidental thing, but it’s so important to society. It matters.

We also tried to look at issues around conflict — we’re interested in doing a lot more on that — where you can literally end up at war, or with the re-ignition of combat. Well, if human behavior is important, it’s not just about whether you pay your taxes on time, important though that is, but about whether you will end up trying to kill your neighbor. You know, these things really matter. Typically, two wars start a year — why can’t we use these same kinds of approaches to see whether you can make it less likely that conflict will reignite in the future, or occur in the first place?

DUBNER: I think that’s a brilliant point and probably a great spot to conclude. I’d love to check back in with you all in a year or three, assuming you still have a government and a sovereign nation of any sort. In the meantime, thanks to you, David Halpern, for appearing on our show tonight. Thanks also to Mitesh Patel, Alex Murray, and Sabrina Cohen-Hatton, and to the inimitable Andy Zaltzman. And thanks especially to all of you for listening this week and every week to Freakonomics Radio. Goodnight.

*      *      *

Freakonomics Radio is produced by Stitcher and Dubner Productions. This episode was produced by Matt Hickey, with lots of help from Alison Craiglow, Greg Rippin, Harry Huggins, and Stephanie Tam; special thanks also to Katy King, Jason Ellar, Ed Flahavan, Lauren Yates, and Chris Wright from the Behavioral Insights Team. Our staff also includes Zack Lapinski, Corinne Wallace, and Daphne Chen. Our intern is Ben Shaiman. Our theme song is “Mr. Fortune,” by the Hitchhikers; this live version was performed by Luis Guerra and the Freakonomics Radio Orchestra; all the other music was composed by Luis. You can subscribe to Freakonomics Radio on Apple Podcasts, Stitcher, or wherever you get your podcasts.

Sources

  • Sabrina Cohen-Hatton, psychologist, author, and chief fire officer of West Sussex Fire and Rescue Services.
  • David Halpern, psychologist, and chief executive of the British government’s Behavioral Insights Team.
  • Alex Murray, commander for specialist crime with the Metropolitan Police, and founder of the Society of Evidence-Based Policing.
  • Mitesh Patel, physician, professor of medicine and health care management at the University of Pennsylvania, and director of the Penn Medicine Nudge Unit.
  • Andy Zaltzman, comedian, author, sports commentator, and host of The Bugle podcast.
