
Episode Transcript

I first met Steve Levitt, an economist and co-author of Freakonomics, about 20 years ago now. I was a graduate student in economics at the University of Chicago — a place where ideas live but fun comes to die, as I've heard it said. Steve was teaching an evening course there on how to come up with ideas.

I remember one of the first ideas I pitched to him — by the way, Steve always gave us pizza during that class, which was pretty great. Anyway, one of the first ideas I pitched was about the link between Viagra and divorce. You see, at that time in graduate school, we were learning about the economics of the household — work by economists like Gary Becker, a Nobel laureate who had pioneered our understanding of human and social capital. Becker was also one of the first economists to study the economics of divorce. So, with Becker’s work fresh in my mind, and sitting in Steve’s class, listening to the unconventional ways he thought about problems, it made me wonder how the introduction of Viagra might’ve affected marriages. You see, on one hand, Viagra, which was and is primarily used by men, could make marriages stronger, if it allowed couples to engage in a way that they previously couldn’t. (And I’m being careful about my language here because my daughter sometimes listens to this show, not out of choice, but because it’s a requirement for her to get T.V. time in our house.)

But: you can also see how Viagra might lead to divorce, if it increased the “outside options” — by that I mean, other partners — that men perceived they would have because of their newfound ability. Steve loved the idea and encouraged me to look into it. So, I did. I basically looked for any sharp changes in divorce among older men in states where Viagra was adopted more rapidly right after its introduction in 1998. I sort of found some evidence of that, but it wasn’t convincing enough for me. So, I let it go. Years later, though, I revisited the slightly different question of whether Viagra use led to increases in STDs among older men. And I found that it did. Now, looking back almost two decades later, I can point to that moment, that class with Steve, as the start of my interest in the Freakonomics of medicine.

Last week on the show, I interviewed Steve Levitt, along with another economist, Emily Oster, about how they come up with ideas and why some of our projects fail. It was almost a full-circle moment. But this week, the tables are turned.

From the Freakonomics Radio Network, this is Freakonomics, M.D.

I’m Bapu Jena. I’m a medical doctor and an economist. And this is a show where I dissect fascinating questions at the sweet spot between health and economics. Steve Levitt recently interviewed me on his podcast, People I (Mostly) Admire. Today, we’re sharing our conversation here, on Freakonomics, M.D. You’ll hear a little more about how Steve and I met, why I wanted to start my own podcast, and why Steve has vowed never to do academic medical research.

LEVITT: I got burned so badly that I’ve never gone back.

*      *      *

LEVITT: A couple of months ago, I read a story in The New York Times about how some researchers had used the timing of birthdays to learn about the transmission of Covid and I thought, wow, what a brilliant idea. And then I read a few paragraphs further and there was a quote from you, Bapu. And I said, “Of course, it had to be Bapu. No one else in medicine thinks this way.” So that’s a pretty high compliment. 

JENA: I appreciate it. Thank you.

LEVITT: So, partly why you think differently is that you are not just a doctor, but you also got a Ph.D. in economics from the University of Chicago. And I think back in the day you even took one of my classes, didn’t you?

JENA: Yeah. You may not remember but you were on my doctoral committee. It was a long time ago. 

LEVITT: I was on your committee, but it was fake. I didn’t really advise you. They just needed a third person to sign. I would love to take credit for some of your success, but you and I will both agree, I probably didn’t help you very much when you were a grad student.

JENA: It’s the thought that counts.

LEVITT: So, before we get into all the interesting stuff about you and how you think about the world, I think it's really important that we describe the standard way of thinking in medicine. And in medicine — as is true really in all the scientific disciplines — randomized experiments are seen as the gold standard, right?

JENA: Yeah, that's exactly right. Medicine is one of these areas where you start with this biological hypothesis about what drug might work and how it might work. But ultimately, when you move from the bench to the bedside, there are so many things that can go wrong. These are high-stakes decisions, and what you think might work may not actually end up working. And that's why we do these randomized trials — to give doctors a good sense of what works and what doesn't work.

LEVITT: Okay, and that makes perfect sense, because in medicine what we care about is causality, and we want to know: if you take this drug, will it make you feel better? And randomized experiments are the gold standard of causality because you're able to hold everything else constant, vary one thing — do I give someone a placebo or a drug? — and then see whether the people who got the drug do better. But randomized trials aren't the only research strategy used in medicine. There's also this thing called epidemiology.

JENA: Yeah. Why don't we have randomized trials for every single decision we make? They're costly to do. It takes time to recruit patients. Not all patients want to participate in a trial of an experimental drug. It sometimes takes years to get this sort of information. So, that's why, as a field, as a discipline, medicine has had to rely on other approaches to trying to understand causal questions like: does drug A do a better job of treating a disease than drug B? And so, there's this field of — I would call it clinical epidemiology, because epidemiology does a lot of different things. It studies the spread of disease. But this field of clinical epidemiology is really designed around trying to use real-world, typically historical or observational, data to understand whether one treatment works better than another. And in that data, you have information on sometimes tens of thousands or millions of patients who receive one drug versus another, and then a doctor or an epidemiologist, or maybe both, will work with that data to try to figure out whether that drug is better than some other drug.

LEVITT: Okay. And just to be clear, what differs between this and a randomized experiment is that nobody's randomized who's getting the drug. Some people are getting one drug. Some people are getting another drug, and those are choices that they've made, or their doctors have made — not a randomization.

JENA: Yeah. Let's suppose you have a population of patients with cancer, and you want to know whether or not a brand-new, fancy, expensive oncology drug results in improved survival. One way you could do that is you could do a randomized trial where you randomize some patients to receiving that drug and other patients to receiving either placebo or standard of care, and then measure the outcomes, in this case survival. The other way you could do that is look at historical data on patients who received that drug and compare their outcomes to patients who didn't receive that drug. Now, what happens if you find that patients who receive that fancy new drug have worse survival? They're more likely to be dead in the next year. If you look at that data, you might conclude that the drug actually harms patients. But what if the patients who are offered that drug are the ones who have worse cancer, who have passed through all other therapies up to that point and are really left with one option, which is this new experimental drug? In that case you would reach the wrong conclusion. It might be the case that the drug actually works, but because you're not comparing like to like — you're comparing patients who are not similar in the treatment arm, which is this drug, and the control arm, which is a different drug — you're going to come to the wrong conclusion.
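
[Editor's aside: to see this confounding-by-indication problem in numbers, here is a minimal simulation — entirely hypothetical parameters, not data from any real trial — in which a drug that truly reduces mortality looks harmful in a naive observational comparison, because sicker patients are more likely to receive it.]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical severity score: sicker patients are more likely
# to be offered the new drug (confounding by indication).
severity = rng.uniform(0, 1, n)
gets_drug = rng.uniform(0, 1, n) < severity

# Assume the drug truly HELPS: it cuts each patient's one-year
# mortality risk by 10 percentage points.
true_effect = -0.10
death_risk = 0.2 + 0.6 * severity + true_effect * gets_drug
died = rng.uniform(0, 1, n) < death_risk

# Naive observational comparison: mortality among treated vs. untreated.
naive_diff = died[gets_drug].mean() - died[~gets_drug].mean()
print(f"Naive treated-minus-untreated mortality gap: {naive_diff:+.3f}")
# Roughly +0.10: the beneficial drug LOOKS harmful, because the
# treated group was sicker to begin with.
```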

LEVITT: Okay, exactly. So, what you're saying is in clinical epidemiology, what you're looking at are correlations between "did you take the good drug" and "did you die." But if all other things aren't held constant, like they are in a randomized experiment, then you can come to the wrong conclusions. And of course, epidemiologists are not stupid, and they understand this. And they try to do the best they can to control for other factors, but they're also limited by what data they have available or what data they think to collect. But you're always left, at least, I'm always left, a little bit uncertain, maybe a little bit — well, I'm often very skeptical at the end of the day, when, as a consumer of the newspaper, I read about these epidemiological studies and I'm told to do one thing or another.

JENA: Wait, are you telling me that eating peanuts at age six doesn’t cause dementia at age 66? You don’t believe that? 

LEVITT: Uh.

JENA: I got to retract my new paper, then. Okay. 

LEVITT: Okay. Economics and medicine have something really important in common, namely that both disciplines care about the answers to many questions where it’s hard to do randomized experiments. So, I’m not allowed to induce massive unemployment in some cities for my research to see what the effect of unemployment is. Or even in my own studies, I’m really interested in the effect of prisons on crime. And there’s no way in the world I’m ever going to convince a random set of states to let 20 percent of the prisoners out in that state and maybe another state will lock up another 20 percent of prisoners. It’s just not going to happen. Totally impossible. And so, what economists have put an enormous amount of effort and thought into is developing models that use this non-experimental data, everyday data, but that might plausibly have a causal interpretation because we look for special settings that mimic a true randomization and we call it a natural experiment. I actually prefer the name “accidental experiment” because I think it’s a better description of what happens. And that’s really what you do, but you do it in medicine. 

JENA: Yeah, correlation is not the same thing as causation. That statement is endemic in the way medical students are taught, and that's good and bad. It's good because I think it introduces a healthy dose of skepticism in anybody who's training to be a doctor about how to interpret a new study that shows there's a link between red wine and whatever outcome. Now, the problem with that, though, is that it goes so far as to make doctors think, or at least the doctors I know think, that really the only way that you can get at questions of causation is a randomized trial. And it goes so far that if you're writing an article for a medical journal in which you're using a natural experiment — or, to borrow your words, an accidental experiment — any language that reflects causation — "this study shows that X caused Y" or "the effect of X on Y was this" — any language like that is typically removed from the manuscript, because there's this belief that you can't use observational data to reach causal conclusions. And I think that's a challenge that our field has to overcome.

LEVITT: Let me just try to explain how I explain natural experiments to people. A randomized experiment — a real randomized trial — has two key features. The first one is that the treatment group gets treated differently than the control group. Okay. We all know that. The second key feature is that, except for the treatment, we would have expected the treatment group and the control group to have the same outcomes on average. Okay? So, you start from that and then you say: a good natural experiment just tries to mimic those exact features. We go out in the real world and we try to find settings where otherwise identical people, essentially by chance, get treated very differently. So, if we can do that, we've more or less mimicked a randomized trial without actually running a formal experiment. You've had a lot more success convincing your colleagues that natural experiments have merit. So, what words do you use when you try to describe what a natural experiment is?

JENA: Steve, I don't know that "success" is the word I'd use. You haven't seen all the studies that I've tried to get published. You've only seen the successful ones. That's a different bias. You know, the way I describe it, it's very similar. But when you see a paper in a clinical journal that presents the results of a randomized trial, most often the first table, the first exhibit in that study, shows the characteristics of patients who received the treatment and the control. And you can look at that table and you can see that the characteristics almost always are nearly identical between the two groups. And that gives doctors, I think, a lot of faith that these two groups are balanced, and we expect the outcomes in those two groups to be otherwise similar, were it not for the fact that one group is going to receive a treatment and another group is going to receive a control. And therefore, any difference that we end up observing between those two groups in the outcomes is attributable to the receipt of that treatment, as opposed to underlying differences in characteristics between those two groups. What I've tried to do in most of the natural-experiment studies that I publish in medical journals is, you know, essentially replicate that table one. So, say I'm looking at what happens to patients who are hospitalized during the dates of a national cardiology conference, when all the cardiologists are out of town, and I want to know whether or not care is different and whether that difference in care leads to differences in outcomes — let's say mortality one year later. The first question someone should rightly ask is, "Well, Bapu, how do you know that the patients who go to the hospital when cardiologists are out of town just aren't different from the patients who go to the hospital when cardiologists are in town?" And I start by saying, "Well, do you have a reason for why they would be different? Do people choose to have heart attacks?" You've got to fight that criticism. And so, the way I fight it is just to show it in table one: look, the patients who are hospitalized with a heart attack during the dates of the American Heart Association meeting are identical to patients who were hospitalized with heart attacks during the surrounding weeks of the year.
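
[Editor's aside: here is a rough sketch of that kind of "table one" balance check, on a hypothetical dataset of heart-attack admissions. The file, the column names, and the `during_meeting` flag are all illustrative, not from the actual study.]

```python
import pandas as pd
from scipy import stats

# Hypothetical admissions data; `during_meeting` flags hospitalizations
# that occurred during the cardiology-conference dates.
df = pd.read_csv("admissions.csv")  # illustrative file name

characteristics = ["age", "female", "diabetes", "prior_mi"]
rows = []
for col in characteristics:
    meeting = df.loc[df["during_meeting"] == 1, col]
    non_meeting = df.loc[df["during_meeting"] == 0, col]
    _, p = stats.ttest_ind(meeting, non_meeting, equal_var=False)
    rows.append({"characteristic": col,
                 "meeting_mean": meeting.mean(),
                 "non_meeting_mean": non_meeting.mean(),
                 "p_value": p})

# If admissions during the meeting are as-good-as-random, the two
# columns of means should be nearly identical -- just like table one
# of a randomized trial.
print(pd.DataFrame(rows).round(3))
```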

LEVITT: And so, what happens to those folks who do show up at the cardiologist's office only to find out the cardiologists are all at the national convention?

JENA: Oh, what do you think happens? They actually do better. I'm sure you've seen this problem in economics — that was a controversial study. It had this very unexpected finding, and I thought to myself, wow, let me replicate it, because there'd be a lot of interest in showing that this finding could be replicated. We had the hardest time getting that study published. And you know what the journals would say to me? They said, "You already published this finding, so there's nothing novel here." And that actually has made me think a lot about this replication crisis. There's no incentive to replicate findings, at least for these types of findings. And so now I write a study and I leave it to the world to replicate it.

LEVITT: Yeah, it’s so troubling. I’m sure it’s true in medicine. It’s been extremely true in psychology and in economics that so much of what is done could not be replicated, randomized trial or otherwise, but the incentives within these professions and at the journals are totally screwed up when it comes to trying to sort out the truth. You think that all these journals, that all these researchers should be after the truth, but really, it’s a very different game that’s being played, a game of how do I get published and how do I get citations for my journal? Look, I don’t have the answer, but I think if outsiders knew how academic publishing worked, they would be discouraged by it. 

JENA: Steven, if I knew how academic publishing worked, I would be encouraged by it. 

LEVITT: I once made a foray into trying to do natural experiments in medicine and I got burned so badly that I’ve never gone back. I started, I think, in 2004 when you were still a grad student, and I think it’s quite possible that you might’ve been my inspiration. And I’ll give you credit for what’s going to turn out to be maybe the most unsuccessful project I ever worked on in my life.

JENA: I’ll take credit for the lost year of your life. 

LEVITT: Okay. So, the idea is really simple. When a patient shows up at the emergency room, he or she has no idea which doctors are going to be working at that time. And if some doctors are better than others, then as a patient, I can get lucky or unlucky depending on which doctors are working. So, I didn't assign any patient at random to any doctor, but the set of patients who show up are essentially random, and so I can compare outcomes across time periods. Say, on Tuesday, July 29, you're working that shift. And let's say that on Tuesday, July 22, I'm working that shift. Then if the patients who come in on July 22 have better outcomes than on July 29, I'm a better doctor than you. That would be the kind of inference one would draw from these data. Okay. Does that setup make sense to you?

JENA: Yeah, absolutely. Yeah.
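
[Editor's aside: to make the logic concrete, here's a minimal sketch of that shift-level comparison, under the stated assumption that patients arrive without knowing who is on duty. The file and column names are hypothetical.]

```python
import pandas as pd

# Hypothetical E.R. records: one row per patient visit, with the shift
# date, the doctor staffing that shift, and a 30-day survival indicator.
visits = pd.read_csv("er_visits.csv")  # illustrative file name

# Because arrivals don't depend on who happens to be working, average
# outcomes across ALL patients on a shift -- not just the ones a given
# doctor personally treated -- can be attributed to that shift's staffing.
shift_outcomes = (visits
                  .groupby(["shift_date", "doctor_on_shift"])
                  .agg(n_patients=("survived_30d", "size"),
                       survival_rate=("survived_30d", "mean"))
                  .reset_index())

# Rank doctors by their average shift-level survival rate.
by_doctor = (shift_outcomes
             .groupby("doctor_on_shift")["survival_rate"]
             .mean()
             .sort_values(ascending=False))
print(by_doctor)
```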

LEVITT: Okay, so, I was working with a hospital. They had an E.R. The E.R. was staffed by three or four doctors at a time. So, I was able to get years' worth of data and outcomes on the patients. And I was able to find some patterns in those data. And I was super excited and I went to present at what's called grand rounds, where the doctors sit around and they listen to someone who's supposed to know something explain a new concept to them. And it was unreal. The reactions I got were confusion, anger, and a consensus in the audience that they had no idea what I was talking about. So, then I showed the results to my dad, who's an eminent medical researcher. He probably has 500 publications. He's got lifetime achievement awards in medicine and he's a super smart guy. And I explained my method, and he listens to the whole thing, and he says, "Wait. So, you're telling me that to figure out whether I'm a good doctor, you're going to look at the outcomes of all the patients who come in on my shift, not just the ones that I take care of?" Okay. And that's exactly right. You understand. It was this moment of triumph. I had finally explained it. And my dad says, "That's the dumbest thing I've ever heard of in my entire life. If I didn't treat the patient, it's not my fault if he has a bad outcome. That doesn't even make sense." So, that was the last natural-experiment research paper that I ever tried to do in medicine. I never got it published. So, hats off to you that you've managed to rattle off a string of these papers. And I'm not sure we've really made it clear, but is it fair to say that when you started doing these natural experiments in medicine, you were literally the only person publishing studies like that?

JENA: I would say this is what I'm known for. There is a Canadian physician named Donald Redelmeier, whom I would think of as the pioneer in this area. He did a lot of work with Amos Tversky and Daniel Kahneman years ago on behavioral economics. But besides that, there's not a ton of people who really specialize in natural experiments.

LEVITT: How many medical researchers publishing in journals are there, would you guess? 10,000? 20,000? 30,000?

JENA: Probably more than that? Yeah — 

LEVITT: Okay. 50,000. Okay. Okay. Let’s say there are tens of thousands of researchers. And there are, I think, by what you just said — let’s be generous. Maybe there’s a dozen people like you who are looking for natural experiments. If you look at economics, I would say something like a third or a fourth of the papers that are published in top journals are taking advantage of natural experiments. It sure seems to me like medicine’s making a huge mistake by not focusing more on natural experiments. 

JENA: Yeah. So, there's a ton of research that's what you might call health policy. So, what's the effect of Medicaid expansion in one state on some outcome? There's much, much less work that uses natural experiments to try to answer questions that might be of clinical importance to a doctor: what therapy to provide or not provide. To your point, it's the minority of observational studies in a medical journal that use these sorts of methods. Whereas in economics, you'd be raked over the coals if you didn't have a good natural experiment. Now, I think the challenge, though — and this is what I face — is that it's hard to find these natural experiments. I've gotten good at it, and this is the way my mind thinks nowadays, and I'm sure that's the way you think about the world. But it's hard to find natural experiments for every question that you might want to answer. And the view of a lot of medicine has been, well, let's just try to answer it anyway. Take, for example, red meat. There are tons of studies that look at the relationship between red meat consumption and any number of outcomes, mostly cardiovascular — hopelessly non-causal, right? There's no way you can look at those studies and think that they're causal. And I, for years, tried to think of situations where I could find a natural experiment around beef consumption — like looking at, for example, the mad cow disease outbreak in England, which was, like, 10 or 15 years ago. I looked to see whether or not there were changes in cattle output and consumption in parts of the world, and maybe that could then be used to show changes in cardiovascular outcomes. That was hopeless. I didn't find anything there. But it's hard to answer a question that people really want to know the answer to: should I eat more or less red meat?

LEVITT: Yeah. Look, there's a million questions I'd love to answer that I haven't been able to answer with a natural experiment, either because there wasn't one out there or, more likely, I just wasn't clever enough to find it. But it sure seems to me that the medical profession is making a systematic error: it puts so much effort into randomized experiments and so much faith in this epidemiology, but misses out on the middle ground. That just seems like an obvious mistake to me, and one that somebody, somehow — the profession should change, and you're the guy to do it.

JENA: I think that’s a good name for a podcast, “I’m the guy to do it.” Yeah. 

Well, that’s not exactly what we ended up naming this show! But anyway, coming up after the break: more of this special crossover episode of People I (Mostly) Admire, with host Steve Levitt interviewing yours truly, Bapu Jena. We’ll be right back.

*      *      *

Welcome back. Today we’re airing an episode of People I (Mostly) Admire. It’s part of the Freakonomics Radio Network and it’s an interview show hosted by the economist and Freakonomics co-author Steve Levitt. And the guest this time? Well … it’s me!

LEVITT: Now Covid has obviously been one of the most important events of our lifetimes. And it’s disrupted life. It’s taken many lives. Are you at all surprised that this far into it, we still know so little about transmission? 

JENA: Yeah. This is an area where I think economists, physicians, and epidemiologists could have gotten together and figured out how to answer these questions in creative ways, outside of a randomized trial. I remember getting boxes from Amazon and my wife saying, "Just keep it in the garage for three days." And she's a doctor. And I was like, "You know what? Let's keep it in there for four days, because I don't know what's growing on this box." If you had detailed, identifiable data — you know where people live, you know whether or not Amazon packages are delivered to them — you could look and, I'm sure, construct a good natural experiment to understand whether or not that sort of package handling — touching boxes that other people, many other people, have touched — is associated with higher rates of Covid-19. I'm sure there's a good, clever way to design a natural experiment to answer that question. But the data was always there. It's just hard to put together and get people behind it.

LEVITT: Yeah, I think that's true. So, that reminds me of another, closely related topic that I want to get your opinion on, and it relates to medical ethics, because I have found that to be an area where economic thinking and medical thinking lead to very different conclusions. And so, I'm interested in hearing what someone who's been trained in both areas thinks. So, let me take a very specific case, which is Covid vaccines. Before Covid vaccines were approved for widespread use, the manufacturers ran randomized clinical trials in which volunteers were randomly assigned either to be vaccinated or not. And these needed to be big trials to get enough data — maybe 30,000 people. And it took a long time to enroll the people. And then we just had to wait it out to see which of those 30,000 people were going to get Covid and what the outcomes were going to be of those who got Covid. Those trials took about four months, but we could have cut that four months if we had done what's called a human-challenge trial, and that's where you vaccinate people and then you expose them to Covid intentionally to see how they do. And the value of a challenge trial is that you need a much smaller number of volunteers and you don't have to wait around for people to get exposed to Covid in their everyday lives. But medical ethicists say it would be immoral because these trials expose volunteers in the study to risk. But as an economist, that drives me so crazy. So as an economist and a doctor, where do you come down on that kind of issue?

JENA: I usually keep my mouth shut. No. Where I come down on it is three words: "willingness to pay." I thought you were going to say, what if you offered these volunteers in these human-challenge trials a hundred thousand dollars to participate.

LEVITT: Oh, a million dollars — 

JENA: $10 million to participate. 

LEVITT: Yeah. Absolutely.

JENA: You could put up tens of millions of dollars and it would have been worth it. It would have been a bargain at that price. I would go even one step further. I mean, think about the architecture of clinical trials. Like, why does it take so long for clinical trials to get done? One of the reasons is that it takes so long to recruit patients. As medicine gets better and better, guess what? It gets harder and harder to recruit patients, because nobody wants to be in the treatment arm when the control arm, the standard of care, gets better and better. We've seen this in HIV; I'm sure it's present in other areas. And in that case, you think about the value of the information that is generated by a randomized trial. It allows people across the world to be treated differently. The number of life years, the value of those lives in the economic sense, is staggering. People have talked at length about whether or not we should pay clinical-trial participants to participate. As an economist, I would say, "Why not?" I certainly understand the ethical challenges that are involved, but I've got to think to myself, there's got to be a way to balance those ethical challenges. It can't be this kind of binary decision where compensation in any form, or an assessment of trade-offs between the person and the public, is off the table. We're making those sorts of trade-offs now, as we're thinking about public mandates for vaccines. So, it's not like society doesn't make those sorts of trade-offs and decisions in other aspects of our health. It just doesn't happen in clinical trials.

LEVITT: It seems to me totally obvious that with Covid we should be doing these challenge trials. And like you said, we should pay the volunteers a million dollars, $10 million. We should have lines out the door of people saying, "Please put me in this trial, give me the vaccine, and then expose me to Covid. I'm willing to do that." So, I had Doctor Slaoui on my show. He's the doctor who led Operation Warp Speed. And I asked him — I thought he would agree with me. I said, "Why not do human-challenge trials?" And his answer was, "Well, they wouldn't be any good, because you can only include people in those trials who aren't really at risk of being hurt by Covid." I said, "No, I want to do it on the sickest people. I'm sure there must be 80-year-old people who — maybe they've got cancer, maybe they're at high risk — want to get $10 million so their family can live well after they're gone. They'd love to be in that trial, even if there is a real chance that they'll die from it." He could not have disagreed with me more. And I was really surprised. I'm always surprised at how pervasive and how deep the ideas in medical ethics are that just collide completely with an economic way of thinking.

JENA: I agree with you completely. Take it to an extreme, an individual who is in the ICU, who is ventilated, meaning a machine is breathing for them, who is so sedated that they’re unlikely to be able to recover from that, like highly unlikely. You could imagine doing trials in that setting. Now, of course there’s obvious ethical issues around autonomy. That person wouldn’t be able to make that decision. And I certainly don’t want to dismiss those, but there’s got to be some gray area where that sort of thinking would make sense. It turns out we actually think like that in a lot of other ways. So, for example, we are not likely to transplant organs, which are a scarce resource, into people who have very limited life expectancy after organ transplant. So, clearly we’re rationing a health product. We clearly think very carefully about intensive care or aggressive measures for people who are at the end of life. Why is that? Because we’re making a trade-off. There is a chance that something would work, but we’re balancing societal needs, costs of care with the likelihood that they would benefit. So, it’s not like these sort of trade-offs don’t happen all over the place in clinical medicine. But for some reason in this area they’re walled off and I think it’s a limitation.

LEVITT: I think the reason is it comes back to the idea of doing no harm and the idea of actively going out and hurting a volunteer intentionally, even though it’s a volunteer, and even though it’s one person to save a hundred thousand lives, I think that flies in the face of what the medical profession feels like it’s their job to do. 

JENA: Yeah. I know this historian, and he told me the story about Hippocrates. Apparently, before Hippocrates said, "First, do no harm," he said, "Maximize social welfare." And that didn't go over so well. He had to go with the second best.

LEVITT: So, you’re starting a new podcast. Now, seriously, does the world really need more podcasts? I think we had exactly the right number of podcasts the day before I started this podcast. My podcast pushed us over the edge. So, what do we need your podcast for? 

JENA: Wow. So, the name actually — it's highly creative. It's Freakonomics, comma, M.D. Why fix something that's not broken? But, you know, the type of thing I'd like to do in this podcast is just introduce people to a different side of medicine. The part of economics that's always been most fascinating to me, and I suspect to you, is the ability to answer questions in this really creative but also rigorous way. Now, there are a lot of economic studies that are really creative. But when you think about what the implications of that idea are, it's hard to stretch out an implication that is really going to matter for someone's life. The beauty, I think, of Freakonomics, M.D., the podcast, is that it takes the elements of Freakonomics that I've always liked and marries them with something that is going to matter for people, which is their health and their wellbeing.

LEVITT: Up until now, there’s only been one project where Stephen Dubner and I allowed the use of the name Freakonomics without having day-to-day control of the operations, and that was the Freakonomics movie. And let me just say, it didn’t turn out exactly as we hoped it would. So, we’ve been cautious since then. And the fact that we’re allowing Bapu to use the Freakonomics name in his new podcast shows you just how much faith we have in him. My own personal favorite episode of Bapu’s podcast is called “Do As Doctors Say, Not As Doctors Do.” Thanks for listening. 

JENA: And thanks to Steve Levitt for that fun conversation — and to all of you for listening to this and every episode of Freakonomics, M.D. We really appreciate it. Let me know what you think about the show so far. My email is bapu@freakonomics.com. That’s B-A-P-U at freakonomics dot com. And, if you haven’t subscribed yet to People I (Mostly) Admire — you’re missing out! You can find both shows wherever you get your podcasts. Thanks again.

*      *      *

Freakonomics, M.D. and People I (Mostly) Admire are part of the Freakonomics Radio Network, which also includes Freakonomics Radio and No Stupid Questions. The shows are produced by Stitcher and Renbud Radio. You can find Freakonomics, M.D. on Twitter and Instagram at @drbapupod. Original music composed by Luis Guerra. This episode was produced by Morgan Levey and mixed by Jasmin Klinger, with help from Mary Diduch and Eleanor Osborne. The supervising producer was Tracey Samuelson. Our staff also includes Alison Craiglow, Greg Rippin, Emma Tyrrell, Lyric Bowditch, Jacob Clemente, and Stephen Dubner. If you like this show or any other show in the Freakonomics Radio Network, please recommend it to your family and friends. That’s the best way to support the podcasts you love. As always, thanks for listening.

JENA: Can I just make a suggestion? It should be Freakonomics M.D., Ph.D. I have to drop the mic with that, right there.
