Episode Transcript

SCIENTIST: Ladies and gentlemen, you’re about to see the results of a bold experiment in human intelligence.

So, I don’t exactly have the results of a “bold experiment in human intelligence.” But I do have a clip from the T.V. show It’s Always Sunny in Philadelphia. It’s from the episode where one of the main characters, Charlie, joins an experiment where he takes a pill he’s told will make him smarter. If you’ve ever watched the show, you’ll recall that Charlie — he isn’t that sharp. But after taking the pill, Charlie starts speaking with a British accent and comes up with his own experiments and inventions.

CHARLIE: We have the means, the understanding, the technology, to allow spiders to talk with cats!

Umm, spoiler alert — it didn’t actually work out.

SCIENTIST: Alas, a complete failure. Self-confidence instilled by a placebo intelligence pill did not, in fact, make the test subject smarter.

A “placebo intelligence pill.” So, in this experiment, the researchers gave a fake pill to Charlie that had no way to physiologically make him smarter. And yet —

TANKSY: The subject believed himself to be a mathematical wizard.

CHARLIE: plus nine equals box, all right? That’s where the cat goes.

You’ve probably heard about the placebo effect. The basic idea is this: someone who has a medical problem is given a pill they’re told may help them. But the pill doesn’t actually contain anything that would treat that problem — it could be just sugar. And strangely, that person gets better anyway, or at least thinks they get better. But here’s the twist: people who take an active drug can also experience the placebo effect.

From the Freakonomics Radio Network, this is Freakonomics, M.D. I’m Bapu Jena. I’m a medical doctor and I’m also an economist. Each episode, I dissect an interesting question at the sweet spot between health and economics. Today on the show: What explains the placebo effect?

Ted KAPTCHUK: It goes beyond thinking that you can think yourself well, and it goes beyond blaming patients for not getting well.

Can it really help patients get better? And if so, how can doctors use it?

*          *          *

I wanted to start today’s episode on the placebo effect with an unlikely expert.

Anup MALANI: Anup Malani, professor of law at the University of Chicago.

Anup is a lawyer and an economist by training, and he studies all sorts of questions, ranging from judicial behavior to infectious disease control to blockchain. So, what is a lawyer slash economist doing studying the placebo effect? A while back, when Anup was a grad student at the University of Chicago, he was trying to come up with a research project that would really impress his professors. And maybe even contribute something important to the world (I'll say, having been a student at Chicago myself, that may have been a second priority). Anyway, one day, when he was on his way to the airport —

MALANI: I don’t know why I was thinking about it. I was thinking about how clinical trials worked. And there were two different clinical trials that I had read about where people were given treatment but their probability of getting treatment was different across the two trials.

These trials were for the same drug.

MALANI: And then I started saying, well, what is the difference between folks that are in the treatment group of the first trial and the second trial? And the main difference was not the treatment they got, but what they expected they were going to get.

Anup is referring to what are known as placebo-controlled trials. It’s when patients in a clinical trial are randomly selected to receive either an active treatment or a placebo pill. In lots of trials — let’s say, a trial for a cancer drug — there aren’t placebos, because it would be unethical to do that. In those cases, instead of a placebo, some of the patients in the study are given an existing drug we know works.

But in a placebo-controlled trial, generally, patients don’t know whether they got the treatment or the placebo. We do this so everyone in the study has the same expectations going in, since even just the expectation of getting a treatment might cause patients to feel better or do better. And that’s why clinical trials have placebos in them. If doctors are going to prescribe a drug, they want to know what the actual effect of the drug is, independent of any improvements due to the patient’s expectations.

MALANI: When you take medication, there's a change in your body, hopefully an improvement. Pain goes away. Your cholesterol levels decline. Your physiology changes. But it is also possible that when you take that medication, your expectations about what's going to happen to your body change. And as a result of those expectations, your body actually changes.

And the placebo effect doesn’t just happen with medications, or even medical products.

MALANI: Anything that might affect your body could be mediated by your expectations about its effect. Could be exercise equipment, watching programs, lots of different things.

Now, let's get back to that story I was telling you about Anup's graduate school project. In lots of studies, you're equally likely to get the drug or the placebo. It's a coin flip. But the studies Anup was thinking about on his way to the airport — the split between the placebo and the drug wasn't 50-50. In some studies, people were more likely to get the drug versus the placebo, or vice versa.

MALANI: You sometimes find trials of the same drugs with different probabilities of treatment, because maybe they’re looking at different dosages or they’re looking at some other medications, for example.

Anup realized that those differences — a higher likelihood of getting either an active treatment or a placebo — might tell him something.

MALANI: You can use the fact that there are different treatment shares to identify, to measure, to quantify the degree of this thing called a placebo effect.

Anup had an idea how to do this. He looked at data from a bunch of clinical trials for drugs that targeted stomach ulcers — and that had different treatment shares. In other words, different likelihoods that a patient in a trial would receive an active treatment rather than a placebo. In some trials, patients knew they had a 50 percent chance of getting the active treatment; in others, they knew they would all get the active treatment because the trial didn't include a placebo control group. Anup wondered: would the patients who knew they had a higher chance of getting the drug be more optimistic about getting treated? And if so, would that change how effective the treatment was for the people who got the drug?

MALANI: In fact, turns out to be yes, if you look at anti-ulcer medications. And what you find is that if you go from a trial where 50 percent of the people get treated to, say, one where everybody’s getting treated, outcomes in the treatment group improved by anywhere from two to about 15 percent.

Let me spend a little time on this, since Anup's comments may seem counterintuitive. If everyone in a study gets the drug that's being tested, how can there be a placebo effect? So, just to be clear: the effectiveness of any drug actually includes two things. The first is based on the actual biology or chemistry behind how the drug works in the body. The second is based on our expectations. If we expect a drug to work, it may work better. That's the idea of a placebo effect. It's all about our expectations, not about the actual biology or chemistry behind the drug.

Drug manufacturers and the F.D.A. know that any changes a drug may produce in the real world are the result of the physiologic changes the drug triggers, PLUS the inherent placebo effect. What Anup is measuring is the degree to which a person's expectations change a drug's effectiveness. He's getting at the placebo effect. If you're in a clinical trial and you received the treatment, Anup's analysis shows that the effect of that treatment is larger when you knew — with 100 percent certainty — that you were getting the drug, not a placebo.

Anup did the same analysis with statins — drugs that reduce cholesterol. Again, he looked at data from two types of trials: one where 50 percent of the participants were going to receive the treatment, and another group of trials where everyone got the treatment. And what did he find?

MALANI: There’s 25 percent greater reduction of cholesterol.

In other words, participants in the trials where everyone got the treatment saw an additional 25 percent reduction in their cholesterol after treatment. That's compared to people who got the drug but were enrolled in trials where, up front, they had a 50 percent chance of getting treatment and a 50 percent chance of getting placebo. So, that additional effect was driven by their higher expectation of getting the drug, instead of a placebo.

The benefit of running this analysis with something like a cholesterol medication or an anti-ulcer medication is that the impact of the drugs was easily measurable. Cholesterol either goes down, or it doesn't; an ulcer gets better, or it doesn't. And cholesterol levels, in particular, are objective. Cholesterol is measured in the blood, so this is not a story about how patients feel after receiving a drug. This is an objective improvement.

MALANI: So, it’s not about, you know, self-reporting of something where your expectations might just psychologically change your beliefs. It’s actually physiological changes that we can objectively measure. And even those improve when you increase your probability of getting treatment.

Self-reporting is an issue with a lot of placebo literature.

MALANI: A lot of the outcomes that are measured are subjective, not objectively measured. In fact, there’s a very famous paper in 2001 in The New England Journal of Medicine by Hróbjartsson and Gøtzsche that cast doubt on the placebo effects by saying, “Look, the only studies that find placebo effects are studies that look at subjective outcomes like pain. And so that could just be a subjective response and not a real phenomenon.” And so, I wanted to look at something that was objective. And so, that was one of the goals of that paper, both a neat way to quickly find placebo effects with existing data, and also to make sure that they have an impact on objectively measured outcomes.

Anup’s paper was published in 2006 in The Journal of Political Economy. People had already known about the placebo effect by that time, but I think what Anup did was cleverly show the important role and mechanism of expectations in driving treatment effects, by relying on variation across clinical trials in patients’ beliefs about treatment.

This wasn’t all Anup did though. He wanted to answer another question about the placebo effect.

MALANI: What can trigger placebo effects?

We’ll try to answer that question — and just to set expectations, it’s not an easy one — right after this.

*          *          *

We just talked about how our expectations about a treatment might impact how effective that treatment actually is. You may note the similarity to something else that we are exposed to probably hundreds of times a day: advertisements. Have you ever watched a Domino’s commercial and craved pizza? Maybe you remember it tasting better than it actually did.

Law professor, economist, and part-time placebo researcher Anup Malani and his co-researchers, Emir Kamenica and Robert Naclerio, picked up on this similarity and wanted to see if there was a way to trigger the placebo effect in more of a real-world context using — wait for it — advertising.

PATIENT 1: This is big. A chance to live longer.

PATIENT 2: I can do more to lower my A1C.

NARRATOR: Has asthma pushed you into a smaller life?

So, back in Anup’s lab —

MALANI: What we did is we took individuals and we gave them Claritin.

Claritin is a popular allergy-relief medication. In a controlled lab setting, Anup and his colleagues induced allergic responses in study participants to look at the placebo effect in Claritin. Don’t worry, what they did wasn’t harmful.

MALANI: We gave them what’s called a histamine challenge. We took some histamine. We put it on their skin. Typically for an individual, regardless of who you are, that’s going to generate a little red welt. It’s a short-lived welt. And you measure the size of the welt, and that tells you how big your histamine reaction was, your allergic reaction. You can do that for everybody.

So, all the participants in their study were given a little allergic reaction, and then, they were given Claritin too.

MALANI: And we checked to see how much the Claritin reduced the size of their allergic reaction to the histamine, the size of that red welt.

That was the basic setup. But they manipulated one thing.

MALANI: We said, “We’re going to give them information about how effective that Claritin is going to be, but we’re going to give them different information.”

After the patients took their Claritin, they all watched a movie — Shakespeare in Love, to be exact. Great choice. But during that movie, the participants were shown different advertisements. One group of participants didn’t see any ads related to allergy drugs. A second saw ads for Claritin. And a third group saw ads for Zyrtec, a Claritin competitor.

MALANI: Interestingly, the Zyrtec ads sometimes tell you that Claritin is not effective, that it takes too long to work.

So, what did the researchers expect to find?

MALANI: That the Claritin ads may cause an improvement in the efficacy of Claritin. And we didn’t really have a view on Zyrtec. We didn’t know what that was going to do.

And what did they actually find?

MALANI: The surprising result was there was no actual difference between the control group and the Claritin ads group.

Meaning, in both the Claritin-ad group and the no-ads group, the welts got smaller, but by the same amount. As for the people who saw the ads for Zyrtec —

MALANI: You actually saw worse outcomes.

Remember: everyone in the trial, regardless of what kinds of ads they saw, had taken Claritin.

MALANI: So, the actual pharmacological efficacy Claritin had, it was substantially diminished by looking at the Zyrtec ad, which, as I mentioned, said Claritin doesn't work.

In other words, the researchers documented what’s called the nocebo effect. It’s a cousin to the placebo effect. Where placebos relate to positive changes, nocebos trigger negative changes.

MALANI: I tell you something’s not going to work, and it doesn’t work.

Anup and his colleagues were really surprised that the negative Zyrtec ads made Claritin less effective, meaning the allergic welts on people's arms didn't shrink by as much when they got Claritin. So, they did the study again, just in case. The setup was the same, except there was no control group of people who didn't watch ads. They published their findings in 2013 in the Proceedings of the National Academy of Sciences, or P.N.A.S.

MALANI: And we found that, in fact, again, people that got Zyrtec ads didn’t see as large an efficacy of Claritin as people that got the Claritin ads, confirming this idea that something about the Zyrtec ads — and we suspect that it’s the negative statements, but I’m not 100 percent sure — actually caused the efficacy of Claritin to fall relative to watching Claritin ads.

Just to note: Anup’s findings were only really present in people who don’t have allergies to begin with. They didn’t see the effect in people who have allergies and take allergy medications regularly. Still, what does this finding mean? Well, let’s go back to the original question Anup wanted to answer: what exactly triggers a placebo — or in this case, a nocebo — effect?

MALANI: It raises the possibility that advertisements with negative statements about other companies’ products actually have negative impacts on those other companies’ products. And so, you could imagine that one plausible regulation would be, “Hey, you can say positive things about your drug, but you can’t say negative things about other people’s drugs.” And it gets to this very deep question of what do we do about placebo effects? Because if your positive expectations lead to positive outcomes, it raises the question “Well, can I tell you false positive things to lead to actual positive outcomes? Is that okay?” Um, that’s tough, right?

The thorniness of Anup's questions here is what's challenging not just about the placebo and nocebo effects, but about what they mean in the real world. Even in Anup's study, the researchers manipulated what their subjects would see; they tried to affect participants' expectations of a drug's efficacy in as close to a real-world setting as possible. The researchers surveyed the participants about their expectations before and after they took the drug, but how can we be really sure that people's expectations were changed? And can we be 100 percent certain that it was that shift in expectations alone, and nothing else, that led to the effects that they observed? What if the treatments led to some shifts in behavior that the researchers just didn't see? What if there's something else going on entirely?

It’s this fuzziness around a patient’s expectations that’s led another researcher to think about placebos a bit differently than most.

KAPTCHUK: What I think a placebo is, is the effects of the rituals, symbols, and interactions on a patient in a therapeutic encounter. That’s it.

That’s Ted Kaptchuk. He’s a professor of medicine at Harvard Medical School and directs the placebo studies program at Beth Israel Deaconess Medical Center in Boston. Now, if you recall, a common definition of the placebo effect focuses on how expectations shape the efficacy of a drug.

KAPTCHUK: I don’t use the word “expectation.” I think it’s a kind of psychological word that actually confuses the situation. I think the evidence is weak. And most of my colleagues in the placebo world don’t like me because of that. That’s okay. First of all, I don’t know where expectation exists. It’s not quantifiable. It changes every moment, moment to moment. It has the idea of a self-fulfilling prophecy. I don’t believe you can think yourself to better health.

Not to mention, Ted says it’s really hard to take research results about expectations imposed on study participants in lab settings out into the real world. This can be particularly true for people with hard-to-treat conditions like chronic pain, people for whom placebo effects might be most useful, according to Ted. Anup understands the concerns about weighing a drug’s expected efficacy too heavily, especially in the context of drug regulation.

MALANI: There's this worry (I don't know if it's warranted, but it's out there, and it doesn't seem unreasonable) that the placebo effect is fickle. Your expectations about a drug might change, and that means the efficacy of the drug might change. And so, we can't be sure, once we approve the drug, that it will always be effective, right? If people stop believing in the drug, it might become ineffective. And the F.D.A., you know, it's not a system that says, "Okay, we check for efficacy every few years to make sure everything's effective." You know, once you're approved for efficacy, you always are approved for efficacy. So, the fickleness makes you a little bit cautious.

And Anup’s also worried that companies could abuse the placebo effect for financial gain.

MALANI: You can imagine that there’s a lot of worry that drug companies will try to market the drug in a deceptive manner just to generate those placebo effects. And that’s something that we worry about because it might more broadly just alter trust in medicine.

In clinical trials of drugs, the F.D.A. accepts that the placebo effect, to some degree, is embedded into the efficacy of the drug to begin with. But it’s hard to fully tease out this effect, which is why Ted’s point is interesting. For him, the placebo effect occurs because of something other than what you think might happen when you take the drug. He thinks there’s something else going on, something deep within our bodies, something that we aren’t aware of.

KAPTCHUK: And it goes beyond thinking that you can think yourself well, and it goes beyond blaming patients for not getting well. I think you can explain the placebo effects by non-conscious processes that are deeply embedded in the nervous system.

Ted started down this line of research after decades of treating patients with various types of placebos.

KAPTCHUK: Patients in randomized control trials are desperate. They’ve already failed many kinds of therapies. What we found was that patients didn’t expect to get better. They actually were in despair and worried about whether they were on placebo all the time. They really didn’t want to be on placebos.

Like Anup, Ted also has had a circuitous path to researching placebos. In the 1960s, he — well, he went into hiding for a bit.

KAPTCHUK: I was pretty involved in the ‘60s in ways that were, like, everything was a little excessive.

Long story, but he was worried he’d be called to testify against some people he’d gone to college with who’d been accused of some pretty serious crimes.

KAPTCHUK: People said to me, “Please disappear, Ted.”

And so, he did. An Asian civil rights group in San Francisco took him in and he stumbled into learning Chinese medicine, which led to studying alternative medicine at Harvard.

KAPTCHUK: I studied with the most fantastic teachers at that time at Harvard. And they said to me, “Ted, your job here is to help us find out whether any of these therapies are more than placebo. That’s your job.” And I’d say, “But exactly what does this placebo effect mean?” And they would say, “Oh, the placebo effect is the effect of an inert substance.” And I go, “Holy God, these are really wise people, but there’s some blind spot here.” They just told me an oxymoron, “the effect of something that has no effect.” And I realized that no one was paying attention to it. And I said, “I’m going to do that.”

The journeys our lives take, right? Fast forward a couple of decades later, and Ted has become one of the foremost experts on the placebo effect. He’s gotten N.I.H. grants and his research has been published in some of medicine’s top journals. Some of his work has been criticized, but the more his work gained recognition, the more he became concerned about the deception inherent in his placebo work — that the participants in his studies didn’t know whether they were receiving a placebo or not.

KAPTCHUK: The more I got to be an expert, the worse I felt, because I said, “Ted, this is all unethical to use. What can I do to make the placebo not be marginalized and worthless?”

So, he wondered: what if he just told the participants whether or not they were getting a placebo? Would placebos have any effect if people knew that’s what they were getting? And if so, what would that mean for medicine? And so, Ted started a new phase of experimentation, in which he did just that. The first paper in this line of research was published in 2010. Ted and his co-authors looked at patients with irritable bowel syndrome. Why I.B.S.?

KAPTCHUK: Just based on all the reading I had done, all the work I had done, and actually a major article I published demonstrated this, that placebo effects don't change objective pathophysiology. They don't cure malaria. They don't shrink a tumor. But they do change how you perceive symptoms. So, it relieves pain. It helps with pain of I.B.S. With that kind of research before me, I said, "I'm going to pick diseases that in double-blind studies have high placebo effects and that those diseases are defined by self-report, self-observation, meaning subjective outcomes. And that's where I think the pay dirt is."

Now, it sounds like this goes against some of the work that Anup did. Remember, Anup was able to show that the placebo effect existed with statins and anti-ulcer drugs by using clinical-trial data. And Anup chose those drugs because they could lead to objective changes. Anup also showed that an objective clinical measure of treatment response — the size of an allergic skin reaction — was affected by people’s expectations about whether the drug they received would be effective. For Ted, he believes that placebos are best for conditions that have subjective symptoms, like pain. It’s another wrinkle in the world of placebo studies that is tough to iron out.

But, back to Ted’s I.B.S. study. The participants were split into two groups — a group that was simply monitored, not given any treatment, and a group that was given a placebo — and they were told that’s what they were taking.

KAPTCHUK: It’s cellulose, like a sugar pill, and they had to sign that they knew they were taking — they may get a placebo, that they were in that arm.

Three weeks later, what did the researchers find?

KAPTCHUK: People who are on the open-label, honest placebo: their symptoms of I.B.S. significantly diminished, in a way that's hard to believe, compared to the no-treatment control.

Kaptchuk and his co-authors replicated this finding across other conditions, including cancer-related fatigue, low-back pain, menopausal-related hot flashes, and migraines.

KAPTCHUK: And we keep getting this consistent finding. This is still preliminary research, but this is an interesting possibility.

Ted and a colleague also recently published a study that compared the effect of a double-blind placebo trial — meaning neither the researchers nor the patients knew who would be receiving a placebo — to the results from a study where patients knew they were getting a placebo.

KAPTCHUK: And we found that there was no difference between open-label placebo and double-blind placebo. If we look at studies that have high placebo effects, the drug gets thrown out because the placebo effects are almost as big as the drug or the same. It tells us that we may be able to use open-label placebo for a lot of conditions where the placebo effect is biggest. And my contention is the placebo effect is biggest when we’re dealing with subjective complaints or functional disorders.

To Ted, this body of emerging work is a game-changer.

KAPTCHUK: It disrupted placebo theories. Our patients did not expect to get better.

Ted’s work is intriguing and raises a lot of questions — how can a placebo work if someone knows that they’re taking a fake pill? This is such a new area of research and to me it’s exciting, but there are caveats. For example, the studies tend to be small, and the mechanisms of action aren’t entirely clear. Other researchers have pointed out that the open-label placebos may lead people to think that they can replace traditional therapies, though even Ted says that’s not how they are supposed to work. And again, a lot of the research relies on patients self-reporting their feelings and symptoms.

KAPTCHUK: I never used the word "proven." It "has demonstrated so far" that there is something here that we need to understand better and examine critically.

But I’m not sure that Ted’s findings and his approach necessarily throw out Anup’s idea that a person’s expectations about a treatment can influence how well that treatment works.

Plus, Ted and others are still trying to figure out the neurological mechanism behind his findings. In other words, why is this happening? Or, as Anup asked earlier:

MALANI: What can trigger placebo effects?

Ted and others think it might have something to do with how our brains perceive symptoms.

KAPTCHUK: We know physiologically there’s no doubt that for chronic pain, much of it is related to the nerves paying attention too much to pain or interpreting normal sensations, like normal bowel sensations, as painful. The volume is turned up. Placebos turn the volume down. It’s the same mechanism that makes the symptoms and turns it down.

Anup finds this theory intriguing and, actually, not wholly incompatible with his definition of the placebo effect.

MALANI: When you experience pain, it’s not an outcome itself. Your body uses pain as a signal to tell you that you ought to do something. You touch a hot stove; you should pull your hand back, right? And the pain is your signal to your body. Move your hand. Trigger those muscles. And so, one of the things that your body does is it’s got a bunch of pain sensors, but it’s also forming expectations. Because it’s not smart to say, “All right, let the body touch a hot stove and then only pull it back.” It’s good to form expectations and say, “Hey, you shouldn’t touch that hot stove in the first place.” So, you’ve got these two inputs. One is a bottom-up sensory measurement, and the other one is a top-down, your brain doing prediction. And one of the interesting things is that your sensation of pain is, in some sense, a combination of these two things. It’s a combination of what you expect in terms of pain and what you actually experience in terms of pain. That’s a model that suggests expectations are different.

Again, Ted might say that this change in expectations doesn’t have anything to do with what’s causing you to perceive pain differently — whether based on your expectation or the actual sensory experience. But let’s take an example of what Anup is talking about here. Say he gave you a fake pain medication — but you don’t know it’s fake.

MALANI: Your expectations about the pain that you're going to get from touching the stove are going to be a little bit different. And what that means is that your self-reported pain, not the actual sensory input, but if you're asked on a rating of one to 10, "what is your pain?" it might actually be different, not because the pain actually changed, but because your prediction is the pain should be different.

Anup is also trying to untangle another theory about what might be at play with the mechanism behind the placebo effect: stress.

MALANI: If you look at placebo effects across a whole range of medicine, one of the things I noticed was that the outcomes where you observe placebo effects from medication, versus outcomes where you don't, are also the outcomes where you see stress effects versus outcomes where you don't see stress effects. That is to say, the physiological outcomes where your stress manifests tend to be also the outcomes where placebo effects seem to occur. And so, one theory about this is something like the following, which is: placebo effects are mediated by stress.

And so, what’s going on when you, for example, take Claritin for an allergic reaction, but then you see a Zyrtec ad, is that the Zyrtec ad, once it tells you that Claritin is not going to work, actually increases your stress. And that increase in stress causes inflammation, which is a stress outcome, which makes it seem as if Claritin didn’t work. And it can do that for a broad range of drugs. So, it’s quite possible that in fact the mechanism here is just stress. And so, all placebo effect is a trigger for stress and stress-mediated outcomes. So, we’re starting to make progress on these fronts. And I think this is where, you know, over the next 10 years, you’ll see a lot of research that’ll be quite interesting.

Ted and Anup may diverge on whether or not our thoughts and predictions about a treatment actually contribute to whether that treatment has any effect. But they do agree on something: placebos work without us having to really do anything. Even if the mechanism behind the placebo effect isn’t fully understood, that doesn’t mean it isn’t useful.

MALANI: We approve a lot of drugs where we don’t know the exact mechanism of action. And so, if there’s a drug that’s mediated by placebo, but it does have an effect, you know, why should that be any different?

So, what can doctors take from the world of placebo research into the exam room? Here’s Ted Kaptchuk again.

KAPTCHUK: There’s no legal and there’s no ethical impediment to adopting it, to treating patients who have failed several treatments, who have illnesses that are defined by self-observation, subjective complaints. I wouldn’t use it as a first line of therapy. Because people have this fear of: what does it mean if they get better from placebos? But I think that there’s a place for this that would be useful for how I see doctors practicing.

This is what’s so tricky about the placebo effect: what would you think if the chronic pain you’d been struggling with suddenly disappeared, or got a lot better, because you took a sugar pill? Does that mean the pain you felt wasn’t real? That there was nothing really wrong with you? I don’t think we have a clear answer on that. Not to mention, as we’ve explored throughout this episode, some of the work on the placebo effect is still kind of fuzzy — some of it involves deception, or relying on the statements that participants make to researchers. So, how reliable is this kind of work?

But it’s worth noting that some research suggests that up to 50 percent of doctors use placebos regularly — either actual sugar pills or drugs, like over-the-counter medications or vitamins, that they just don’t expect to work. That raises a lot of ethical questions. For example, in the same survey, 62 percent of doctors felt that using a placebo was ethically permissible, while only 5 percent explicitly described the pills as placebos to their patients.

I want to wrap up today with a recent, interesting example of how all these issues affect our understanding of vaccines in the Covid-19 pandemic. Ted and several colleagues recently published a new paper in the journal JAMA Network Open that analyzed a dozen studies that looked at adverse side effects of the Covid-19 vaccine, things like fatigue and headaches. They found that among tens of thousands of vaccine trial participants, a third of those who received a fake shot — a placebo — still reported feeling some sort of adverse symptom. How does that happen? For some therapies, it’s hard to know if the side effects are related to the drug — or in this case, the vaccine. But it’s important to get this right, because people are often hesitant to take medications. And it would be great to be able to know whether the side effects they are experiencing are related to the drug or something else.

KAPTCHUK: It would help the medical community, if there was much more mechanistic research available, because that makes people more confident. This is really meant to, from my perspective, give an additional tool to physicians who have patients that really defied their ability to help them successfully.

That’s it for today’s episode of Freakonomics, M.D. Thanks to Anup Malani and Ted Kaptchuk for sharing their expertise with us. Let me know what you think about this episode — or any of our episodes. My email is bapu@freakonomics.com. Thanks for listening.

*          *          *

Freakonomics, M.D. is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and People I (Mostly) Admire. All our shows are produced by Stitcher and Renbud Radio. You can find us on Twitter and Instagram at @drbapupod. Original music composed by Luis Guerra. This episode was produced by Mary Diduch and mixed by Eleanor Osborne. The supervising producer was Tracey Samuelson. We had research assistance from Emma Tyrrell and Alina Kulman. Our staff also includes Alison Craiglow, Greg Rippin, Rebecca Lee Douglas, Morgan Levey, Zack Lapinski, Ryan Kelley, Jasmin Klinger, Lyric Bowditch, Jacob Clemente, and Stephen Dubner. If you like this show, or any other show in the Freakonomics Radio Network, please recommend it to your family and friends. That’s the best way to support the podcasts you love. As always, thanks for listening.

*          *          *

JENA: Po p-uh-lar, Pop-YOU-lar. I made sure I got it — popular, popular.


Sources

  • Ted Kaptchuk, professor of medicine at Harvard Medical School and director of the placebo studies program at Beth Israel Deaconess Medical Center.
  • Anup Malani, professor of law at the University of Chicago.
