Bad Medicine, Part 1: The Story of 98.6 (Ep. 268 Rebroadcast)
Our latest Freakonomics Radio episode is called “Bad Medicine, Part 1: The Story of 98.6 (Rebroadcast).” (You can subscribe to the podcast at Apple Podcasts or elsewhere, get the RSS feed, or listen via the media player above.)
We tend to think of medicine as a science, but for most of human history it has been scientific-ish at best. In the first episode of a three-part series, we look at the grotesque mistakes produced by centuries of trial-and-error, and ask whether the new era of evidence-based medicine is the solution.
Below is a transcript of the episode, modified for your reading pleasure. For more information on the people and ideas in the episode, see the links at the bottom of this post. And you’ll find credits for the music in the episode noted within the transcript.
* * *
We’re taking advantage of August to replay you a special three-part series we did last year, called “Bad Medicine.” Today, Part 1: “The Story of 98.6,” and it starts right now …
We begin with the story of 98.6. You know the number, right? It’s one of the most famous numbers there is. Because the body temperature of a healthy human being is 98.6 degrees Fahrenheit. Isn’t it?
Anupam JENA: So I’m going to take your temperature, if you don’t mind. Just open your mouth and I’ll insert the thermometer.
Jackson BRAIDER: Ah!
The story of 98.6 …
Philip MACKOWIAK: … dates back to a physician by the name of Carl Wunderlich.
This was in the mid-1800s. Wunderlich was medical director of the hospital at Leipzig University. In that capacity, he …
Pretty big data set, yes? Twenty-five thousand patients! And what did Wunderlich determine?
MACKOWIAK: He determined that the average temperature of the normal human being was 98.6 degrees Fahrenheit or 37 degrees centigrade.
This is Philip Mackowiak, a professor of medicine and a medical historian at the University of Maryland.
MACKOWIAK: I’m an internist by trade and an infectious-disease specialist by subspecialty. So my bread and butter is fever.
There’s one more thing Mackowiak is …
MACKOWIAK: I am by nature a skeptic. It occurred to me very early in my career that this idea that 98.6 was normal — and then if you didn’t have a temperature of 98.6 you were somehow abnormal — just didn’t sit right.
Philip Mackowiak, you have to understand, cares a lot about what is called clinical thermometry. And if you care a lot about clinical thermometry, you care a lot about the thermometer that Carl Wunderlich used to establish 98.6.
MACKOWIAK: His thermometer is an amazing key to this story of 98.6.
So you can imagine how excited Mackowiak was when, on a tour of the weird and wonderful Mütter Museum in Philadelphia, the curator told him they had one of Wunderlich’s original thermometers.
MACKOWIAK: I said: “Good heavens, may I see it?” And she said: “Would you like to borrow it?” And I said: “Of course!” I was able to take this thermometer back to Baltimore and do a number of experiments.
The Wunderlich thermometer, Mackowiak realized, was not at all a typical thermometer.
MACKOWIAK: First of all, it was about a foot long, with a fairly thick stem. It registered almost two degrees centigrade higher than current thermometers or thermometers of that era.
Two degrees higher — centigrade? Uh oh!
MACKOWIAK: In addition to that, it is a non-registering thermometer, which means that it has to be read while it’s in place. So it would have been awkward to use.
Mackowiak noticed something else about the original Wunderlich research.
MACKOWIAK: Investigating further, it became apparent that he was not measuring temperatures either in the mouth or the rectum. He was measuring axillary, or armpit, temperatures, so that in many ways his results are not applicable to temperatures that are taken using current thermometers and current techniques.
As it turns out, the esteemed Dr. Carl Wunderlich …
MACKOWIAK: … was not the most careful investigator ever to come on the scene.
The more Mackowiak looked into the Wunderlich data, and how the story of 98.6 came to be, the more he wondered about its accuracy. So he set up his own body-temperature study. He recruited healthy volunteers, male and female, and took their temperature one to four times a day, around the clock, for about two days, using a well-calibrated digital thermometer in the patients’ mouths. What did he find?
MACKOWIAK: Of the total number of temperatures that were taken, only 8 percent were actually 98.6. If you believe that 98.6 is the normal temperature, then 92 percent of the time the temperature was abnormal. Obviously that’s not even reasonable.
In his study, Mackowiak found the actual “normal” temperature to be 98.2 degrees. Not a huge difference — and yet, the whole notion of a “normal” body temperature was looking more and more suspect. Why? A lot of reasons. Temperature varies from person to person, sometimes so much that one person’s normal would register as nearly feverish for another person.
MACKOWIAK: It’s almost like a fingerprint.
Temperature varies throughout the day — it’s roughly one degree higher at night than in the morning, sometimes even more. And an elevated temperature isn’t necessarily a sign of illness:
MACKOWIAK: In women it goes up with ovulation, during the menstrual cycle. The temperature goes up during vigorous exercise and this is not a fever.
And so, Mackowiak concluded …
MACKOWIAK: Looking at a rise in temperature as a reliable sign of infection or disease is inappropriately simplistic thinking.
Inappropriately simplistic thinking. It makes you wonder: if the medical establishment believed for so long in an inappropriately simplistic story about something as basic as normal body temperature — what else have they fallen for? What other mistakes have they made? I hope you’ve got some time; it’s a long list:
Jeremy GREENE: You take a sick person, slice open a vein, take a few pints of blood out of them …
JENA: Drilling holes into people’s skulls.
Vinay PRASAD: It was literally taking someone to hell and back.
Teresa WOODRUFF: It would cause a whole series of malformations and probably a lot of fetal death.
Keith WAILOO: The overuse of a mercury compound.
Evelynn HAMMONDS: The Tuskegee case.
WAILOO: Losing your teeth and having your gums bleed.
WOODRUFF: DES and thalidomide.
PRASAD: We use a cement.
WOODRUFF: Hormone replacement therapy.
WAILOO: The oxycontin and opioid problem.
MACKOWIAK: As a medical historian, it is patently obvious to me that future generations will look at what we’re doing today and ask themselves “What was Grandpa thinking of when he did that and believed that?” They’ll have to learn all over again that science is imperfect and to maintain a healthy skepticism about everything we believe and do in life in general, but in the medical profession in particular.
On today’s show: Part 1 of a special three-part series of Freakonomics Radio. We’ll be talking about the new era of personalized medicine; the growing reliance on evidence-based medicine; and especially — pay attention now, I’m going to use a technical term — we’ll be talking about bad medicine.
* * *
We have a lot of ground to cover in these three episodes: medicine’s greatest hits, the biggest failures, where we are now and where we’re headed. In the interest of not turning a three-part series about bad medicine into a twenty-part series, we’re not even going to touch adjacent fields like nutrition and psychiatry. Maybe another time. Let’s start, very briefly, at the beginning. Nearly 2,500 years ago, you had the Greek physician Hippocrates, who’s still called the “father of modern medicine.” You’ve heard, of course, of the Hippocratic Oath, the creed recited by new doctors.
And you know the Oath’s famous phrase — “First, do no harm.” Even though, as it turns out, that phrase isn’t actually included in the Oath. It came from something else Hippocrates wrote. Nor do many contemporary doctors recite the original Hippocratic Oath; there’s a modern version, written in 1964, by the prominent pharmacologist Louis Lasagna. The pledge begins: “I swear to fulfill, to the best of my ability and judgment, this covenant.” It’s a fascinating, inspiring document — and I think before we go too far, it’s worth hearing some of it …
Louis Lasagna adaptation of the Hippocratic Oath: “I will respect the hard-won scientific gains of those physicians in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow. I will remember that there is art to medicine as well as science, and that warmth, sympathy, and understanding may outweigh the surgeon’s knife or the chemist’s drug. I will not be ashamed to say ‘I know not,’ nor will I fail to call in my colleagues when the skills of another are needed for a patient’s recovery.
Above all, I must not play at God. I will remember that I do not treat a fever chart, a cancerous growth, but a sick human being, whose illness may affect the person’s family and economic stability. My responsibility includes these related problems, if I am to care adequately for the sick. I will prevent disease whenever I can, for prevention is preferable to cure. May I long experience the joy of healing those who seek my help.”
It’s comforting to think about the thoughtfulness, the nuance — the massive responsibility — that doctors pledge before they attempt to diagnose or heal us. How well has that pledge been upheld throughout medical history? We’ll talk to a variety of people about that today, starting with this gentleman.
JENA: My name is Anupam Jena. I’m a healthcare economist and physician at Harvard Medical School.
So Jena, as both a practitioner and an analytic researcher, is especially useful for our purposes. Because one of the themes we’ll hit today, several times, is that medicine, even though it’s scientific, or at least scientific-ish, hasn’t always been as empirical as you might think — and sometimes, not very empirical at all.
DUBNER: Here is an easy question: can you tell me please the history of medicine, or at least Western medicine in three or four minutes?
JENA: Let me first answer the meaning of life.
DUBNER: Is that going to be easier?
JENA: That’ll take about five to six minutes. How about three words: trial and error. If you think about medicine and how it has evolved — let’s just say in the last 100 to 200 years — the practices that at some point in history people thought were actually medically legitimate included drilling holes into people’s skulls and lobotomies. Even as late as the 1940s and 1950s, lobotomies were thought to actually have a treatment effect in patients with mental illness, be it schizophrenia or depression.
The practice of bloodletting, which is basically trying to remove the “bad humors” from the body was thought to be therapeutic in patients. Things like mercury, which we know is downright toxic, were used as treatments in the past. That was in a time and place when it was very difficult to get evidence. Not only that, there was probably a perception of the field that didn’t allow for the ability to question itself.
In the last 50 to 75 years, we’ve seen tremendous strides in the ability of the profession to constantly question itself.
DUBNER: It’s easy to get indignant over the idea of these treatments that turned out to be so wrong. But understanding wellness and illness is hard, obviously. When you look back at the history of medicine, do those interventions strike you as shameful — you can’t believe you’re in the profession that tried things like that — or is that just part of the trial-and-error process that you accept?
JENA: I certainly wouldn’t call it shameful. The only thing that’s shameful is when someone doesn’t believe that they have the potential for being wrong and they don’t have that desire to inquire further about whether something actually works or doesn’t work. But the idea of trying things, particularly trying things that have a really strong plausible pathophysiologic basis, there is nothing wrong with that. In fact, that’s what spurred scientific discovery and many of the treatments that we have now.
DUBNER: I have a broad question for you: the human body is an extraordinarily complex organism. Over history, doctors and others have learned a great deal about it. But if we consider the entire human body — from the medical perspective only, let’s leave out metaphysics and theology and what have you — how would you assess the share of the body and its functions that we truly understand and the share that we don’t really yet understand?
JENA: That’s a tough one. We’ve made a lot of headway, but to put a number on it … I would say maybe 30 percent, 40 percent that we don’t know.
GREENE: That’s a tough question for me to quantify.
I asked the same question of someone else.
GREENE: My name is Jeremy Greene. I’m a physician and a historian of medicine at Johns Hopkins.
So what’s Greene’s answer?
GREENE: There is a Rumsfeldian answer of the known knowns, known unknowns and unknown unknowns. A different way of answering that question would have to do with what the idea of relevant science of medicine is.
GREENE: For example, the Vesalian moment in the Renaissance: the opening of cadavers, and the describing and rendering of precise, three-dimensional chiaroscuro engravings of the human body, was an exciting area for research. This humanist process of opening up cadavers showed that the innards were not exactly what the ancient Greeks had described. As a historian, rather than giving you a fixed percent of where we are, I can give you a Zeno’s paradox: we keep on getting close to that finite moment and then reinvent a new, broader room for us to inhabit.
And that’s because there’s been a lot of progress in how we’re able to explore the human body.
JENA: There is the gross anatomy of the body, which you can see with your own eyes.
Anupam Jena again:
JENA: Then go a layer further and we’re now at the microscopic anatomy of the body. What do the cells of the body look like when they are diseased under a microscope?
And now …
JENA: Now go a layer further where you are now trying to understand things about the body that you can’t even see with the microscope. That’s at, let’s say, the level of the proteins in the cell, or even further down, the level of the DNA that encodes that protein.
GREENE: By the end of the 20th century, there’s a very strong genetic imaginary, which really helps to then fuel the excitement behind The Human Genome Project. It’s thought once we know the totality of the human genome, we’ll know all we need to know about bodies and health and disease.
Of course we already know a great deal. And, to be fair, for all the mistakes and oversights in medicine, there’s been extraordinary progress. What are some of medicine’s greatest hits?
HAMMONDS: I’m sure every historian of science medicine would give you a different set of hits.
That’s Evelynn Hammonds. She’s a professor of the history of science and African-American studies at Harvard.
HAMMONDS: The ones that I typically think about are the introduction of more efficacious therapeutics and medicines.
WAILOO: I would put something like the discovery of insulin right up there near the top.
That’s Keith Wailoo. He’s a Princeton historian who focuses on health policy.
WAILOO: It transformed diabetes from an acute disease into a disease that you live with. To me, that is much more the story of what medicine has been able to do in the 20th century.
JENA: The medicine that comes to my mind is statins. They’ve been shown to have benefit in preventing heart attacks and prolongation of life among people who have had heart attacks and the same thing for stroke and other forms of cardiovascular disease. But there are many, many drugs that are like that.
These are, truly, awesome interventions, for which we should all be thankful. One of the most remarkable developments over the past century and a half is the unbelievable gain in life expectancy: in the U.S., and elsewhere, it nearly doubled! It might be natural to ascribe that gain primarily to breakthrough medicines. But in fact a lot of it had to do with something else.
WAILOO: A lot of the advances in mortality and morbidity have come from, really, changes in the nature of social life. Infectious disease as the source of high mortality in the early 20th century began to drop long before penicillin and the antibacterials came along in the mid-century because of improvements in housing, sanitation, diet, and [the] tackling [of] urban problems that really created congestion and produced the circumstances that made things like tuberculosis the leading cause of mortality.
HAMMONDS: For example, if you think about the reversal of the Chicago River — it used to flow into Lake Michigan in the 19th century. People were dumping their waste into it, and every summer there would be hundreds of deaths of babies and children from infant diarrhea because the water was so contaminated. They reversed the flow of the river so it flowed downriver towards the Mississippi. That significantly improved the health of the people who lived there.
So we’ve got public-health improvements to thank. And yes, better therapeutics and medicines. Also: new and better ways of finding evidence.
PRASAD: The technology that really revolutionized how we think is the use of controlled experiments.
That’s Vinay Prasad. He’s an assistant professor of medicine at Oregon Health & Science University. Prasad treats cancer patients. But also:
PRASAD: The rest of my time I devote to research on health policy, on the decisions doctors make, on how doctors adopt new technologies, and when those things are rational and when they’re not rational.
Which means that Prasad is part of a relatively new, relatively small movement to make medical science a lot more scientific:
Even though medical science seemed to be based on evidence, Prasad says …
PRASAD: The reality was that what we were practicing was something called eminence-based medicine. It was where the preponderance of medical practice was driven by really charismatic and thoughtful leaders in medicine. Medical practice was based on bits and scraps of evidence, anecdotes, bias, preconceived notions, and probably a lot of psychological traps that we fall into. Largely from the time of Hippocrates and the Romans until maybe even the late Renaissance, medicine was unchanged.
It was the same for 1,000 years. Then something remarkable happened which was the first use of controlled clinical trials in medicine.
Coming up on Freakonomics Radio: how clinical trials began to change the game.
PRASAD: It really doesn’t matter that the smartest people believe something works. The only thing that really counts is what is the evidence you have that it works.
How some people didn’t have much of an appetite for actual evidence:
CHALMERS: There was a great deal of hostility to it from the medical establishment.
And, in a strange twist, how better science is pushing medicine not always forward, but sometimes backwards:
* * *
JENA: All right, take a deep breath through your mouth, in and out. Good, okay. One more.
Anupam Jena is an M.D. and a healthcare economist.
JENA: I’m going to lift up your shirt and listen to your heart.
In most developed countries, we tend to think of medicine as a rigorous science, and of our doctors as, if not infallible, at least reliable.
JENA: The typical patient probably does look to their doctor for answers and they value very highly what that opinion is.
But as we’ve been hearing, the history of medical science was often “eminence-based” rather than “evidence-based.” When did evidence really start to take over?
The movement is a result, Jena says, of at least two factors: Number one:
JENA: We’re doing more randomized controlled trials and that tells us more information about what works and doesn’t work.
And, number two:
JENA: Improvements in computer technology have now allowed us to study data in a way that we couldn’t have done 30 years ago.
There’s also been a movement to collect and synthesize all that research and all those data:
Lisa BERO: Our vision is to produce systematic reviews that summarize the best available research evidence to inform decisions about health.
That’s Lisa Bero, a pharmacologist by training, who studies the integrity of clinical and research evidence.
BERO: I’m also a co-chair of the Cochrane Collaboration.
The Cochrane Collaboration was founded in Britain but is now a global network. The “systematic reviews” they produce …
BERO: … are really the evidence base for evidence-based medicine. We’ve been a leader in so many ways in developing systematic reviews. We were the first to regularly update these reviews. We were one of the first to have post-publication peer review and a very strong conflict-of-interest policy. Actually, we were one of the first journals that was published only online.
Which means that whatever realm of medical science you’re working on, you can access nearly all the evidence on all the research ever conducted in that realm — constantly updated, available on the spot. Compare that to how things used to work — looking up some 5- or 10-year-old medical journal to find one relevant article that may well have been funded by the pharmaceutical company whose drug it happened to celebrate. How is Cochrane funded?
BERO: We are primarily funded by governments and nonprofits.
What about industry money?
BERO: We don’t take any money from industry to support any official Cochrane groups.
Which means, in theory at least, that the evidence assembled by the Cochrane Collaboration is pretty reliable evidence. As opposed to …
Iain CHALMERS: … a whole variety of things. Opinion. What the doctor had been taught 30 years previously in medical school. Tradition. What they had been told, or advised, to do by a drug-company representative who had visited them a week previously.
That is Sir Iain Chalmers, who co-founded the Cochrane Collaboration. He’s a former clinician who specialized in pregnancy, childbirth, and early infancy. He was a medical student in the early 1960s. When Chalmers observed his elders in practice, he was struck by how much variance there was from doctor to doctor.
CHALMERS: Some doctors — if a woman had a baby presenting by the breech — would do a Caesarean section, without any questions asked. Or they may take different views about the way the baby should be monitored during labor. Or the extent to which drugs should be used during pregnancy for one thing or another. Lots and lots of differences in practices. The list is as long as your arm. It’s madness, isn’t it?
When he became a doctor himself, Chalmers worked at a refugee camp in Gaza. And, as he discovered …
CHALMERS: … some of the things that I had learned at medical school were lethally wrong.
Like how you were supposed to treat a child with measles.
CHALMERS: I had been taught at medical school never to give antibiotics to a child with a viral infection, which measles is, because you might induce antibiotic resistance. But these children died really quite fast after getting pneumonia from bacterial infection, which comes on top of the viral infection of the measles. What was most frustrating was that it wasn’t until some years later that I found that, by the time I arrived in Gaza, there had already been six controlled trials comparing preventive antibiotics with nothing.
And those studies suggested that children with measles should be given antibiotics. But Chalmers had never seen those studies.
CHALMERS: I feel very sad that in retrospect I let my patients down.
This led Chalmers to embark on a years-long effort to systematically create a centralized body of research to help attack the incomplete, random, subjective way that too much medicine had been practiced for too long. He was joined by a number of people from around the world — many of whom, by the way, were more versed in statistics than in medicine.
CHALMERS: We embarked on these systematic reviews, about 100 of us. That resulted, at the end of the 1980s, in a massive, two-volume, one-and-a-half-thousand-page book. At the same time, we started to publish electronically.
And so the Cochrane Collaboration became the first organization to really systematize, compile, and evaluate the best evidence for given medical questions. You’d think this would have been met with universal praise. But, as with any guild whose inveterate wisdom is challenged, as unwise as that wisdom may be, the medical community wasn’t thrilled.
CHALMERS: There was a great deal of hostility to it from the medical establishment. In fact, I remember a colleague of mine was going off to speak to a local meeting of the British Medical Association, who had basically summoned him to give an account of evidence-based medicine. “What the hell did people who were statisticians and other non-doctors think they were doing messing around in territory which they shouldn’t be messing around in?” He asked me before he drove off, “What should I tell them?”
I said, “When patients start complaining about the objectives of evidence-based medicine, then one should take the criticism seriously. Up until then, assume that it’s basically vested interests playing their way out.”
It took a long while, but the Cochrane model of evidence-based medicine did become the new standard.
CHALMERS: I would say it wasn’t actually until this century. One way you can look at it is where there is death, there is hope. As a cohort of doctors who rubbished it moved into retirement and then death, the opposition disappeared.
PRASAD: That’s been the slower evolution.
That, again, is Vinay Prasad, from Oregon Health and Science University.
Those first controlled clinical trials came in the late 1940s.
PRASAD: From then until the end of the 1980s, we did use randomized trials, but they weren’t mandatory. They were optional.
One big benefit of a randomized trial is that you can plainly measure, in the data, the cause and effect of whatever treatment you’re looking at. This may sound obvious but it is remarkable how many medical treatments of the past were conducted without that evidence. Anupam Jena again:
JENA: Some of the biggest mistakes in the last century, let’s say from 1900 to 1950 — things like lobotomy used to treat mental illness, either depression or schizophrenia — those strike me as being some of the most horrific things that could be done to man without any really solid evidence base at all.
This is one of the trickiest things about practicing medicine day-to-day. Let’s say you’re a doctor, and a patient comes to see you with a persistent headache. You make a diagnosis, and you write a prescription. What happens next? In many cases, you have no idea. The feedback loop in medicine is often very, very sloppy. Did the patient get better? Maybe. They never came back. But maybe they went to a different doctor. Or maybe they died? If they did get better, was it because of the medicine you prescribed? Maybe.
Or maybe they didn’t even fill the scrip. Or maybe they did fill the scrip but stopped taking it because they got an upset stomach. Or maybe they did take the medicine and they did get better but … maybe they would have gotten better without the medicine? Like I said, you have no idea. But with a well-constructed randomized controlled trial, you can get an idea. Vinay Prasad again:
PRASAD: The moment that set us on different course was a study called CAST.
CAST stands for Cardiac Arrhythmia Suppression Trial. It was conducted in the late 1980s.
PRASAD: One of the things doctors were doing a lot for people after they had a heart attack was prescribing them an antiarrhythmic drug, that was supposed to keep those aberrant rhythms, those bad heart rhythms, at bay. That drug actually, in a carefully done randomized trial, turned out not to improve survival as we all had thought, but to worsen survival. That was a watershed moment where people realized that randomized trials can contradict even the best of what you believe.
It really doesn’t matter in medicine that the smartest people believe something works. The only thing that really counts, at the end of the day, is the evidence you have that it works.
The rise of randomized controlled trials led to a rise in what are called medical reversals. Vinay Prasad wrote the book on medical reversals, literally. It’s called Ending Medical Reversal.
PRASAD: What is a medical reversal? Doctors do something for decades, it’s widely believed to be beneficial, and then one day, a very seminal study — often better-designed, better-powered, better-controlled than the entirety of the pre-existing body of evidence — contradicts that practice. It isn’t just that it had side effects we didn’t think about. It was that the benefits that we had postulated, turned out to be not true or not present.
For instance …
PRASAD: In the 1990s we would recommend to postmenopausal women to start taking estrogen supplements, because we knew that women before menopause had lower rates of heart disease, and we thought that was because of a favorable effect of estrogen. And then in 2002, a carefully done randomized controlled trial found that, actually, it doesn’t decrease heart attacks and strokes; in fact, if anything, it increases them.
I asked Prasad what first got him interested in studying medical reversal.
PRASAD: I started to get interested in this even when I was a student, and I saw that there [were] some practices that had been contradicted just in the recent past but were still being done day in and day out in the hospital. The example that comes to mind is the stenting for stable coronary angina. A stent is a little foldable metal tube that goes in a blocked coronary artery and the doctors spring it open, and it opens up the blockage.
Stents are incredibly valuable for certain things. If you have a heart attack and there’s a blockage that just happened a few minutes ago, and the doctor goes in and opens that blockage up, we’re talking about a tremendous improvement in mortality, one of the best things we do in medicine. But stenting, like every other medical procedure, has something called indication drift, where it works great for a severe condition — but does it work just as well for a very mild condition?
Over the years, doctors have used stenting for something called stable angina. Stable angina is just the slow, incremental narrowing of the arteries that happens, sadly, to all of us as we get older. But the bulk of stenting was this indication drift. We thought it worked and made perfect sense. Then in 2007, a well-done study showed that it didn’t improve survival and didn’t decrease heart attacks. Even to this day, studies show that most patients who undergo this procedure believe it will do those things.
In fact, it’s been disproven for eight years.
And yet: while stenting for stable angina did decline, it didn’t disappear. The rate of inappropriate stenting, Prasad says, is still way too high. This obviously starts getting into doctors’ incentives — financial and otherwise — and we’ll get into those more in Parts 2 and 3 of this series. As Prasad makes clear, there’s a long, long list of medical treatments that simply don’t stand up to empirical scrutiny. Some common knee surgeries, for instance, where orthopedic surgeons take a tiny camera …
PRASAD: … take a tiny camera, make a tiny incision, and go in there, and actually debride and remove those scuffed and scraped knees. In fact, people felt a lot better. They had improved range of motion. There’s no argument there. But had it been studied against just taking ibuprofen, or maybe just doing some physical therapy? What if you studied it against making the patient believe that you were doing the surgery, but you don’t actually do it?
In fact, they’ve done those studies. Those are called “sham” studies. We give the appearance that we’re going to do this procedure. The only thing we omit is actually the debridement of the menisci and the cartilage. In fact, when you do it that way, you find that the entire procedure is a placebo effect. There’s another example where we use a cement that we inject into a broken vertebral bone. That, again, was found to be no better than injecting a saline solution in a sham procedure.
The cement itself cost $6,000, and I said, “At a minimum you can save yourself $6,000, and you don’t need to use the cement.”
DUBNER: What would be the incentives for me to do the study that might result in a reversal? Because we know how publishing works — whether it's in your field, in any academic field, or in the media as well — it's the juicy, sexy, new findings that get a lot of heat. It's the maintenance articles, or the reversal articles, that nobody wants to hear about. I would gather there are fairly weak incentives to do the studies that would result in reversals — which also makes me wonder if there is a woeful undersupply of such studies, which means there probably would be even more reversals than there are.
PRASAD: Yeah. That’s a fantastic question. One of the things that we did in the course of our research was we took a decade’s worth of articles [from] probably one of the most prestigious medical journals, The New England Journal of Medicine. There were about 1,300 articles that concerned things that doctors do. About 1,000 of those articles were something new that’s coming down the pipeline: the newest anticoagulant, the newest mechanical heart valve.
If you tested something new — exactly as you’d expect, 77 percent of those published manuscripts concluded that what’s newer is better. But we also discovered about 360 articles that tested something doctors were already doing. If you tested something doctors were already doing, 40 percent of the time, we found that it was contradicted: a reversal.
DUBNER: I’d love for you to talk about the various consequences of reversals, including perhaps a loss of faith in the medical system generally.
PRASAD: If you find out something you were doing for decades is wrong, you harmed a lot of people. You subjected many people to something ineffective, potentially harmful, certainly costly, and it didn’t work. The second harm, we say, is this lag-time harm. Doctors, we’re like a battleship. We don’t turn on a dime. We continue to do it for a few years after the reversal. The third is loss of trust in the medical system. We’ve seen it in the last decade, particularly with our shifting recommendations for mammography and for prostate cancer screening.
People come to the doctor and they say, “You guys can’t get your story straight. What’s going on?” It’s a tremendous problem. I’m afraid that we are making people feel like there’s nothing that the doctor does that’s really trustworthy. I’m afraid that that’s the deepest problem that we’re faced with, this loss of trust.
DUBNER: Okay, so how do you not throw out the baby with the bathwater? What are some solutions to a practice of medicine and medical research that results in fewer reversals?
PRASAD: That is a million-dollar question. One is medical education. We have a medical education where for two years, students are trained in the basic science of the body. Only in the latter years, the third and fourth year of medical school, are students trained in the epidemiology of medical science, evidence-based medicine, in thinking not just how does something work, but what’s the data that it does work? I’ve argued that needs to be flipped on its head. That the root, the basic science of medical school is evidence-based medicine.
It’s approaching a clinical question knowing what data to seek, and how to answer that in a very honest way. That’s one. The next category is regulation. This is where you get into, “What is the FDA’s role, and what does the FDA do?” Many people in the community hope that products that are approved by the FDA are both safe and efficacious for what they do. But we were faced with a problem in the ’80s and ’90s that we had never faced before, which was the HIV/AIDS epidemic. Advocates rightly said that we need a way to get drugs to patients faster, maybe even accepting a little bit more uncertainty.
I think that was right, and that’s still right for many conditions that are very dire, for which few other treatment options exist, and which sometimes have very low incidence, so it’s very hard to do those studies because very few people have the disease. But what’s happened is that mechanism has been extrapolated to conditions that are not dire, that have very good survival, that don’t have few options but many options, and that many people do have. We’ve had, again, a slippery slope for what qualifies for this accelerated approval.
There [are] ways in which regulation can be adjusted. Then, the last thing is the ethic of practicing physicians. We have to have an ethic where when we offer something to someone, and there’s uncertainty, we should be very clear about communicating uncertainty. It’s a tragedy today that no matter what you think of stenting for stable coronary artery disease, that so many people who are having it done believe something that is clearly not true, that it lowers the rate of heart attacks and death.
That’s just factually not true, and the fact that many people believe that speaks to the fact that, as doctors, we allow them to believe it.
DUBNER: Let me ask you one last question: having spoken to you for a bit, I have a pretty good sense of what has prevented medicine in the past from being more scientific or more evidence-based. But what do you believe are the major barriers that are still preventing it from becoming as evidence-based as you want it to be?
PRASAD: We should be honest about what medicine is. In the United States, medicine is something that now takes up nearly 20 percent of G.D.P. It’s a colossus in our economy. We spend more on medicine than any other Western nation. We probably don’t get as much from what we’re spending. Because it’s such a large sector of the economy, the entrenched interests — the companies and the people who really profit from the current system — are tremendously reluctant to change things.
We see that with, just for one instance, the pharmaceutical drug-pricing problem we’re having right now. No one will doubt that the pharmaceutical industry has made some great drugs. They’ve also made some less-than-great drugs. But does every drug, great or worthless, have to cost $100,000 per year? I [didn’t] invent that number. That’s actually the cost per annum of the average cancer drug being approved in the United States in the last year — well over $100,000 per year of treatment.
There’s got to be a breaking point and people are recognizing that.
Next week on Freakonomics Radio, Part 2 of “Bad Medicine,” how do those great drugs, and the less-than-great ones too, get made, and then how do they get to market? We’ll look into the economics of new-drug trials and how carefully the research subjects are chosen:
Ben GOLDACRE: Now that’s very useful for a company that are trying to make their treatment look like it’s effective, but does the population of people in this randomized trial really reflect the real-world people out there?
We look at who’s been left out of most clinical trials:
WOODRUFF: It suggested that women shouldn’t be included in clinical trials because of the potential adverse events to the fetus.
And how sometimes, the only thing worse than being excluded from a medical trial was being included:
HAMMONDS: The use of vulnerable populations of African-Americans, people in prison, children in orphanages — vulnerable populations like these had been used for medical experimentation for a fairly long time.
That’s next time, on Freakonomics Radio.
Freakonomics Radio is produced by WNYC Studios and Dubner Productions. This episode was produced by Stephanie Tam. Our staff also includes Alison Hockenberry, Merritt Jacob, Greg Rosalsky, Eliza Lambert, Emma Morgenstern, Harry Huggins and Brian Gutierrez. You can subscribe to Freakonomics Radio on Apple Podcasts, Stitcher, or wherever you get your podcasts. You can also find us on Twitter, Facebook, or via e-mail at firstname.lastname@example.org.
Here’s where you can learn more about the people and ideas in this episode:
- Anupam Jena, health care economist and physician at Harvard Medical School
- Philip Mackowiak, professor of medicine and medical historian at the University of Maryland
- Jeremy Greene, physician and historian of medicine at Johns Hopkins University
- Evelynn Hammonds, professor of the history of science and African-American studies at Harvard University
- Keith Wailoo, health policy historian at Princeton University
- Vinay Prasad, assistant professor of medicine at Oregon Health & Science University
- Lisa Bero, pharmacologist and co-chair of the Cochrane Collaboration
- Sir Iain Chalmers, co-founder of the Cochrane Collaboration
- Ending Medical Reversal, Vinay Prasad, 2015, Johns Hopkins University Press
- The Cochrane Collaboration
- “A Critical Appraisal of 98.6F, the Upper Limit of the Normal Body Temperature, and Other Legacies of Carl Reinhold August Wunderlich,” Philip Mackowiak, Steven Wasserman and Myron Levine, 1992, University of Maryland
- Effective Care in Pregnancy and Childbirth, Sir Iain Chalmers, Murray Enkin and Marc Keirse, 1989, Oxford University Press
- “A Decade of Reversal: An Analysis of 146 Contradicted Medical Practices,” Vinay Prasad, et al., 2013, Mayo Clinic
- “Mortality and Morbidity in Patients Receiving Encainide, Flecainide, or Placebo: The Cardiac Arrhythmia Suppression Trial,” Debra Echt, et al., 1991, New England Journal of Medicine
- “Optimal Medical Therapy with or without PCI for Stable Coronary Disease,” William Boden, et al., 2007, New England Journal of Medicine
- Paul Avgerinos, “Times a Tickin”
- Jack Miele, “Otis Theme” (from Jack Miele)
- Christopher Norman, “Emerald” (from Strange Games)
- Paul Avgerinos, “Ladies Day”
- Nicholas Pesci, “Feeling Quirky” (from All The Feelings)
- Baba Brinkman, “Seed Pod” (from The Rap Guide)
- Morella and the Wheels of It, “Vincent” (from Shipwrecked)
- Lerin Herzer and Andrew Joslyn, “Roots” (from The Girl and the Ghost)
- Judson Lee Music, “Snoopin’”
- Mike Barresi, “It’s All Good” (from Mike Barresi)
- Additional Scoring by Jay Cowit