Taking Control of Your Health: A Q&A With the Author of The Decision Tree
For much of human history, if you came down with a fever (or pretty much anything else) your doctor would “bleed” you – and most likely go on his way without washing his hands. While technology has drastically increased our understanding of human anatomy and disease since those days, many useful technologies are never applied to health care — or, worse, drive health care costs ever higher while offering dubious benefits. We have more data than ever about our own health and the causes of illness, yet assimilating those data to make intelligent, cost-effective healthcare decisions can seem impossible.
Thomas Goetz, the executive editor of Wired, has written a new book that he hopes will give people some tools in this effort. The Decision Tree: Taking Control of Your Health in the New Era of Personalized Medicine tackles everything from genetic testing and pharmacogenomics to the challenges of long-term behavioral change. Goetz has agreed to answer some of our questions about his book:
What exactly is a decision tree, and how can it change the way people make health care decisions?
One of the biggest challenges in healthcare is how to give individuals a way to make better decisions – a way for them to engage with the huge amount of information that our doctors or care providers or the Internet offers us.
Helping people make good decisions is essential, because research shows that when people engage in their health, they tend to have better outcomes. Engagement improves our odds. This is true when people are trying to engage in preventive medicine, when they’re facing a diagnostic test, or when they’re weighing various drugs or treatments. At all of these moments, we are trying to navigate our way to a good choice, a choice that corresponds with our values and gives us our best chance for a desired outcome. But this is an incredibly difficult process for nearly all of us, because the information comes in so many unfamiliar forms – genetic results, statistical probabilities, side effects – and there is so much uncertainty in the equation. So we often leave decision-making up to our doctors, or we don’t really engage with the information in a way that might otherwise improve our outcomes.
As I was writing the book, researching all these new tools for engaging with information in more meaningful ways, I realized that there was a way to visualize this whole approach of engaged, patient-centric healthcare. We can give people a structure for thinking about their options, for thinking about how one choice leads to or precludes others, and then perhaps people will have an easier time taking advantage of all the superb progress that modern medicine can provide. That’s what led me to the notion of a decision tree. A decision tree, essentially, is a flow chart – it’s a kind of algorithm, a way to factor in various strands of information and probabilities to move towards an optimal outcome. So it’s a way of thinking about our healthcare as a series of deliberate choices, where we are the primary decision-makers. When we take on that role, we tend to have better health.
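Goetz’s description of a decision tree as “a kind of algorithm” maps neatly onto code. Here is a minimal sketch in Python – with entirely invented probabilities and outcome scores – where chance nodes weight their branches by probability and choice nodes pick the branch with the best expected outcome:

```python
# A minimal decision-tree sketch. Example numbers are made up:
# outcome scores run from 0 (worst) to 1 (best health outcome).

def expected_value(node):
    """Recursively score a node in the tree.

    Leaf nodes carry a concrete 'value'; chance nodes weight their
    branches by probability; choice nodes pick the best branch.
    """
    if "value" in node:                       # leaf: a concrete outcome
        return node["value"]
    if node["type"] == "chance":              # weight branches by probability
        return sum(p * expected_value(child)
                   for p, child in node["branches"])
    # choice node: the decision-maker picks the best expected outcome
    return max(expected_value(child) for _, child in node["branches"])

# Hypothetical choice: take a preventive drug, or do nothing.
tree = {
    "type": "choice",
    "branches": [
        ("take drug", {
            "type": "chance",
            "branches": [
                (0.8, {"value": 0.9}),   # drug works, minor side effects
                (0.2, {"value": 0.4}),   # drug fails
            ],
        }),
        ("do nothing", {
            "type": "chance",
            "branches": [
                (0.5, {"value": 1.0}),   # stays healthy anyway
                (0.5, {"value": 0.2}),   # disease develops
            ],
        }),
    ],
}

print(round(expected_value(tree), 2))
```

In formal decision analysis the outcome scores would be utilities elicited from the patient, which is exactly where personal values enter the calculation.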
What role should genetic testing play in medicine?
The experts that I talk to in the book are incredibly hopeful about how our growing knowledge of genetics might improve medical diagnosis and treatment. But it’s important to realize that, when it comes to disease, our DNA is not entirely our fate. Yes, there are certain diseases that are entirely genetic, but for the vast majority of diseases, genetics is only one influential factor alongside our environmental exposures and behaviors (basically, all the stuff that we ourselves do). To me, this means two things: First, DNA isn’t something we should be fearful of; it’s not destiny, it’s simply an influence that we should consider and try to understand. And secondly, this means that DNA is a welcome “baseline” – a starting place for us to understand what we may want to be mindful of, down the line. Knowing our genomes can help us flag risks so that we can then deploy other behaviors – diet, drugs, what have you – to navigate away from those risks. As genetic sequencing gets cheaper, this is where I think DNA testing will end up: a test at birth that is used alongside our blood type as a dataset that may be revealing but not decisive. It’ll be one more arrow in the medical quiver that can help our doctors (and us, too) understand what might be going on in our bodies.
Behavior change is hard, yet it’s often a requirement for improved health. What are the secrets to successful behavior change?
I wish it were as easy as spilling the secrets. There’s been no shortage of seven or seventeen things that will change your life, and I won’t pretend I have my own set. But I do know that behavior change is probably the single greatest challenge in healthcare today – so many of the things that people suffer from and die of are rooted in our habits, from smoking to diet to sloth. But that also makes behavior change a great opportunity, because if we can get people to embrace change and succeed, then we’re really going to improve and save lives.
So here’s what I did find: there are two essential things that work to get people to change. First is feedback – giving people some sense of where they stand, and a personal understanding of where they have to go. Research shows that when people can see their personal progress, they tend to have higher rates of success at changing their behaviors. The second thing that works is communities or groups. When we are one among many, all working on the same goal, people tend to have much better success. In part this is because we can compare ourselves against our peers; but it’s also about communication and confidence.
These two principles have been well known in public health circles for decades; they’re the magic behind everything from Weight Watchers (which really does work) to smoking cessation. They work, but they take significant resources and discipline. But here’s the thing: the Internet and information technologies are now making these strategies less resource intensive. Feedback can be as easy as clicking a few buttons on your iPhone, and “communities” is just another word for “social networks.” This is the moment we’re at today, and I think we’ll start to see some remarkable innovation in making these strategies more available to more people.
Now that we have screening tests for so many diseases, why don’t we just screen everybody for every disease?
Short answer: False positives would overwhelm true positives. Long answer: Screening tests are a terrific strategy for spotting disease early and saving lives. They’re the reason cervical cancer is no longer a leading cause of death among women – thank you, Dr. Papanicolaou, for the Pap smear – and they’re the reason the death rate from heart disease is only a third of what it was in 1950 (taking somebody’s blood pressure is a very simple sort of screening test).
But a screening test requires a calibration between the cost of the test, the accuracy of the diagnosis, and the population of people at risk. The calibration results in a kind of arithmetic: If a test is very cheap and very accurate – like a blood pressure test – then by all means it should be widely deployed, pretty much universally. So it’s suggested that everyone over 18 get a cholesterol test. But as a test gets more expensive and more intensive, we need to limit the population. So a colonoscopy is a superb way to detect colon cancer early, but experts generally recommend it only for adults over 50. Even if you put cost aside, it’s unrealistic to give every test to every person because we’d lose the positive signals in the noise of false positives – those people who the test says have a disease, but actually don’t. False positives, of course, are a normal result of screening, and ideally any positive test will be followed up with a second, more precise (and probably more expensive) test. But unless we choose the right pool of people to test in the first place, even the best test will generate more false positives than the system can handle.
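That “kind of arithmetic” can be made concrete with a back-of-the-envelope calculation. The numbers below are hypothetical – a disease affecting 1 in 1,000 people, and a test with 99% sensitivity and 95% specificity – but the pattern is general: when a disease is rare in the tested pool, false positives swamp true positives.

```python
# Hypothetical screening numbers: 1-in-1,000 prevalence,
# 99% sensitivity (true positive rate), 95% specificity.

def screening_outcomes(population, prevalence, sensitivity, specificity):
    sick = population * prevalence
    healthy = population - sick
    true_pos = sick * sensitivity            # sick people correctly flagged
    false_pos = healthy * (1 - specificity)  # healthy people wrongly flagged
    return true_pos, false_pos

tp, fp = screening_outcomes(1_000_000, 0.001, 0.99, 0.95)
print(f"true positives:  {tp:,.0f}")
print(f"false positives: {fp:,.0f}")
print(f"chance a positive is real: {tp / (tp + fp):.1%}")
```

Even with this quite accurate test, fewer than 2 in 100 positive results are real – which is why screening gets restricted to populations where the disease is more prevalent.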
How effective are drugs? Is there any way we can make drugs more effective?
One of the amazing things I learned in writing the book is that many popular drugs don’t really work all that well. Anti-depressants work at best around half the time; chemotherapies for cancer work at best 20% of the time. This is the hit and miss of modern pharmacology – there are no sure things. We can improve the efficacy of drugs in two ways. From one direction, we can hunt for more effective drugs. That’s been the path the pharmaceutical industry has pursued for the past 50 years, but it’s pretty much run out of steam. The other way to improve efficacy is to only give a drug to the people you know it’ll work for. So if Gleevec works for less than 10% of leukemia cases, identify what those 10% have in common – turns out it works almost perfectly for Chronic Myeloid Leukemia — and only give the drug to those people.
This idea of drug targeting (or pharmacogenomics, as it’s called) has been around for decades, but the industry has been slow to move, loath as it’s been to leave the blockbuster model behind. But that’s starting to change, and the hope is that in the next decade there’ll be more development of drugs that work for smaller populations.
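The Gleevec logic – a low overall response rate hiding a subgroup where the drug works very well – is simple arithmetic. The numbers below are invented for illustration (Gleevec’s actual biomarker is the BCR-ABL fusion gene characteristic of chronic myeloid leukemia):

```python
# Invented illustration of drug targeting: the same drug looks weak
# across all patients but strong in a biomarker-defined subgroup.

patients = 1000
responders = 100             # say the drug helps 10% of all cases
overall_rate = responders / patients

# Suppose a genetic marker flags 110 patients, 95 of whom respond.
marker_pos = 110
marker_pos_responders = 95
targeted_rate = marker_pos_responders / marker_pos

print(f"untargeted response rate: {overall_rate:.0%}")
print(f"targeted response rate:   {targeted_rate:.0%}")
```

Same drug, same patients; only the prescribing rule changes, and the measured efficacy jumps from 10% to roughly 86%.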
In the (increasingly unlikely) event that healthcare reform makes a comeback, what are the three reforms you’d most like to see?
I still hold out hope for healthcare reform, but three things that we should insist upon whether in a big bill or smaller legislation are:
- more evidence-based medicine – which involves things like comparative effectiveness research, so newer (read: expensive) drugs are compared to older (read: cheaper) drugs
- more pay-for-performance targets for physicians, so our doctors have greater incentive to pursue prevention, and have ways to get compensated for keeping us healthy (rather than waiting for treatments)
- a revised HIPAA privacy rule, so that it’s easier for doctors to share patient data with the patient, and it’s easier for patients to share their data with each other.

These three things would go a long way towards creating a smarter, more participatory kind of healthcare.