Episode Transcript
Are you an idea junkie? I am. And since you listen to a show like this, you probably are too. It’s exciting to hear about ideas, especially new ones. There’s a progression that happens when you hear a new idea. You run it through your brain, try to envision where an idea might lead. Who’ll benefit from it? Who will it hurt? Will it be worth the cost? Is it legal? Is it morally defensible? Is it, in fact, a good idea? Today’s episode is about ideas, but we run that progression in reverse. Rather than asking if a new idea is a good one, we ask — well, here, you can tell from the answers what we ask:
Seth LLOYD: The idea that I believe is ready to retire —.
Steven LEVITT: An idea that is really bad, that’s detrimental to society, is the idea that —.
Douglas RUSHKOFF: The scientific idea I believe is ready for retirement is the atheism —.
That’s right, we are asking a bunch of people to name an idea that should be killed off. An idea that is commonly accepted but which in fact is impeding progress. Would you like a for instance?
Sarah-Jayne BLAKEMORE: My lab’s research focuses on the development of the adolescent brain.
That’s Sarah-Jayne Blakemore. She’s a professor of cognitive neuroscience at University College London. The idea she’d like to kill off is the notion that people are either right-brained or left-brained.
BLAKEMORE: When people say “left-brained,” apparently what they tend to mean is a mode of thinking which is more logical, analytical and accurate. Whereas “right-brained” people tend to be more creative, intuitive, emotional and subjective.
Like most of the ideas we’ll be discussing today, this one is exceedingly popular.
BLAKEMORE: It sells a lot of self-help books. Businesses use it. Even scientific studies sometimes employ this idea of left brain/right brain, for example, with regards to gender differences or creativity in the brain.
So this idea must make some sense, right?
BLAKEMORE: This is an idea that makes no physiological sense.
The brain, Blakemore tells us, is indeed divided into two hemispheres, with each one doing more of the heavy lifting for certain functions. But those hemispheres do not operate in isolation.
BLAKEMORE: There’s a fibrous tract in the middle of your brain that connects up the two hemispheres, and that tract, called the corpus callosum, enables the two hemispheres to talk to each other within a few milliseconds. It’s simply not possible for one hemisphere to function without the other hemisphere joining in.
So where did the left brain/right brain idea come from? Blakemore says it most likely began as a misreading of earlier research on a small number of patients whose two brain hemispheres couldn’t communicate.
BLAKEMORE: Back in the 60s, 70s and 80s, there was quite a lot of very high-impact, extremely interesting research on split-brain patients who had their corpus callosum surgically severed, mostly for intractable epilepsy. It’s not done anymore, but back then it was done a few times.
These rare patients were studied by a professor of psychology, now at the University of California, Santa Barbara, named Mike Gazzaniga.
BLAKEMORE: And what he found was that each hemisphere played a role in different tasks and different cognitive functions and that, normally, one hemisphere dominated over the other. What the patients were aware of was what was going on in their left hemisphere. They didn’t have much conscious access to what was going on in their right hemisphere. This is really interesting and important scientific work. But what I think happened was that it was slightly misinterpreted, in the general public, to suggest that all of us are either left-brained or right-brained. But actually most of us have a functioning corpus callosum, and so we use both our hemispheres all the time.
And yet, Blakemore says, the common perception today is still that most of us are either left-brained or right-brained. And that, she says, is getting seriously in the way of progress.
BLAKEMORE: What really worries me is that it is having a large impact in education. My research involves teenagers. We go into schools a lot, and what we see is often children being classified as being either left-brained or right-brained. Actually, it could be a real impediment to learning, mostly because that implies that it’s fixed, innate and unchangeable to a large degree. There are huge individual differences in cognitive strengths. Some people are more creative than others. Other people are more analytical than others. But the idea that this is something to do with being left-brained or right-brained is completely untrue and needs to be retired.
* * *
I’d like to say that today’s episode was our idea, that we thought up this notion of drawing up a hit-list for outdated ideas. But we are not that clever. Here’s one of the clever people.
John BROCKMAN: For want of a better description, I call myself a cultural impresario.
That’s John Brockman. He makes his living as a literary agent, but for decades he’s also been a curator of great minds and big ideas. Years ago, he organized something called The Reality Club.
BROCKMAN: The idea was that we would seek out the most interesting, brilliant minds, have them get up in front of the group — which was the way they could get in the group — and ask aloud the questions they were asking themselves.
The group changed over time and, in the 1990s, it migrated online. Now it’s known as Edge.org. It’s sort of a salon, populated mostly by scientists — from the hard sciences and social sciences — but there are writers and others as well. A tradition arose within the salon: every year, one question would be put to the entire community, and everyone would write an essay in response. Something like “What should we be worried about?” or “What do you believe is true, even though you cannot prove it?”
BROCKMAN: That is the best question ever. It drove people mad.
Every year, the essays are collected in a book. The latest book is called This Idea Must Die: Scientific Theories That Are Blocking Progress. Here’s the question everyone was asked to answer: what scientific idea is ready for retirement? The question came from an Edge.org contributor named Laurie Santos.
Laurie SANTOS: I’m a professor of psychology at Yale University, and I’m also the director of the Comparative Cognition Laboratory.
The question arose from Santos’s own academic work.
SANTOS: Sometimes, once something gets in print, or gets in a textbook, or gets on people’s public radar, it just sticks around even if there’s reason to suspect that the idea’s just wrong. It seems like there’s no good procedure to retire bad ideas in science. I’m a psychologist and I sometimes dabble in the work of economics. If I’m not really in the trenches, I might not know the ideas about which economists are like, “Guys, we stopped paying attention to that ten years ago.” It’d just be nice to get all our ideas crisp, and get the ones that aren’t doing us any service out of there, so we can focus on the stuff that we do think is true.
Edge.org received 175 contributions naming ideas that must die, whether because they’re simply outdated, or have been superseded, or have no basis in fact, or just don’t sit right with the world anymore. This episode presents a handful of these ideas, and we — in the spirit of overturning our habits — will also try something new. Rather than hearing me interview our guests, badgering them with questions, you’ll hear what is essentially a series of soliloquies — from scientists…
Sam ARBESMAN: I’m Sam Arbesman. I am a complexity scientist and writer.
Paul BLOOM: My name is Paul Bloom. I’m a psychology professor at Yale University.
… to doctors …
Azra RAZA: My name is Azra Raza. I am an oncologist, professor of medicine and director of the MDS Center at Columbia University in New York.
… to an actor and writer who used to play a doctor on TV …
Alan ALDA: My name is Alan Alda. I love science, and I love to read about science.
… and we even hear from an economist friend of ours …
LEVITT: When I think about ideas that are getting in the way of progress, I have a strange one. It’s probably one of the most unpopular ideas that you and I have ever talked about.
Let’s begin here:
LLOYD: My name is Seth Lloyd. I’m professor of quantum mechanical engineering at M.I.T.
One quick warning: we are not going for trivial ideas here. We’re going big. Very big.
LLOYD: The idea that I believe is ready to retire is “the universe.” Over the last 20 years or so, it’s become increasingly clear that the idea of the universe as just the things that we can see through our telescopes — even though we can see ten billion light years away — is an outmoded idea. Now, the conventional picture of how the universe came about is that it started 13.8 billion years ago in a gigantic explosion, which is called the Big Bang. It was tremendously hot. It was full of all kinds of particles zipping around here and there. Then, gradually, as the universe expanded, it cooled down. Galaxies started to form. Stars started to shine. Then we’re left with the universe that we see around us. That’s true, so far as it goes. If we look around us and see these galaxies flying through the cosmos, their existence, their composition, their form can be very well explained by this theory of the Big Bang.
But the universe that we see around us is just one part of a much larger, multi-faceted “multiverse” that contains many possible universes. The current theories suggest that this universe we see, with electrons and photons, and galaxies, and stars, and planets and human beings, is just one possible way for things to be. If you were to go far enough out there, you’d find pieces of the universe where things are entirely different, where there are no electrons, no stars, no planets. If you go far enough out there, you’ll basically find all possible combinations of what’s allowed by the laws of physics playing themselves out, because our universe is effectively a giant computer. Everything that can possibly be computed is being computed. This is a rather new notion. It hasn’t really percolated into human consciousness.
But once one’s given up this piece of useless baggage — that there is only one universe — we really are forced to contemplate the actual physical existence of things beyond what we have experimental and observational access to. It gives us a nice explanation for why the universe is so darn intricate and complicated.
Emanuel DERMAN: My name’s Emanuel Derman. I’m a professor at Columbia University, and I worked on Wall Street for about 20 years as a quantitative analyst. The scientific idea that I believe is ready for retirement is one that’s very fashionable now, and that’s the use and the power of statistics. It’s a subject that’s become increasingly popular with the increasing power of computers, computer science and information technology, and with everybody’s interest in economics and big data, which have all come together in some nexus to make people think that just looking at data is going to be enough to tell you truths about the world. I don’t really believe that. There are ways to understand the world, and those ways involve understanding the deep structure of the world and the way the world behaves.
I can give examples from physics. I worked as a theoretical particle physicist for a long time, and all the really great discoveries in physics have come from a burst of intuition, which people tend to look down on these days. Johannes Kepler was an astronomer about 50 years or so before Newton, and he actually spent a lot of time studying the data of Tycho Brahe, a Danish observational astronomer who collected tons of very detailed data on the positions of the planets. Kepler got access to them and, over 30 or 40 years, analyzed them. Actually, it was an astonishing feat. If you think about what you see when you see the trajectories of lights in the sky, which are planets, you see their motion relative to the Earth. But what Kepler was interested in [was] their motion relative to the sun. The Earth is moving around the sun, so God knows how he did it.
But he had to extract out the motion of the Earth from the whole picture and describe how the planet moves relative to the sun. How he did this without computers is quite beyond me. But in the end, his second law says that the line between the planet and the sun sweeps out equal areas in equal times. It’s an astonishing thing to say, because he’s describing an invisible line between the sun and the planet. There is no line between the sun and the planet, and yet he’s come up with this burst of intuition, which lets him talk about something you can’t see and that isn’t in the data. That’s a good instance of the bursts of insight that people have when they use their intuition to make great discoveries. There’s no understanding how he came to look at things in that way.
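[In modern notation, the equal-areas law Derman describes says that the areal velocity of the orbit is constant:

$$\frac{dA}{dt} = \frac{1}{2}\, r^2 \frac{d\theta}{dt} = \frac{L}{2m} = \text{constant},$$

where $r$ is the sun-planet distance, $\theta$ the angle swept out, $m$ the planet’s mass, and $L$ its orbital angular momentum. Kepler inferred the constancy of $dA/dt$ purely from Brahe’s data; the identification with conserved angular momentum came only later, with Newton.]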
But what’s fashionable these days is simply doing statistics and correlations. I don’t believe you can really find deep truths like Kepler’s laws, which try to describe something below the surface, simply by looking at data. It’s what’s wrong with a lot of financial modeling too — the idea that somewhere there’s a formula that will tell you how to manage risks, tell you how to price things, and absolve you of the responsibility of the struggle to actually understand the world in a deeper way. In the financial crisis, too much reliance on statistics is what got people into trouble, thinking that bad things could never happen because they hadn’t happened before.
RAZA: My name is Azra Raza. I am an oncologist, professor of medicine and director of the MDS Center at Columbia University in New York. The scientific idea that I believe is ready for retirement is “mouse models.” [They] must be retired from use in drug development for cancer therapy, because what you see in a mouse is not necessarily what you are going to see in humans. For example, one very simple mouse model would be: we take a mouse, give it a drug and see what happens to it. Another, which is much more commonly used, is called the xenograft mouse model, in which we take a mouse and use radiation therapy, et cetera, to destroy its immune system completely. Then we transplant a tumor taken from a human into this mouse.
Its own immune system is gone, so it won’t reject the tumor, and we can then test the efficacy of a drug at killing these human cells in the xenografted mouse. Currently, cancer affects one in two men and one in three women. It’s obvious that, despite the concerted efforts of thousands of investigators, cancer therapy today is like beating a dog with a stick to get rid of its fleas. It’s really, in general, quite primitive. In fact, in acute myeloid leukemia — the disease I’ve been studying — we are giving the majority of patients the same drugs today that we were giving in 1977, when I started my research in this area. Compared to things like infectious diseases or cardiac drugs, cancer drugs fail more often.
Recently, things have improved. From the mid-90s to now, about 20% of drugs entering clinical trials are eventually F.D.A.-approved. But 90% of drugs in development still fail, either because of unacceptable toxicity or because, once we give them to humans, we find that they’re not working the way they were supposed to. Why are these facts so grim? Because we have used mouse models that are misleading. They do not mimic human disease well, and they’re essentially worthless for drug development. It’s very clear that if we are to improve cancer therapy, we have to study human cancer cells. But, in my opinion, too many eminent laboratories and illustrious researchers have devoted entire lives to studying malignant diseases in mouse models. They are the ones reviewing each other’s grants and deciding where money gets spent.
They’re not prepared to accept that mouse models are basically valueless for most of cancer therapeutics. But persisting with mouse models and trying to treat all cancers in this exceedingly artificial system will be a real drawback to proceeding with personalized care based on a patient’s own specific tumor, its genetic characteristics, its expression profile, its metabolomics — all those things are so individually determined in cancer. For a lot of patients, the drugs are already there. We just have to know how to match the right drug to the right patient at the right time. In order to do that, the answer is not going to come from mouse models, but it’s going to come from studying human cancers directly. Mice just are not men.
RUSHKOFF: The scientific idea I believe is ready for retirement is the atheism prerequisite, the idea that the only way science can work is if we assume we live in a godless, meaningless universe. My name’s Douglas Rushkoff. I’m a professor of media studies at Queens College CUNY. The assumption that we live in a godless, meaningless universe makes people assume that reality emerged from this Big Bang, and then time begins, as if everything that we know, everything that we think — everything from civilization, to consciousness, to meaning — is an emergent phenomenon, a result of matter doing various materialist things. I started to realize that much of science’s insistence on atheism was suspect when I started hearing these folks talk about the “Singularity.”
They have a narrative for how consciousness develops: that information itself was striving for higher states of complexity. Information made little atoms, then molecules — because molecules are more complex — and then little cells, and little organisms, and finally human beings and civilization. All more and more complex homes for information. Now, computers are coming, which will be even more complex than people, so information can just migrate from human consciousness into artificial intelligence, at which point the human species can just fade away. That’s when I realized, “They’ve created their equally mythological story for what’s happening with a beginning, a middle and an end, which is just as archaic, just as arbitrary as any of the religious narratives out there.”
The irony, for me, is that it’s the most outspokenly godless of the scientists who fall most tragically under the spell of this story structure.
The people I’m asking to retire this idea are scientists — evolutionary biologists who seem to need to start the universe from zero in order for their models to make sense. What if we don’t have to make science and our view of reality conform to the basic story structure of beginning, middle and end? If there was something here before the Big Bang, then the story that science is trying to tell doesn’t really work. I’m not saying that people can’t be atheists. Honestly, I have no idea what’s going on here. I don’t know if there’s a God or not. I don’t know if there’s meaning or not. But what I’m saying is that atheism can’t be a prerequisite for the scientific model, because if you are forcing yourself to strip meaning from reality in order to cope with it — in order to explore it and observe it — then you’re tying your hands behind your back, and you’re missing a huge potential portion of the picture.
All right, we’ve already heard five ideas that should maybe be sent to the trash bin: the “atheism prerequisite” for scientists; the value of mouse models for human medicine (which, I admit, stunned me); the idea that statistics are as powerful and useful as we think; the idea of “the universe”; and the left brain/right brain construct. Coming up on Freakonomics Radio, some other ideas we might want to get rid of, including:
ALDA: The idea that things are either true or false.
And:
BLOOM: The idea that science can tell us everything we need to know about how to be happy.
And:
Michael NORTON: The idea that markets are good.
Really? You sure about that?
NORTON: And the second idea that I think is ready for retirement is the idea that markets are bad.
* * *
They are out there. Bad ideas — or, if not bad ideas, ideas that have at least outlived their usefulness, and are now standing in the way. They’re clogging up our brains, our academic departments, our research labs, our popular culture. Which is why Edge.org has published a book called This Idea Must Die.
ALDA: I’m not sure any ideas have to die.
That’s Alan Alda.
ALDA: I’m an actor and a writer.
You probably know Alda from the epic TV series M*A*S*H, or more recently from The West Wing or 30 Rock. What you may not know is that Alda also has a long-held passion for science.
ALDA: Like most kids, I was very interested in science. When I was a six-year-old boy, I used to do what I thought were experiments — trying to mix toothpaste and my mother’s face powder to see if it would blow up. That seems to be the basis of a lot of science, actually, starting with Alfred Nobel. But I never lost that curiosity. And when I wrote for M*A*S*H — I wrote about 20 or 25 episodes — whenever there was a medical procedure, I would research it as carefully as I could. I’d go to a medical library, get out the books and find out exactly how the operation was done.
Capt. Benjamin Franklin “Hawkeye” PIERCE in a clip from M*A*S*H: In this particular mobile army hospital, we’re not concerned with the ultimate reconstruction of the patient. We only care about getting the kid out of here alive enough for someone else to put on the fine touches. We work fast and we’re not dainty, because a lot of these kids who can stand two hours on the table just can’t stand one second more.
ALDA: Walter Dishell was our medical advisor. He had the wonderful ability to not only tell you what disease or operation might apply to the story, but he could help you figure out how the story would benefit by the various stages of that disease, or the techniques in the operation.
These days, Alan Alda is a visiting professor at the Alan Alda Center for Communicating Science at Stony Brook University on Long Island.
ALDA: I love science, and I love to read about science. I’m very concerned about how science is communicated. For the last 25 years, I’ve spent a lot of my time trying to help scientists communicate about their work so that ordinary people like me can understand it. Now at the Center for Communicating Science at Stony Brook University, we train scientists in unusual ways. We train them to relate to their audience, first of all, by introducing them to improvisation exercises. That is not to make them performers, or make them comics, or get them to invent things on their feet, which is what we usually think of in terms of improvising. It’s to get them to relate, which the improvising exercises all do. They put you in a position where you have to observe the other player, and you have to read the other player’s face and tone of voice. In a way, you have to read the other person’s mind. That’s the basis of good communication. You’ve got to know what’s going on in the mind of the person listening to you to know if you’re getting through to them or not.
Alda wrote an essay for This Idea Must Die, but he’s a little bit squeamish about the premise.
ALDA: It’s eye-catching to say this idea must die, and I’m not sure that most of the articles in the Edge.org catalog of things that need to be retired actually need to be retired or just rethought. Therefore, I would say that asking for these ideas to be retired is really a way of saying, “This is the received wisdom. Do we need to reexamine it?” That’s a good approach to take.
The idea that maybe is due for a rest — you notice I said “due for a rest,” I didn’t say it needed to “die” — is the idea that things are either true or false. I know that’s an impertinent thing to say and it sounds stupid. But what I mean by it is: the idea that something is either true or false for all time and in all respects. I think about this because when I was being taught to think in school, I was taught that the first rule of logic is that a thing cannot both be and not be at the same time and in the same respect. And that last part, “in the same respect,” really has a lot to do with it, because something is determined to be true through research. Then further research finds out that it’s only true under certain conditions, or that there are other factors involved.
Here’s a very interesting example: a lot of people were interested — I know I was interested — when I read that red wine was good for you. At first, we might have even thought, “The more red wine the better. Look at all that antioxidant stuff going into it.” But it was a terrible disappointment sometime later, when some other scientists said, “Under certain conditions, red wine could be not so good for you.” And again, there’s this other wrinkle: it might be really great for mice and less good for us. But what really disturbs me is when the public decides that that means that science can’t make up its mind, or that scientists are just making things up. Some people actually do think that. Some people think the findings in science are hogwash, because if one day scientists say one thing and the next day they say another, it sounds like they’re just taking wild guesses, when in fact that is exactly how science progresses.
You go deeper and deeper. You open up one door and you find another hundred doors that have locks on them that you have to figure out the combination for. I personally find it exciting to see what we thought we understood to be contradicted, but I don’t think the public has enough of a grasp of how science is done, how it’s based on evidence. When you say this is true, in the mind of the person receiving that information, they’re going to accept it as true for all time, under all circumstances, unless you warn them that things might change in the future. We might learn more about this. That shift in the frame of reference is something that ought to be allowed for. I want to see science prosper. I want to see evidential thinking be the norm for the public, as it is for scientists. So my suggestion, that we alter the way we talk about things being true or false, is really to help in the communication of science so that people don’t get confused.
Alun ANDERSON: My name is Alun Anderson. I’m a science journalist and writer. The idea that I think should be retired is the notion that we are all still Stone Age thinkers; that because of the long period we spent as hunter-gatherers, beginning perhaps 200,000 years before the appearance of agriculture, we’re still stuck with all those reflexes, all those motivations that worked so well a long time ago. Living in this modern world, you still want to bash people over the head with a rock, or you’re told by an expert that the best way to look after a baby is the same way that would have appealed to someone living in a cave. It’s the notion that a lot of the stresses and strains of modern life have been caused by a disconnect between what we are biologically and what culture has created for us.
This doesn’t really gel with what we know about just how flexible and adaptable the human brain is, and how it can be rewired to do quite wonderful things that didn’t happen during the Stone Age. We have lots of really good scientific evidence of how the brain can be changed by culture, and how that change in the brain can then be passed on to later generations. Take reading as an example: reading and writing emerged only around 5,000 years ago. But there’s no doubt that reading — with its access to lots more information, and the ability to share your thoughts with others — is a massive change in how the world works. If you look inside the brain of a person who can read, and scan it so you can see which bits light up when they’re reading and when they’re talking, you’ll see that their brain has been massively remodeled.
All kinds of new pathways have formed, which link areas to do with visual perception and to do with hearing. You’ll see it’s profoundly different from a person who can’t read. A person who can read has got a new brain. But it’s not in any way inherited. In each generation, we’ve got really good at teaching children how to make this change for themselves so they become a different person. A change in the brain changes the person, changes the cultural process, changes more people, and that’s how culture shapes brains. Cultural evolution, and the force of cultural change, has been greatly underestimated when people talk about the Stone Age mentality. To go through life thinking that we are trapped by what we are already holds us back from embracing what we might become in the future.
BLOOM: The idea I believe is ready for retirement is that science can tell us everything we need to know about how to be happy. My name is Paul Bloom. I’m a psychology professor at Yale University. I wouldn’t deny for a second that science can tell us a lot about happiness. It can tell us how to cure depression. It can tell us some surprising things about what aspects of our everyday life make us happy, and which don’t. But the idea that science can give us a complete theory of how to live a happy life is mistaken, and mistaken in important ways. There are two main limitations of science in the domain of happiness. One is the notion of what it is to be happy. Which, of all the things that go on in the brain, should count as happiness? Nobody knows the answer, and it’s not an answer you’re going to find by studying in a lab.
If you tell me happiness is a lot of pleasure — suppose you have a terrific meal, then some wonderful sex, and then you read this great book. Just a terrific time. Compare that to a really difficult time where you help a lot of people and feel satisfaction. Both of these events correspond to activity in the brain. Which one is real happiness? Which one should we be trying to maximize? But there’s a second, independent problem: suppose we decide what it is that makes a happy day, and we agree on it; there’s no argument, we’ve settled that. Still, how do you decide how to sum up days to make a happy life? Is it better to live 90 so-so years or 30 really happy years, even though some of the other days may be miserable? You can know everything in the world about the brain and that won’t tell you the answer.
In fact, what’s interesting is these problems are very similar to problems like: how do you maximize happiness in a society? Is the best society one that has a lot of happy people and the total sum of happiness is very high, even though some people might be miserable, living horrible lives? Or is a better society one where the average happiness is very high, even though the total happiness may be lower than in the first society? Those are hard questions, and they aren’t scientific ones. But some scholars tend to be over-enthusiastic about what science can tell you. There’s a huge literature of people who will directly argue that the key to figuring out how to live a good life and how to live a happy life is revealed by laboratory studies and science.
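Bloom’s aggregation problem is easy to make concrete. Here is a minimal sketch in Python (the societies and their happiness scores are entirely invented for illustration) showing that maximizing total happiness and maximizing average happiness can favor different societies:

```python
# Toy illustration of Bloom's aggregation problem: two hypothetical
# societies scored on an invented 0-10 happiness scale. Summing and
# averaging the very same scores rank the societies differently.

def total_happiness(scores):
    return sum(scores)

def average_happiness(scores):
    return sum(scores) / len(scores)

# Society A: more people; a few quite happy, most fairly miserable.
society_a = [8, 8, 8, 8, 2, 2, 2, 2, 2, 2]
# Society B: fewer people, all reasonably happy.
society_b = [7, 7, 7, 7, 7]

print(total_happiness(society_a), total_happiness(society_b))      # 44 vs. 35: A wins on total
print(average_happiness(society_a), average_happiness(society_b))  # 4.4 vs. 7.0: B wins on average
```

No measurement of brain activity settles which of the two criteria is the right one to maximize; that is exactly the non-scientific question Bloom has in mind.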
You can say, “Who cares? Who cares if many scientists and many psychologists believe that their research will tell us everything we need to know about happiness? Why does it matter? Why does it cause any problems?” One answer is that when scientists overreach, and people see them overreaching, it causes a lack of trust in science. The second problem is that it’s a missed opportunity. The study of how to live a good life is one of the great questions. It would benefit from cross-disciplinary work, including philosophers, theologians, artists and a range of scholars, not just scientists.
ARBESMAN: I’m Sam Arbesman. I am a complexity scientist and writer. The scientific idea that I think is ready for retirement is the idea that all of science needs to be big science. By big science, I mean the money and the effort that we pour into it, as well as the scale of the technology we use, as well as the scale of the organizations and the teams. We’ve gone from this age when you could be a hobbyist; there used to be this figure of the gentleman scientist. It was very common several hundred years ago [for] individuals who were independently wealthy [to be] tinkering in their country estate or wherever. They were able to make a lot of discoveries. As science has changed over time, we’ve gone from this age when you could be just an individual making discoveries to this idea that you now need lots of money, lots of effort, lots of people in order to make discoveries.
And a lot of people now feel that that’s all there is; that science has gotten bigger, and we have to constantly move this way towards big science. Even though there are many big and major discoveries that are done through big science, there still is a place for little science. Because a lot of scientists now choose to publish their research and the raw data for their research online, we now have huge amounts of data available in a way that was not available before. They’re now available to everyone. That, coupled with this massive availability of tools to help analyze these things, makes it no longer the province of the specialist with a vast amount of money. You can now go on eBay, buy lab apparatus and set up your own biotech lab in your basement. You can buy these things on the cheap and do research yourself.
You won’t necessarily produce cutting-edge results all the time, but you can still do things and see how they work. There is this democratization of the means of actually making new discoveries, and one of the great things about that is it no longer makes science seem so abstract or different from what everyone else is doing. It’s simply a rigorous way of asking questions and querying the world. Even though we think that the things that came before us, that might be hundreds of years old or even older, have been completely picked over and there are no new areas to work on, no new potential for discovery — there’s still a lot available. If we recognize that anyone can play with these things — and they still might fail — then this little science can help fill in a lot of the holes that the frontier has passed by, which is really exciting.
NORTON: The idea that I think is ready for retirement is the idea that markets are good and the idea that markets are bad. I’m Michael Norton. I’m a professor at Harvard Business School. Different people have different views of how markets work. Some people think markets are amazing and they solve all of our problems. Other people think that markets are terrible and they’re a source of misery for humans. The idea that markets are good is the sense that, in the aggregate, across all individuals and across all decisions, things are optimal; that when everyone is trying to buy the things they want, and everyone’s making things for people to buy, those markets become efficient. For example, the stock market can become efficient because people eventually evaluate things correctly and everything works really smoothly.
The other view of markets — that they’re terrible — is that it doesn’t make any sense that markets would be good, because markets are made up of individuals. We know that individuals, including me, are extremely biased and make all sorts of mistakes. The idea that aggregating up mistakes would somehow fix the mistakes, to many people, rings completely wrong. People looking at markets in this black-and-white way means that there’s very little dialogue between people who hold these different views, because it’s almost as though the other side is so blind to understanding how markets really work. If you believe in market efficiency, when anyone on the other side says, “I don’t believe markets are efficient, maybe we need to do some tweaks to make the market efficient,” you think they literally don’t understand how markets work.
In a sense, this underlies nearly every political and public-policy decision we’re arguing about and making today. If you think about health care, and the housing market, and income inequality — all of these current debates basically have, at least as part of their core, the idea that markets work just fine and don’t need government, versus markets don’t work just fine and really need government. This lack of conversation across these two diametrically opposed views partly drives the misunderstanding around these public-policy debates. If you think about how markets work, what they are is an aggregation of individuals, or, as we sometimes call them, groups. Groups of people have come together to cure diseases, to save the world, to do amazing things. Groups of people have come together and caused religious conflict and caused horrible things to happen.
We don’t necessarily think that groups are either good or bad. We think, in fact, that they can be good and they can be bad. The market, in a sense, is just the biggest group, and it seems likely that if the groups we know, which are a little bit smaller than the market, can be good or bad, then the market itself can probably be good or bad as well. That view of markets might help us understand, again, not that they’re good or bad, but really deeply understand when they do well and when they do poorly.
DUBNER: Levitt, do you feel, generally, that people — especially kind of academic, elite people — put too much emphasis on looking for new ideas rather than, perhaps, you know, killing off old ones?
LEVITT: That’s a…I never thought of that in my entire life, whether people do too much of that.
That’s Steve Levitt, my Freakonomics friend and co-author. He is an economist at the University of Chicago.
LEVITT: I love the idea of killing off bad ideas, because if there’s one thing that I know in my own life, it’s that ideas that I’ve been told a long time ago stick with me. You often forget whether they have good sources or whether they’re real. You just live by them. They make sense. Especially, the worst old ideas are the ones that are intuitive, the ones that fit with your worldview. Unless you have something really strong to challenge them, you hang on to them forever.
DUBNER: Give me a for-instance.
LEVITT: When I think about ideas that are getting in the way of progress, I have a strange one. It’s probably one of the most unpopular ideas that you and I have ever talked about. I think an idea that is really bad, that’s detrimental to society, is the idea that life is sacred. I know you and everyone else want to go, “What’s wrong with this guy?” But you’ve got to hear me out for one second, okay?
DUBNER: Yep.
LEVITT: Clearly my own life, to me, has almost infinite value. We know people will fight like crazy and do anything to stay alive. But the problem is that as a society, we really have taken that to heart. Anything we do, like trying to limit health care or access at the end of life to various kinds of medical stuff, feels awful to us, okay? Even other things that people would do voluntarily, like selling their organs — which might induce some greater likelihood of death at some point, but in return for financial gain along the way — people hate ideas like that. It’s true in the U.S. and Europe, without a doubt, that there’s a view that life is an entitlement, and the protection of life is an entitlement.
Here’s why that’s such a bad idea: when you look at the progress that we’ve made in society, so much of the progress over the last 100 years has been in keeping people alive. It’s incredible — through medicine, and antibiotics, and other things, we’ve managed to increase life expectancy. It’s a dimension in which we have a lot of power. We’re good at it, but the problem with this idea that every last life is valuable and every life should be saved, essentially at any cost, is that the innovations that we end up making — and the expense exacted in terms of G.D.P. — end up being huge. The problem is that right now, health care costs are spiraling out of control. Almost 20% of our G.D.P. is spent on health care. But much of it is not effective, and it’s not effective because we hold the idea that everyone needs to be kept alive no matter what.
We do incredibly expensive things. We encourage innovation by pharmaceutical companies and by medical device makers, which find solutions at any cost, even though, in the end — if you think about health as being like any other good, or living as being like any other good; you buy it, sell it, and it has a price, and if you don’t have enough money, you just can’t stay alive forever — you would organize the market in a very different way. People would make different choices. The innovation that we do would presumably be much more effective and efficient innovation, because people would have to develop the solutions that you and I would pay for out of our own pocket, as opposed to solutions where we just say, “Well, the government is going to pay for it anyway, so it doesn’t matter if a chemotherapy only extends life by three weeks and it costs $400,000. We’re going to give it to people anyway.”
That encourages all the wrong kind of innovation.
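[For scale: three weeks is 3/52 of a year, so a $400,000 treatment that buys three extra weeks implies a price of roughly $400,000 × 52/3 ≈ $6.9 million per year of life gained; that implicit price is what Levitt argues the current system never forces anyone to confront.]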
Look, I love my own life. I love it more than anything, and as many resources as I have, if I’m facing death, I’m probably willing to spend that money to try to keep me alive. But as much as I like you, Dubner, I don’t like your life infinitely, right? I probably wouldn’t drain every penny of my savings account to prolong your life by six months.
DUBNER: For six months? Let’s say a year. Let’s say I’m dying, and as of today, I know that I’m going to die one year from now. But I could get two years with the right interventions that will be very costly. How much of your net worth would you spend, Levitt?
LEVITT: Now, are you going to spend the year writing a great book with me? What are you going to do? Just enjoy life, play golf?
DUBNER: Oh, so there’s some self-interest here.
LEVITT: Aside from self-interest, purely out of my deep love for you? I might spend 5 or 10 percent of my wealth to keep you alive for a year.
DUBNER: That’s not very much. That’s not very much at all.
LEVITT: That’s a lot to me. That’s a lot.
DUBNER: Oh my God, that’s less than a sales tax.
LEVITT: How about a pure stranger? If someone said, “You’re the only person who could save this other person,” what share of your wealth would you give to give a complete and total stranger an extra year of life? Next to nothing, right?
DUBNER: I’d have to say next to nothing, yeah.
LEVITT: Yeah, because there’s too many total other strangers. The problem is that the way that we’ve organized society is that none of us really care very much about anyone else. But the idea is that if we don’t care about anyone else, then we know that no one cares about us either, so we have to pass laws that say that the government, society, health care — we have to be taken care of. We have to be saved. But it’s actually the wrong way to think about the problem from an economic perspective. Look, I’m not saying the market is the only thing that works, or the greatest thing, but we’ve accepted it as the way that we live our lives.
I believe that markets should — or maybe “should” is the wrong word — will eventually have to function more, as health care gets increasingly expensive and the approach we’ve taken now becomes less and less feasible. A different organization of health care delivery and of decision making about life, to me, is really central to making progress.
DUBNER: I hear you. I’m still a little hung up on the fact that you’re only going to spend 5 percent of your net worth on extending my life. On the other hand, it’s only a year.
LEVITT: Wait, what are you going to do for me? If I’m dying?
DUBNER: Well, same question? One extra year? Look, it’s free for me to say, so I’ll say 90 percent. Would I actually do that?
LEVITT: Such a joke.
DUBNER: But here’s the deal about your 5 percent offer: if you lost 5 percent of your net worth overnight — which is possible; a bad day in the stock market and at the horse track could do it — you would barely notice. It would not affect your daily life at all, I would argue. If you lost me overnight, I would like to think, you would at least know that something happened. So—.
LEVITT: I’d know by all the free time I would have. I would constantly realize you were gone.
DUBNER: Actually, maybe we have the arrow going in the wrong direction. Maybe you’d be willing to pay for me to get offed.
Thanks to John Brockman at Edge.org and all our guests today: Steve Levitt, Michael Norton, Sam Arbesman, Paul Bloom, Alun Anderson, Alan Alda, Douglas Rushkoff, Azra Raza, Emanuel Derman, Seth Lloyd, Laurie Santos, and Sarah-Jayne Blakemore. Thanks to Christopher Werth for his excellent production work on this episode. And most of all, thanks to you for listening. I’m guessing you may have something to say about all the ideas sentenced to death here today, so tell us what you’re thinking. You can find us on Twitter and Facebook. And here’s an idea that isn’t worth killing off — subscribing to Freakonomics Radio. Just go to Apple Podcasts or wherever else you get your podcasts, find that subscribe button, and we will sneak into your podcast-listening device every Wednesday at midnight Eastern Time and deliver a fresh episode. For free. You’re welcome.