Episode Transcript
Hey there, it’s Stephen Dubner. Today, a holiday treat — a bonus episode from People I (Mostly) Admire, one of the other shows we make here at the Freakonomics Radio Network. It’s an interview show hosted by Steve Levitt, my Freakonomics friend and co-author, who is now an economics professor emeritus at the University of Chicago. On this episode, Levitt interviews David Eagleman, a neuroscientist, entrepreneur, and author of several books, including Livewired: The Inside Story of the Ever-Changing Brain. It is a fascinating conversation; you’re going to love it. To hear more conversations like this, follow People I (Mostly) Admire in your podcast app. Okay, that’s it from me — here is Steve Levitt.
* * *
I love podcast guests who change the way I think about some important aspect of the world. A great example is my guest today, David Eagleman. He’s a Stanford neuroscientist whose work on brain plasticity has completely transformed my understanding of the human brain and its possibilities.
David EAGLEMAN: The human brain is about three pounds, it’s locked in silence and darkness, it has no idea where the information is coming from because everything is just electrical spikes and also chemical releases as a result of those spikes. And so what you have in there is this giant symphony of electrical activity going on, and its job is to create a model of the outside world.
Welcome to People I (Mostly) Admire, with Steve Levitt.
According to Eagleman, the brain is constantly trying to predict the world around it. But of course the world is unpredictable and surprising. So the brain is constantly updating its model. The capacity of our brains to be ever changing is usually referred to as plasticity. But Eagleman offers another term: livewired. That’s where our conversation begins.
* * *
David EAGLEMAN: “Plasticity” is the term used in the field because the great neuroscientist — or psychologist, actually — William James coined the term because he was impressed with the way that plastic gets manufactured, where you mold it into a shape and it holds onto that shape. And he thought, that’s kind of like what the brain does. The great trick that Mother Nature figured out was to drop us into the world half-baked. If you look at the way an alligator drops into the world, it essentially is pre-programmed. It eats, mates, sleeps, does whatever it’s doing. But we spend our first several years absorbing the world around us based on our neighborhood and our moment in time and our culture and our friends and our universities. We absorb all of that, such that we can then springboard off of that and create our own things. There are many things that are essentially pre-programmed in us, but we are incredibly flexible, and that is the key about livewiring. When I ask you to think of the name of your fifth-grade teacher, you might be able to pull that up, even though it’s been years since you saw that fifth-grade teacher, but somehow there was a change made in your brain and that stayed in place. We’ve got 86 billion neurons. Each neuron is as complicated as a city. This entire forest of neurons every moment of your life is changing. It’s reconfiguring. It’s strengthening connections here and there. It’s actually unplugging over here and replugging over there. And so that’s why I’ve started to feel that the term “plasticity” is maybe underreporting what’s going on. And so that’s why I made up the term “livewiring.”
LEVITT: When I went to school, I feel like they taught me the brain was organized around things like senses and emotions, that there were these different parts of the brain that were good for those things. But you make the case that there’s a very different organization of the brain.
EAGLEMAN: It is organized around the senses, but the interesting thing is that the cortex, this wrinkly outer bit, is actually a one-trick pony. It doesn’t matter what you plug in. It’ll say, “Okay, got it. I’ll just wrap myself around that data and figure out what to do with that data.” It turns out that in almost everybody, you have functioning eyeballs that plug into the back of the head, and so we end up calling the back part of the brain the visual cortex. We call this part the auditory cortex, and this the somatosensory cortex that takes in information from the body and so on. So what you learned back in high school or college is correct, most of the time. But what it overlooks is the fact that the brain is so flexible. If a person goes blind or is born blind, that part of the brain that we’re calling the visual cortex, that gets taken over by hearing, by touch, by other things. And so it’s no longer a visual cortex. The same neurons that are there are now doing a totally different job.
LEVITT: So let me pose a question to listeners: Imagine you have a newborn baby. And he or she looks absolutely flawless on the outside, but then upon examination, the doctors discover that half of his or her brain is just missing. A complete hemisphere of the brain, it’s never developed, it’s just empty space. I would expect that would be a fatal defect, or, at best, that the child would grow up profoundly mentally disabled.
EAGLEMAN: Turns out the kid will be just fine. You can be born without half the brain, or you can do what’s called a hemispherectomy, which happens to children who have something called Rasmussen’s encephalitis, which is a form of epilepsy that spreads from one hemisphere to the other — the surgical intervention for that is to remove half the brain. You can just imagine as a parent, the horror you would feel if your child had to go in for something like that. But you know what? Kid’s just fine. I can’t take my laptop and rip out half the motherboard, and expect it to still function. But with a brain, with a livewired system, it’ll work.
LEVITT: So I first came to your work because I was so blown away by the idea of human echolocation, only to discover that echolocation is only the tip of the iceberg. But could you talk just a bit about echolocation and how quickly, with training, it can start to substitute for sight?
EAGLEMAN: So it turns out that blind people can make all kinds of sounds — either with their mouth, like clicking or the tip of their cane or snapping their fingers, anything like this — and they can get really good at determining what is coming back as echoes and figure out, “Oh, okay, this is an open space in front of me. Here, there’s something in front of me. It’s probably a parked car. And oh, there’s a little gap between two parked cars here, so I can go in here.” The key is the visual part of the brain is no longer being used, because for whatever reason, there’s no information coming down those pipelines anymore. So that part of the brain is taken over by audition, by hearing, and by touch and other things. What happens is that the blind person becomes really good at these other things because they’ve just devoted more real estate to it. And as a result, they can pick up on all kinds of cues that would be very difficult for me and you, because our hearing just isn’t that good.
LEVITT: And then in these studies, you put a blindfold on a person for two or three days, and you try to teach them echolocation. If I understand correctly, even over that time scale, the echolocation starts taking over the visual part of the brain. Is that a fair assessment?
EAGLEMAN: That is exactly right. This was my colleagues at Harvard. They did this over the course of five days. They demonstrated that people could get really good at — there are actually a number of studies like this — they can get really good at reading braille. They can do things like echolocation. And the speed of it was sort of the surprise. But the real surprise for me came along when they blindfolded people tightly and put them in the brain scanner and they were making sounds or touching the hand. And they were starting to see activity in the visual cortex after 60 minutes of being blindfolded.
LEVITT: So in your book, you talk about REM sleep. And honestly, if I had sat down and tried to come up with an explanation of REM sleep, I could have listed 1,000 ideas. Your pet theory would not be one of them. So explain what REM sleep is and then tell me why you think we do it.
EAGLEMAN: REM sleep is rapid eye movement sleep. We have this every night, about every 90 minutes. And that’s when you dream. So if you wake someone up when their eyes are moving rapidly, and you say, “Hey, what are you thinking about?” They’ll say, “Well, I was just riding a camel across a meadow.” But if you wake them up at other parts of their sleep, they typically won’t have anything going on. So that’s how we know we dream during REM sleep. But here’s the key: My student and I realized that at nighttime, when the planet rotates — we spend half our time in darkness. And obviously, we’re very used to this electricity-blessed world, but think about this in historical time, over the course of hundreds of millions of years. It’s really dark. I mean, half the time, you are in blackness. Now, you can still hear and touch and taste and smell in the dark, but the visual system is at a disadvantage whenever the planet rotates into darkness. And so, given the rapidity with which other systems can encroach on that, what we realized is, it needs a way of defending itself against takeover every single night. And that’s what dreams are about. So what happens is you have these midbrain mechanisms that simply blast random activity into the visual cortex every 90 minutes during the night. And when you get activity in the visual cortex, you say, “Oh, I’m seeing things.” And because the brain is a storyteller, you can’t activate all the stuff without feeling like there’s a whole story going on there. But the fascinating thing is, when you look at this circuitry carefully, it’s super specific, much more specific than almost anything else in the brain. It’s only hitting the primary visual cortex and nothing else. And so that led us to a completely new theory about dreams. We studied 25 different species of primates, and we looked at the amount of REM sleep they have every night, and we also looked at how plastic they are as a species. It turns out that the amount of dream sleep that a creature has exactly correlates with how plastic they are. Which is to say: if your visual system is in danger of getting taken over because your brain is very flexible, then you have to have more dream sleep. And by the way, when you look at human infants, they have tons of dream sleep at the beginning, when their brains are very plastic. And as they age, the amount of dream sleep goes down.
LEVITT: Have you convinced the sleep scientists this is true, or is this just you believing it right now?
EAGLEMAN: At the moment, there are 19 papers that have cited this and discuss this, and I think it’s right. I mean, look, everything can be wrong, everything is provisional, but it’s the single theory that is quantitative. It’s the single theory about dreams that says not only, “here is an idea for why we dream,” but we can compare across species and the predictions match exactly. No one would have suspected that you’d see a relationship between, you know, how long it takes you to walk or reach adolescence and how much dream sleep you have. But it turns out, that is spot on.
LEVITT: So we talked about echolocation, which uses sound to accomplish tasks that are usually done by vision. And you’ve started a company called Neosensory, which uses touch to accomplish tasks that are usually done with hearing. Can you explain the science behind that?
EAGLEMAN: Given that all the data running around in the brain is just data and the brain doesn’t know where it came from — all it knows is, “Oh, here are electrical spikes,” and it tries to figure out what to do with it — I got really interested in this idea of sensory substitution, which is: can you push information into the brain via an unusual channel? Originally we built a vest that was covered with vibratory motors and we captured sound for people who are deaf. So the vest captures sound, breaks it up from high to low frequency, and you’re feeling the sound on your torso. By the way, this is exactly what the inner ear does. It breaks up sound from high to low frequency and ships that off to the brain. So we’re just transferring the inner ear to the skin of the torso. And it worked. People who are deaf could come to hear the world that way. So I spun this out of my lab as a company, Neosensory, and we shrunk the vest down to a wristband. And we’re on the wrists of deaf people all over the world. The other alternative for somebody who’s deaf is a cochlear implant, which is an invasive surgery. This is much cheaper and does as good a job.
LEVITT: Just to make sure I understand it — sounds happen and this wristband hears the sounds, and then shoots electrical impulses into your wrist that correspond to the high and low frequency?
EAGLEMAN: It’s actually just a vibratory motor. So it’s just like the buzzer in your cell phone, but we have a string of these buzzers all along your wrist. And we’re actually taking advantage of an illusion, which is if I have two motors next to each other and I stimulate them both, you will feel one virtual point right in between. And as I change the strength of those two motors relative to each other, I can move that point around. So we’re actually stimulating 128 virtual points along the wrist.
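To make the mechanism concrete, here is a minimal sketch, in Python, of the two ideas Eagleman describes: splitting incoming sound into frequency bands the way the inner ear does, and driving two neighboring motors at complementary strengths so the wearer feels a single “virtual” point between them. The band count, frequency range, and function names are illustrative assumptions, not Neosensory’s actual signal chain.

```python
import numpy as np

def band_energies(audio_frame, sample_rate, n_bands=8, f_lo=50.0, f_hi=8000.0):
    """Split one frame of audio into log-spaced frequency bands (a rough stand-in
    for the inner ear's high-to-low decomposition) and return each band's energy."""
    spectrum = np.abs(np.fft.rfft(audio_frame))
    freqs = np.fft.rfftfreq(len(audio_frame), d=1.0 / sample_rate)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def virtual_point(position, strength, n_motors=4):
    """Drive the two motors nearest to `position` (0.0 to 1.0 along the wrist)
    at complementary strengths, so the wearer feels one point in between."""
    x = position * (n_motors - 1)
    left = int(np.floor(x))
    right = min(left + 1, n_motors - 1)
    w = x - left                      # how far the point sits toward the right motor
    intensities = np.zeros(n_motors)
    intensities[left] += strength * (1.0 - w)
    intensities[right] += strength * w
    return intensities

# Example: map the loudest frequency band to a position along the wrist.
frame = np.random.randn(1024)                       # stand-in for one microphone frame
energies = band_energies(frame, sample_rate=16000)
band = int(np.argmax(energies))
drive = virtual_point(band / (len(energies) - 1), strength=energies[band])
```

The point of the interpolation step is exactly what Eagleman notes: a handful of physical motors can address many more perceived locations than motors actually present.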
LEVITT: Do people train? You give them very direct feedback? Or is it more organic?
EAGLEMAN: Great question. It started off where we were doing a lot of training on people. And what we realized is, it’s all the same if we just let it be organic. The key is we just encourage people, “Be in the world.” And that’s it. You see the dog’s mouth moving and you feel the barking on your wrist, or you close the door and you feel that on your wrist, or you say something — you know, most deaf people can speak, and they know what their motor output is, and they’re feeling the input.
LEVITT: Oh, because they’re hearing their own voice for the first time through this. Oh God, yeah, that’s interesting.
EAGLEMAN: And by the way, that’s how you learned how to use your ears too. You know, when you’re a baby, you’re watching your mother’s mouth move and you’re hearing data coming in your ears and you clap your hands together and you hear something in your ears. It’s the same idea. You’re just training up correlations in the brain about, “Oh, this visual thing seems to always go with that auditory stimulus.”
LEVITT: So then, it seems like if I’m deaf and I see the dog’s mouth moving, and I now associate that with the sound, do the people say that they hear the sound where the dog is? Or is the sound coming from their wrist?
EAGLEMAN: For the first few months, you’re hearing it on your wrist. You can get pretty good at these correlations. But then after about six months, if I ask somebody, “When the dog barks, do you feel something on your wrist, and you think, ‘Okay, what was that? Oh, that must’ve been a dog bark,’ and then you look for the dog?” And they say, “No, I just hear the dog out there.” And that sounds so crazy, but remember that’s what your ears are doing. Your ears are capturing vibrations of the eardrum that move from the middle ear to the inner ear, break up into different frequencies, and go off to your brain, to your auditory cortex. It’s this giant pathway of things. And yet, even though you’re hearing my voice right now inside your head, you think I’m somewhere else. And that’s exactly what happens, irrespective of how you feed the data in.
LEVITT: So you also have a product that helps with tinnitus. Could you explain both what that is, and how your product helps?
EAGLEMAN: So tinnitus is a ringing in the ears. It’s like “beep.” And about 15 percent of the population has this. And for some people it’s really, really bad. It turns out there is a mechanism for helping with tinnitus, which has to do with playing tones and then matching that with stimulation on the skin. People wear the wristband, it’s exactly the same wristband, but we have the phone play tones, “boop, boop, boop, boop, boop, boop.” And you’re feeling that all over your wrist and you just do that for 10 minutes a day. And it drives down the tinnitus. Now, why does that work? There are various theories on this, but I think the simplest version is that your brain is figuring out, “Okay, real sounds always cause this correlating vibration on my wrist, but a fake sound, you know, this thing in my head, that doesn’t have any verification on the wrist. And so that must not be a real sound.” So because of issues of brain plasticity, the brain just reduces the strength of the tinnitus because it learns that it’s not getting any confirmation that that’s a real world sound.
LEVITT: Now, how did you figure out that this bracelet could be used for this?
EAGLEMAN: This was discovered about a decade ago by a researcher named Susan Shore. She was using electrical shocks on the tongue. And there’s actually another company that’s spun out, called Lenire, that does this with sounds in the ear and shocks on the tongue. Their argument was that it had to be touch from the head and the neck, and I didn’t buy that at all, and that’s why I tried it with the wristband. So this was not an original idea for us, except to try it on the wrist, and it works just as well.
LEVITT: So what we’re talking about is substituting between senses. Are there other forms of this? Products that are currently available to consumers or likely to become available soon in this space?
EAGLEMAN: For people who are blind, for example, there are a few different approaches to this. One is called the BrainPort, and that’s where a blind person has a little camera on their glasses and that gets turned into little electrical stimulation on the tongue. So you’re wearing this little electro-tactile grid on your tongue, and it sort of tastes like Pop Rocks in your mouth. Blind people can get pretty good at this. They can navigate complex obstacle courses or throw a ball into a basket at a distance because they can come to see the world through their tongue — which, if that sounds crazy, it’s the same thing as seeing it through these two spheres that are embedded in your skull. It’s just capturing photons and information about them, figuring out where the edges are, and then shipping that back to the brain. The brain can figure that out. There’s also a colleague of mine who makes an app called vOICe. It uses the phone’s camera and turns that into a soundscape. So if you’re moving the camera around, you’re hearing bzzz-ooo-eee. You know, it sounds like a strange cacophony, but it doesn’t take long, even for you as a sighted person, to get used to this and say, “Oh, okay, I’m turning the visual world into sound. And it’s starting to make sense when I pass over an edge or when I zoom into something, the pitch changes, the volume changes.” There are all kinds of changes in the sound quality that tell you, “Oh yeah, now I’m getting close to something, now I’m getting far, and here’s what the world looks like in sound.”
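The general recipe behind a vOICe-style soundscape can be sketched in a few lines. The real app’s exact mapping may differ; the common approach, assumed here, is to scan the image left to right, assign higher pitches to higher rows, and let pixel brightness set loudness. The durations, frequency range, and function name below are illustrative.

```python
import numpy as np

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_lo=200.0, f_hi=4000.0):
    """Toy sonification: `image` is a 2-D array of brightness values in [0, 1].
    Columns are scanned left to right over `duration` seconds; each row gets a
    fixed pitch (top of the image = highest pitch) whose loudness follows the
    pixel's brightness."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * sample_rate / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    pitches = np.geomspace(f_hi, f_lo, n_rows)            # row 0 = top of image = high pitch
    chunks = []
    for col in range(n_cols):
        tones = np.sin(2 * np.pi * pitches[:, None] * t)  # shape (n_rows, samples_per_col)
        chunks.append((image[:, col:col + 1] * tones).sum(axis=0))
    audio = np.concatenate(chunks)
    return audio / (np.abs(audio).max() + 1e-9)           # normalize to [-1, 1]

# A bright diagonal edge produces a rising sweep as the scan crosses it.
test_image = np.eye(32)[::-1]          # diagonal from bottom-left to top-right
samples = image_to_soundscape(test_image)
```

This is the sense in which “the pitch changes, the volume changes” as the camera passes over edges: the structure of the scene becomes structure in the sound.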
Coming up after the break:
EAGLEMAN: There’s really no shortage of theoretical ideas in neuroscience, but fundamentally we don’t have enough data.
More of Steve Levitt’s conversation with David Eagleman, in this special episode of People I (Mostly) Admire.
* * *
Okay, back now to this special episode of People I (Mostly) Admire; this is my Freakonomics friend and co-author Steve Levitt in conversation with the neuroscientist David Eagleman.
LEVITT: Elon Musk’s company Neuralink has gotten a ton of attention lately. Could you explain what they’re trying to do and whether you think that’s a promising avenue to explore?
EAGLEMAN: What they’re doing is they’re putting electrodes into the brain to read from and talk to the neurons there.
LEVITT: So what we’ve been talking about so far has been sending signals to the brain. But what Neuralink is trying to do is take signals out of the brain. Is that right?
EAGLEMAN: That is correct. Everything we’ve been talking about so far with sensory substitution, that’s a way of pushing information in, and non-invasive. And what Neuralink is — you have to drill a hole in the head to get to the brain itself, but then you can do reading and writing invasively. That actually has been going on for 60 years. The language of the brain is electrical stimulation, and so with a little tiny wire, essentially, you can zap a neuron and make it pop off, or you can listen to when it’s chattering along going pop, pop, pop, pop, pop, pop, pop, pop. There’s nothing actually new about what Neuralink is doing except that they’re making a one-ton robot that sews the electrodes into the brain so it can do it smaller and tighter and faster than a neurosurgeon can. And by the way, there are a lot of great companies doing this sort of thing with electrodes. As people get access to the brain, we’re finally getting to a point — we’re not there yet — but we’re getting to a point where we’ll finally be able to push theory forward. There’s really no shortage of theoretical ideas in neuroscience, but fundamentally we don’t have enough data. Because as I mentioned, you’ve got these 86 billion neurons all doing their thing, and we have never measured what all these things are doing at the same time. So we have technologies like functional Magnetic Resonance Imaging, fMRI, which measures big blobby volumes of, “Ooh, there was some activity there and some activity there,” but that doesn’t tell us what’s happening at the level of individual neurons. We can currently measure some individual neurons, but not many of them. It’d be like if an alien asked one person in New York City, “Hey, what’s going on here?” And then tried to extrapolate to understand the entire economy of New York City and how that’s all working. So I think we’re finally getting closer to the point where we’ll have real data about, “Wow, this is what thousands or eventually hundreds of thousands or millions of neurons are actually doing in real time at the same moment.” And then we’ll be able to really get progress. I actually think the future is not in things like Neuralink, but the next level past that, which is nanorobotics. This is all theoretical right now, but I don’t think this is more than 20, 30 years off — where you do three-dimensional printing, atomically precise, you make molecular robots, hundreds of millions of these, and then you put them in a capsule and you swallow the capsule. And these little robots swim around and they go into your neurons, these cells in your brain, and from there they can send out little signals saying, “Hey, this neuron just fired.” And once we have that sort of thing, then we can say non-invasively, “Here’s what all these neurons are doing at the same time.” And then we’ll really understand the brain.
LEVITT: I’ve worn a continuous glucose monitor a few times, so you stick this thing in your arm and you leave it there for ten days. And every five minutes, it gives you a reading of your blood-glucose level. It gives you direct feedback on how your body responds to the foods you eat, also to stress or lack of sleep, that you simply don’t get otherwise. I learned more about my metabolism in 10 days than I had over the entire rest of my life combined. What you’re talking about with these nanorobots is obviously in the future, but is there anything now that I can buy and I can strap on my head — and I know it’s not going to be individual neurons — but that would allow me to get feedback about my brainwaves, and be able to learn in that same way I do with a glucose monitor?
EAGLEMAN: What we have now is EEG, electroencephalography, and there are several really good companies like Muse and Emotiv that have come out with at-home methods. You just strap this thing on your head and you can measure what’s going on with your brainwaves. The problem is that brainwaves are still pretty distant from the activity of 86 billion chattering neurons. An analogy would be if you went to your favorite baseball stadium and you attached a few microphones to the outside of the stadium and you listened to a baseball game, but all you could hear with these microphones is occasionally the crack of the bat and the roar of the crowd. And then your job is to reconstruct what baseball is, just from these few little signals you’re getting. So I’m afraid it’s still a pretty crude technology.
LEVITT: I could imagine that I would put one of these EEGs on, and I would just find some feeling I liked, bliss or peace or maybe it’s a feeling induced by drugs and alcohol, and I would be able to see what my brain patterns looked like in those states. Then I could sit around and try to work towards reproducing those same patterns. Now, it might not actually lead to anything good, but in your professional opinion, total waste of time, me trying to do that?
EAGLEMAN: The fact is, if you felt good at some moment in your life and you sat around and tried to reproduce that, I think you’d do just as well thinking about that moment and trying to put yourself in that state rather than trying to match a squiggly line.
LEVITT: You know, I’m a big believer in data though, and it seems like somebody should be building A.I. systems that are able to look at those squiggles and give me feedback. The thing that’s so hard about the brain is that we don’t get direct feedback about what’s going on, which is how the brain is so good at what it does. If the brain didn’t get feedback from the world about what it was doing, it wouldn’t be any good at predicting things. So I’m trying to find a way that I can get feedback. But it sounds like you’re saying I got to live for 20 more years if I want to hope to do that.
EAGLEMAN: I think that’s right. I mean, there’s also this very deep question about what kind of feedback is useful for you. Most of the action in your brain is happening unconsciously. It’s happening well below the surface of your awareness or your ability to access it. And the fact is that your brain works much better that way. Do you play tennis for example?
LEVITT: Not well.
EAGLEMAN: Or golf?
LEVITT: Golf, I play.
EAGLEMAN: Okay, good. So if I ask you, “Hey Steven, tell me exactly how you swing that golf club,” the more you start thinking about it, the worse you’re going to be at it. Because when consciousness starts poking around in areas where it doesn’t belong, it only makes things worse. And so it is an interesting question, what kinds of things we want to be more conscious of. I’m trying some of these experiments now, actually using my wristband, wearing EEG, and getting summarized feedback on the wrist. So I don’t have to stare at a screen, but as I’m walking around during the day, I have a sense of what’s going on with this. Or, with the smartwatch, having a sense of what’s going on with my physiology. I’m not sure yet whether it’s useful, or whether those things are unconscious because Mother Nature figured out a long time ago that it’s just as well if they remain unconscious. One thing I’m doing, which is just a wacky experiment just to try it — the smartwatch is measuring all these things, we have that data going out, but the key is that someone else, like your spouse, wears the smartwatch while you wear the wristband, so you’re feeling her physiology. And I’m trying to figure out: is it useful to be tapped into someone else’s physiology? I don’t know if this is good or bad for marriages, but —
LEVITT: What a nightmare.
EAGLEMAN: But I’m just trying to really get at this question of these unconscious signals that we experience, is it better if they’re exposed or better to not expose them?
LEVITT: What have you found empirically?
EAGLEMAN: Empirically what I found is that married couples don’t want to wear it.
LEVITT: So in my lived experience, I walk around and there’s almost non-stop chatter in my head. It’s like there’s a narrator who’s commenting on what I’m observing in the world. My particular voice does a lot of rehearsing of what I’m going to say out loud in the future and a lot of rehashing of past social interactions. Other people have voices in their head that are constantly criticizing and belittling them. But either way, there’s both a voice that’s talking and there’s also some other entity in my head that’s listening to that voice and reacting. Does neuroscience have an explanation for this sort of thing?
EAGLEMAN: In my book Incognito, the way I cast the whole thing is that the right way to think about the brain is like a team of rivals. You know, Lincoln, when he set up his presidential cabinet, he set up several rivals in it, and they were all functioning as a team. That’s really what’s going on under the hood in your head, is you’ve got all these drives that want different things all the time. So if I put a slice of chocolate cake in front of you, Steven, part of your brain says, “Oh, that’s a good energy source, let’s eat it.” Part of your brain says, “No, don’t eat it. It’ll make me overweight.” Part of your brain says, “Okay, I’ll eat it, but I’ll go to the gym tonight.” And the question is, who is talking with whom here? It’s all you. But it’s different parts of you. All these drives are constantly arguing it out. It’s, by the way, generating activity in the same parts of the brain as listening and speaking that you would normally do. It’s just internal before anything comes out.
LEVITT: Language is such an effective form of communicating and of summarizing information that, at least my impression inside my head is that a lot of this is being mediated through language. But I also have this impression that there are parts of my brain that are not very good with language. Maybe I’m crazy, but I have this working theory that the language parts of my brain have really co-opted power. The non-speaking parts of my brain, they actually feel to me like the good parts of me, the interesting parts of me, but I feel like they’re essentially held hostage by the language parts. Does that make any sense?
EAGLEMAN: Well, this might be a good reason for you to keep pursuing possible ways to tap into your brain data. And by the way, it turns out that the internal voice is on a big spectrum across the population, which is to say some people, like you, have a very loud internal radio. I happen to be at the other end of the spectrum, where I have no internal radio at all. I never hear anything in my head. That’s called anendophasia. But everyone is somewhere along this spectrum. One of the points that I’ve always really concentrated on in neuroscience is: what are the actual differences between people? Traditionally that’s been looked at in terms of disease states. But the question is: from person to person who are in the normal part of the distribution, what are the differences between us? It turns out those are manifold. So take something like how clearly you visualize when you imagine something. So if I ask you to imagine a dog running across a flowery meadow towards a cat, you might have something like a movie in your head. Other people have no image at all. They understand it conceptually, but they don’t have any image in their head. And it turns out, when you carefully study this, the whole population is smeared across the spectrum. So, our internal lives from person to person can be quite different.
LEVITT: So when you talk about this spectrum, it makes me think of synesthesia. Could you explain what that is, and how that works?
EAGLEMAN: So I’ve spent about 25 years now studying synesthesia, which has to do with the fact that some percentage of the population has a mixture of the senses. They might look at letters on a page, and that triggers a color experience for them. Or they hear music, and that causes them to see some visual. Or they put some taste in their mouth and it causes them to have a feeling on their fingertips. There are dozens and dozens of forms of synesthesia, but what they all have to do with is a cross-blending of things that are normally separate in the rest of the population.
LEVITT: And what share of the population has these patterns?
EAGLEMAN: So it’s about 3 percent of the population that has colored letters or colored weekdays or months or numbers.
LEVITT: Oh, it’s big. That’s interesting. I wouldn’t have thought it was so big.
EAGLEMAN: The crazy part is that, if you have synesthesia, it probably has never struck you that 97 percent of the population does not see the world the way that you see it. Everyone’s got their own story going on inside, and it’s rare that we stop to consider the possibility that other people do not have the same reality that we do.
LEVITT: And what’s going on in the brain?
EAGLEMAN: In the case of synesthesia, it’s just a little bit of crosstalk between two areas that, in the rest of the population, tend to be separate but neighboring. So it’s like porous borders between two countries. They just get a little bit of data leakage and that’s what causes them to have a joint sensation of something.
LEVITT: People make a big deal out of it when they talk about musicians having this, and they imply that it’s helpful, that it makes them better musicians. Do you think there’s truth to that, or is it just that if three percent of the population has this, then there are going to be some great musicians among them?
EAGLEMAN: I suspect it’s the latter, which is to say everyone loves pointing out synesthetic musicians, but no one has done a study on how many deep-sea divers have synesthesia or how many accountants have synesthesia. And so we don’t really know if it’s disproportionate among musicians.
LEVITT: So you’ve created this database of people who have the condition. And you find a pattern that is completely and totally bizarre, and that is that there’s a big bunch of people who associate the letter A with red, B with orange, C with yellow, it goes on and on. Then they start repeating at G. In general though, you don’t see any patterns at all. Like, people can connect these colors and letters in any way. Do you remember when you first found this pattern and what your thought was?
EAGLEMAN: So typically, as you said, it’s totally idiosyncratic. Each synesthete has his or her own colors for letters. So, my A might be yellow, your A is purple, and so on. And then what happened is, with two colleagues of mine at Stanford, we found in this database of tens of thousands of synesthetes that I’ve collected over the years that, starting in the late ’60s, there was some percentage of synesthetes who happened to share exactly the same colors. These synesthetes were in different locations, but they all had the same thing. And then that percentage rose to about 15 percent in the mid ’70s.
LEVITT: So when you saw this, you must have been thinking, “My god, this is important,” right?
EAGLEMAN: Exactly, right. The question is: how could these people be sharing the same pattern? What we had always suspected is that maybe there was some imprinting that happens, which is to say, there’s a quilt in your grandmother’s house that has a red A and a yellow B and a purple C and so on. But, you know, everyone has different things that they grow up with as little kids. And so it was strange that this was going on. The punchline is that we realized that these are the colors of the Fisher-Price magnet set on the refrigerators that were popular during the ’70s and ’80s and then essentially died out. And so it turns out that when I look across all these tens of thousands of synesthetes, it’s just those people who were kids in the late ’60s and ’70s and ’80s who imprinted on the Fisher-Price magnet set. And that’s their synesthesia. And then, as its popularity died out, there aren’t any more who have that particular pattern.
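The analysis Eagleman describes amounts to comparing each synesthete’s letter-to-color mapping against one reference palette and seeing which birth cohorts match it. Here is a hypothetical sketch of that comparison; the palette below simply extends the A-red, B-orange, C-yellow, repeat-at-G pattern mentioned above, and the colors assigned to D through F are assumptions for illustration, not the study’s actual data.

```python
from collections import defaultdict

# Assumed six-color cycle (repeating at G), per the pattern described above.
CYCLE = ["red", "orange", "yellow", "green", "blue", "purple"]
REFERENCE_PALETTE = {chr(ord("A") + i): CYCLE[i % 6] for i in range(26)}

def matches_palette(letter_colors, threshold=0.9):
    """True if a synesthete's reported letter->color map agrees with the
    reference palette on at least `threshold` of the letters they reported."""
    reported = [letter for letter in letter_colors if letter in REFERENCE_PALETTE]
    if not reported:
        return False
    hits = sum(letter_colors[letter] == REFERENCE_PALETTE[letter] for letter in reported)
    return hits / len(reported) >= threshold

def match_rate_by_decade(synesthetes):
    """`synesthetes` is a list of (birth_year, letter_colors) pairs; returns the
    fraction in each birth decade whose colors match the reference palette."""
    by_decade = defaultdict(list)
    for birth_year, letter_colors in synesthetes:
        by_decade[10 * (birth_year // 10)].append(matches_palette(letter_colors))
    return {decade: sum(flags) / len(flags)
            for decade, flags in sorted(by_decade.items())}
```

A spike in the match rate for people born in the late ’60s through the ’80s, and nowhere else, is the kind of signal the researchers describe finding.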
LEVITT: Now, I have to imagine that the way we teach in traditional classrooms, with a teacher or professor at a blackboard lecturing to a huge group of passive students — as a neuroscientist, that must make you cringe, right?
EAGLEMAN: It does increasingly, yes.
LEVITT: How should we teach?
EAGLEMAN: I think the next generation is going to be smarter than we are simply because of the broadness of the diet that they can consume. Whenever they’re curious about something, they jump on the internet, they get the answer straight away or from Alexa or from ChatGPT. They just get the answers. And that is massively useful for a few reasons. One is that, when you are curious about something, you have the right cocktail of neurotransmitters present to make that information stick. So if you get the answer to something in the context of your curiosity, then it’s going to stay with you. Whereas you and I grew up in an era where we had lots of just-in-case information.
LEVITT: What do you mean by that?
EAGLEMAN: Oh, you know, like just in case you ever need to know that the Battle of Hastings happened in 1066, here you go.
LEVITT: And you want to contrast that with just-in-time information. I need to know how to fix my car, and so the internet tells me, and then I can really remember it because I need it.
EAGLEMAN: That’s exactly it. And so look, you know, for all of us with kids — I know you’ve got kids, I’ve got kids — we feel like, “Oh, my kid’s on YouTube and wasting time.” There are a lot of amazing resources and things that they learn on YouTube or even on TikTok — anywhere. There’s lots of garbage, of course, but it’s better than what we grew up with. When you and I wanted to know something, we would ask our mothers to drive us down to the library and we would thumb through the card catalog and hope there was something there that wasn’t too outdated.
LEVITT: You were more ambitious than me. I would just ask my mother. And I have since learned that every single thing my mother taught me was completely wrong. But I still believe them, because of this part of the brain that locks in things that you learned long ago — I still have to fight every day against the falsehoods my mother taught me. I wish I had told her to take me to the library.
EAGLEMAN: My mother was a biology teacher and my father was a psychiatrist. And so they had all kinds of good information. I’m just super optimistic about the next generation of kids. Now, as far as how we teach, things got complicated with the advent of Google, and now it’s twice as complicated with ChatGPT. Happily, we already learned these lessons 20 years ago. What we need to do is just change the way that we ask questions of students. We can no longer just assume that a fill-in-the-blank or even just writing a paper on something is the optimal way to have them learn something, but instead they need to do interactive projects, like run little experiments with each other. And, you know, the kind of thing that you and I both love to do in our careers, which is, “Okay, go out and find this data and run this experiment, and see what happens here.” Those are the kinds of opportunities that kids will have now.
You’re listening to a special bonus episode of People I (Mostly) Admire, with Steve Levitt and the neuroscientist David Eagleman. After the break: what are large language models missing?
EAGLEMAN: It has no theory of mind. It has no physical model of the world the way that we do.
That’s coming right up, after the break.
* * *
David Eagleman is a professor, a C.E.O., the leader of a nonprofit called the Center for Science and Law, the host of TV shows on PBS and Netflix, and the founder of Possibilianism.
EAGLEMAN: Like every curious person trying to figure out what we’re doing here, what’s going on, it just feels like there are two stories. Either there’s some religion story, or there’s the story of strict atheism — which I tend to agree with, but it tends to come with this thing of, “Look, we’ve got it all figured out. There’s nothing more to ask here.” There is a middle position, which people call agnosticism, but usually that means, “I don’t know, I’m not committing to one thing or the other.” I got interested in defining this new thing that I call Possibilianism, which is to try to go out there and do what a scientist does, which is an active exploration of the possibility space. What the heck is going on here? We live in such a big and mysterious cosmos. Everything about our existence is sort of weird. Obviously the whole Judeo-Christian tradition, that’s one little point in that possibility space, or the possibility that there’s absolutely nothing and we’re just atoms and we die. But there’s lots of other possibilities and so I’m not willing to commit to one team or the other without having sufficient evidence. So that’s why I call myself a Possibilian.
LEVITT: And so in support of Possibilianism — maybe a better name could be in order — you wrote a book called Sum, that’s S-U-M, so it’s Sum: Forty Tales from the Afterlives. How do you describe the book to people?
EAGLEMAN: I call it literary fiction. It’s 40 short stories that are all mutually exclusive. They’re all pretty funny, I would like to think. But they’re also kind of gut wrenching. And what I’m doing is shining the flashlight around the possibility space. None of them are meant to be taken seriously, but what the exercise of having 40 completely different stories gives us is a sense of, “Wow, actually, there’s a lot that we don’t know here.” In some of the stories, God is a female. In some stories, God is a married couple. In some stories, God is a species of dimwitted creatures. In one story, God is actually the size of a bacterium, and doesn’t know that we exist. And in lots of stories, there’s no God at all. That book is something I wrote over the course of seven years, and it became an international bestseller. It’s really had a life to it that I wouldn’t have ever guessed.
LEVITT: When I heard about the book, I saw the subtitle and thought, “I have zero interest in reading a book about the afterlife.” I totally misunderstood what the book was about. And then I certainly didn’t understand that “Sum” was Latin.
EAGLEMAN: Sum, actually, I chose because, among other things, that’s the title story. In the afterlife, you relive your life, but all the moments that share a quality are grouped together. So you spend three months waiting in line, and you spend 900 hours sitting on the toilet, and you spend 30 years sleeping.
LEVITT: All in a row.
EAGLEMAN: Exactly. And this amount of time looking for lost items, and this amount of time realizing you’ve forgotten someone’s name, and this amount of time falling, and so on. Part of why I use the title Sum is because of the sum of events in your life like that. Part of it was because cogito ergo sum. So it ended up just being the perfect title for me, even if it did lose a couple of readers there, yeah.
LEVITT: People are super excited right now about these generative A.I. models, the large language models. What’s your take on it?
EAGLEMAN: Essentially, these artificial neural networks took off from a very simplified version of the brain, which is, “Hey, look, you’ve got units and they’re connected, and what if we can change the strength between these connections?” And in a very short time, that has now become this thing that has read everything ever written on the planet and can give extraordinary answers. But it’s not yet the brain, or anything like it. It’s just taking the very first idea about the brain and running with it. What a large language model does not have is an internal model of the world. It’s just acting as a statistical parrot. It’s saying, “Okay, given these words, what is the next word most likely to be, given everything that I’ve ever read on the planet.” And so it’s really good at that, but it has no model of the world, no physical model. And so things that a 6-year-old can answer, it is stuck on. Now, this is not a criticism of it, in the sense that it can do all kinds of amazing stuff and it’s going to change the world. But it’s not the brain yet, and there’s still plenty of work to be done to get something that actually acts like the brain.
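The “statistical parrot” idea, predicting the next word from the words so far with no model of the world behind them, can be caricatured with a toy bigram model. Real large language models use transformers trained over tokens rather than word counts, so this is only a deliberately tiny sketch of the next-word objective Eagleman describes, not how those systems are actually built.

```python
from collections import Counter, defaultdict
import random

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def next_word(model, prev):
    """Sample a next word in proportion to how often it followed `prev` in training."""
    counts = model.get(prev.lower())
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# The model has no idea what a dog is; it only knows which words tended to
# follow "the" in the text it has seen.
model = train_bigram("the dog chased the cat and the cat chased the dog")
print(next_word(model, "the"))
```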
LEVITT: Do you think that it is a solvable problem, to give these models a theory of mind, a model of the world?
EAGLEMAN: I suspect so, because there are 8.2 billion of us who have this functioning in our brains, and as far as we can tell, we’re just made of physical stuff. We’re just very sophisticated algorithms, and it’s just a matter of cracking what that algorithm is.
LEVITT: If we were to come back in 100 years, what do you think will be most different? I know it’s a hard prediction to make, but what do you see as transforming most in the areas you work in?
EAGLEMAN: The big textbook that we have in our field is called Principles of Neural Science, and it’s about 900 pages. It’s not actually principles; it’s just a data dump of all this crazy stuff we know. And in a hundred years I expect it’ll be, like, 90 pages. We’ll have things where we put big theoretical frameworks together. We say, “Ah, okay, look, all this other stuff, these are just expressions of this basic principle that we have now figured out.”
LEVITT: Do you pay much attention to behavioral economics?
EAGLEMAN: Yes, I do.
LEVITT: What do you think of it?
EAGLEMAN: Oh, it’s great. And that’s probably the direction that a lot of fields will go: how do humans actually behave? One of the big things that I find most interesting about behavioral economics comes back to this issue about the team of rivals. When people measure in the brain how we actually make decisions about whatever, there are totally separable networks going on. Some networks care about the valuation of something, the price point. You have totally other networks that care about the anticipated emotional experience about something. You have other networks that care about the social context — like, what do my friends think about this? You have mechanisms that care about short-term gratification. You have other mechanisms that are thinking about the long term — what kind of person do I want to be? All these things are battling it out under the hood. It’s like the Three Stooges, sticking each other in the eye and wrestling each other’s arms and stuff. But what’s fascinating is when you’re standing in the grocery store aisle, trying to decide, you know, which flavor of ice cream you’re going to buy, you don’t know about these raging battles happening under the hood. You just stand there for a while and then you say, “Okay, I’ll grab this one over here.”
LEVITT: There was a point in time when there was a lot of optimism among economists that we could really nail macroeconomics, inflation and interest rates and whatnot, and that we could really understand how the system worked. And I think there’s been a real step back from that. The view now is, look, it’s an enormously complex system. And we’ve really, I guess, given up in the short run. Are you at all worried that’s where we’re going with the brain?
EAGLEMAN: Oh, gosh, no. And the reason is because we’ve got all these billions of brains running around. What that tells us is it has to be pretty simple in principle. You got 19,000 genes. That’s all you’ve got. Something about it has to be as simple as falling off a log for it to work out very well, so often, billions of times.
They say as you get older, it’s important to keep challenging your brain by learning new things, like a foreign language. I can’t say I found learning German to be all that much fun, and I definitely have not turned out to be very good at it. So I’ve been looking for a new brain challenge. And I have to say, I find echolocation very intriguing. How cool would it be to be able to see via sound? I suspect, though, that my aptitude for echolocation will be on par with my aptitude for German. So if you see me covered in bruises, you’ll know why. If you want to learn more about David Eagleman’s ideas, I really enjoyed a couple of his many books, like Livewired, which talks about his brain research, and Sum: Forty Tales from the Afterlives, his book of speculative fiction.
* * *
Hey there, it’s Stephen Dubner again. I hope you enjoyed this special episode of People I (Mostly) Admire. I loved it. And I would suggest you go right now to your podcast app and follow the show: People I (Mostly) Admire. We will be back very soon with more Freakonomics Radio. Until then, take care of yourself and, if you can, someone else too.
* * *
Freakonomics Radio and People I (Mostly) Admire are produced by Stitcher and Renbud Radio. This episode was produced by Morgan Levey, with help from Lyric Bowditch and Daniel Moritz-Rabson; it was mixed by Jasmin Klinger. Our staff also includes Alina Kulman, Augusta Chapman, Dalvin Aboagye, Eleanor Osborne, Ellen Frankman, Elsa Hernandez, Gabriel Roth, Greg Rippin, Jason Gambrell, Jeremy Johnston, Jon Schnaars, Neal Carruth, Rebecca Lee Douglas, Sarah Lilley, Theo Jacobs, and Zack Lapinski. Our composer is Luis Guerra. As always, thank you for listening.
LEVITT: David, you got your QuickTime going?
EAGLEMAN: Um, I do … now!
Sources
- David Eagleman, professor of cognitive neuroscience at Stanford University and C.E.O. of Neosensory.
Resources
- Livewired: The Inside Story of the Ever-Changing Brain, by David Eagleman (2020).
- “Why Do We Dream? A New Theory on How It Protects Our Brains,” by David Eagleman and Don Vaughn (TIME, 2020).
- “Prevalence of Learned Grapheme-Color Pairings in a Large Online Sample of Synesthetes,” by Nathan Witthoft, Jonathan Winawer, and David Eagleman (PLoS One, 2015).
- Sum: Forty Tales from the Afterlives, by David Eagleman (2009).
- The vOICe app.
- Neosensory.
Extras
- “Feeling Sound and Hearing Color,” by People I (Mostly) Admire (2024).
- “What’s Impacting American Workers?” by People I (Mostly) Admire (2024).
- “This Is Your Brain on Podcasts,” by Freakonomics Radio (2016).