Episode Transcript

I love podcast guests who changed the way I think about some important aspect of the world. A great example is my guest today, David Eagleman. He’s a Stanford neuroscientist whose work on brain plasticity has completely transformed my understanding of the human brain and its possibilities.

EAGLEMAN: The key thing about the human brain is it’s about three pounds. It’s locked in silence and darkness. It has no idea where the information is coming from because everything is just electrical spikes and also chemical releases as a result of those spikes. And so what you have in there is this giant symphony of electrical activity going on, and its job is to create a model of the outside world.

Welcome to People I (Mostly) Admire, with Steve Levitt.

According to Eagleman, the brain is constantly trying to predict the world around it. But of course the world is unpredictable and surprising. So the brain is constantly updating its model. The capacity of our brains to be ever-changing is usually referred to as plasticity. But Eagleman offers another term: livewired. That’s where our conversation begins.

*      *      *

EAGLEMAN: “Plasticity” is the term used in the field because the great neuroscientist — or psychologist actually — William James coined the term because he was impressed with the way that plastic gets manufactured, where you mold it into a shape and it holds onto that shape. And he thought, that’s kind of like what the brain does. The great trick that Mother Nature figured out was to drop us into the world half baked. If you look at the way an alligator drops into the world, it essentially is pre-programmed. It eats, mates, sleeps, does whatever it’s doing. But we spend our first several years absorbing the world around us based on our neighborhood and our moment in time and our culture and our friends and our universities. We absorb all of that, such that we can then springboard off of that and create our own things. There are many things that are essentially pre-programmed in us, but we are incredibly flexible and that is the key about livewiring. When I ask you to think of the name of your fifth grade teacher, you might be able to pull that up, even though it’s been years since you saw that fifth grade teacher, but somehow there was a change made in your brain and that stayed in place. We’ve got 86 billion neurons. Each neuron is as complicated as a city. This entire forest of neurons every moment of your life is changing. It’s reconfiguring. It’s strengthening connections here and there. It’s actually unplugging over here and replugging over there. And so that’s why I’ve started to feel that the term “plasticity” in homage to plastic is maybe underreporting what’s going on. And so that’s why I made up the term “livewiring.”

LEVITT: So let me pose a question to listeners: Imagine you have a newborn baby. And he or she looks absolutely perfect and flawless on the outside, but then upon examination, the doctors discover that half of his or her brain is just missing. A complete hemisphere of the brain, it’s never developed. It’s just empty space. I would expect that would be a fatal defect, or at best, the child would be growing up profoundly mentally disabled.

EAGLEMAN: Turns out the kid will be just fine. You can be born without half the brain, or you can do what’s called a hemispherectomy, which happens to children who have something called Rasmussen’s encephalitis, which is a form of epilepsy that spreads from one hemisphere to the other — the surgical intervention for that is to remove half the brain.

LEVITT: Which is completely absurd.

EAGLEMAN: Yeah, you can just imagine as a parent, the horror you would feel if your child had to go in for something like that. But you know what? Kid’s just fine. I can’t take my laptop and rip out half the motherboard and expect it to still function. But with a brain, with a livewired system, it’ll work.

LEVITT: When I went to school, I feel like they taught me the brain was organized around things like senses and emotions, that there were these different parts of the brain that were good for those things. But you make the case that there’s a very different organization of the brain. It’s got some problems it needs to solve, like, am I in danger? How do I move my body around? Is there something else moving around me? And it takes vision or hearing and makes a prediction. Could you riff on that a little bit?

EAGLEMAN: It is organized around the senses, but the interesting thing is that the cortex, this wrinkly outer bit, is actually a one-trick pony. It doesn’t matter what you plug in. It’ll say, “Okay, got it. I’ll just wrap myself around that data and figure out what to do with that data.” It turns out that in almost everybody, you have functioning eyeballs that plug into the back of the head, and so we end up calling the back part of the brain the visual cortex. We call this part the auditory cortex, and this the somatosensory cortex that takes in information from the body and so on. So what you learned back in high school or college is correct, most of the time. But what it overlooks is the fact that the brain is so flexible. So if a person goes blind or is born blind, that part of the brain that we’re calling the visual cortex, that gets taken over by hearing, by touch, by other things. And so it’s no longer visual cortex. The same neurons that are there are now doing a totally different job. They’re involved in other processes. The flexibility of the brain is that, you know, usually it comes out this way, but fundamentally, what it’s trying to do is take whatever data comes in and say, “Okay, I got it. This helps me to figure out how to move my way through the world.”

LEVITT: So I first came to your work because I was so blown away by the idea of human echolocation only to discover that echolocation is only the tip of the iceberg. But could you talk just a bit about echolocation and how quickly with training it can start to substitute for sight?

EAGLEMAN: So it turns out that blind people can make all kinds of sounds — either with their mouth, like clicking or the tip of their cane or snapping their fingers, anything like this — and they can get really good at determining what is coming back as echoes to them and figure out, “Oh, okay, this is an open space in front of me. Here, there’s something in front of me. It’s probably a parked car. And oh, there’s a little gap between two parked cars here, so I can go in here.” The key is the visual part of the brain is no longer being used because, for whatever reason, there’s no information coming down those pipelines anymore. So that part of the brain is taken over by audition, by hearing, and by touch and other things. What happens is that the blind person becomes really good at these other things because they’ve just devoted more real estate to it. And as a result, they can pick up on all kinds of cues that would be very difficult for me and you because our hearing just isn’t that good.

LEVITT: Okay, now that all makes sense. But then in these studies, you put a blindfold on a person for two or three days, and you try to teach them echolocation. And if I understand correctly, even over that time scale of two or three days, the echolocation starts taking over the visual part of the brain. Is that a fair assessment?

EAGLEMAN: That is exactly right. These were my colleagues at Harvard. They did this over the course of five days. They demonstrated that people could get really good at — there are actually a number of studies like this — they can get really good at reading braille. They can do things like echolocation. And the speed of it was sort of the surprise. But the real surprise for me came along when, again, my colleagues at Harvard did this study where they blindfolded people tightly and put them in the brain scanner and they were doing things like making sounds or touching the hand. And they were starting to see activity in the visual cortex after 60 minutes of being blind. And that blew my mind. I couldn’t believe it was so rapid that you start to see changes in an hour.

LEVITT: So in your book, you talk about REM sleep. And honestly, if I had sat down and tried to come up with an explanation of REM sleep, I could have listed a thousand ideas. Your pet theory would not be one of them. So explain what REM sleep is and then tell me why you think we do it.

EAGLEMAN: So REM sleep is rapid eye movement sleep. We have this every night, about every 90 minutes. And that’s when you dream. So if you wake someone up when their eyes are moving rapidly, and you say, “Hey, what are you thinking about?” They’ll say, “Well, I was just riding a camel across a meadow.” But if you wake them up at other parts of their sleep, they typically won’t have anything going on. So that’s how we know we dream during REM sleep. But here’s the key. My student and I realized that at nighttime, when the planet rotates — we spend half our time in darkness. And obviously, we’re very used to this electricity-blessed world, but think about this in historical time over the course of hundreds of millions of years. It’s really dark. I mean, half the time, you are in blackness. Now, you can still hear and touch and taste and smell in the dark, but the visual system is at a disadvantage whenever the planet rotates into darkness. And so, given the rapidity with which other systems can encroach on that, what we realized is it needs a way of defending itself against takeover every single night. And that’s what dreams are about. So what happens is you have these midbrain mechanisms that simply blast random activity into the visual cortex every 90 minutes during the night. And when you get activity in the visual cortex, you say, “Oh, I’m seeing things.” And because the brain is a storyteller, you can’t activate all the stuff without feeling like there’s a whole story going on there. But the fascinating thing is, when you look at this circuitry carefully, it’s super specific, much more specific than almost anything else in the brain. It’s only hitting the primary visual cortex and nothing else. And so that led us to a completely new theory about dreams. And what we did after that is we studied 25 different species of primates, and we looked at the amount of REM sleep they have every night, and we also looked at how plastic they are. It turns out that the amount of dream sleep that a creature has exactly correlates with how plastic they are, which is to say: if your visual system is in danger of getting taken over because your brain is very flexible, then you have to have more dream sleep. And by the way, when you look at human infants, they have tons of dream sleep at the beginning when their brains are very plastic. And as they age, the amount of dream sleep goes down. But the point is it’s a way for the visual system to defend itself at night against takeover.

LEVITT: Have you convinced the sleep scientists this is true, or is this just you believing it right now?

EAGLEMAN: Right now at the moment, there are 19 papers that have cited this and discuss this. And I think it’s right. I mean, look, everything can be wrong. Everything is provisional, but it’s the single theory that is quantitative. It’s the single theory about dreams that says not only here is an idea for why we dream, but we can compare across species and the predictions match exactly — which is to say, the more plastic the species, the more dream sleep it has to have at night. No one would have thought or suspected that you’d see a relationship between, you know, how long it takes you to walk or reach adolescence and how much dream sleep you have. But it turns out, that is spot on.

LEVITT: So we talked about echolocation, which uses sound to accomplish tasks that are usually done by vision. And you’ve started a company called Neosensory, which uses touch to accomplish tasks that are usually done with hearing. Can you explain the science behind that?

EAGLEMAN: Given that all the data running around in the brain is just data and the brain doesn’t know where it came from — all it knows is, “Oh, here are electrical spikes running around,” and it tries to figure out what to do with it — I got really interested in this idea of sensory substitution, which is: can you push information into the brain via an unusual channel? So what we did originally was we built a vest that was covered with vibratory motors and we captured sound for people who are deaf. So the vest captures sound, breaks it up from high to low frequency, and you’re feeling the sound on your torso. By the way, this is exactly what the inner ear does. It breaks up sound from high to low frequency and ships that off to the brain. So we’re just transferring the inner ear to the skin of the torso. And it worked. People who are deaf could come to hear the world that way. So I spun this out of my lab as a company, Neosensory, and we shrunk the vest down to a wristband. And we’re on wrists of deaf people all over the world. The other alternative for somebody who’s deaf is a cochlear implant, an invasive surgery. This is much cheaper and does as good a job.

LEVITT: So what happens, just to make sure I understand it — sounds happen and this wristband hears the sounds and then shoots electrical impulses into your wrist that correspond to the high and low frequency?

EAGLEMAN: It’s actually just a vibratory motor. So it’s just like the buzzer in your cell phone, but we have a string of these buzzers all along your wrist. And we’re actually taking advantage of an illusion, which is if I have two motors next to each other and I stimulate them both, you will feel one virtual point right in between. And as I change the strength of those two motors relative to each other, I can move that point around. So we’re actually stimulating 128 virtual points along the wrist.
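
For readers curious about the mechanics of that trick, here is a minimal sketch of how two neighboring motors driven at complementary strengths can produce a perceived point between them. The motor count, names, and scaling below are illustrative assumptions; only the 128 virtual points and the two-motor weighting come from the conversation.

```python
# A minimal sketch of the "virtual point" idea: drive two neighboring
# vibration motors at complementary strengths so the wearer feels a single
# point somewhere between them. NUM_MOTORS and all names here are assumptions
# for illustration; the 128 virtual points figure comes from the interview.

NUM_MOTORS = 4            # assumed number of physical motors on the band
NUM_VIRTUAL_POINTS = 128  # "128 virtual points along the wrist"

def motor_drive_levels(virtual_index: int, strength: float = 1.0) -> list[float]:
    """Return a drive level (0..1) for each physical motor that places a
    perceived vibration at the requested virtual point."""
    if not 0 <= virtual_index < NUM_VIRTUAL_POINTS:
        raise ValueError("virtual point out of range")
    # Position of the virtual point along the strip of motors, in motor units.
    pos = virtual_index / (NUM_VIRTUAL_POINTS - 1) * (NUM_MOTORS - 1)
    left = int(pos)                        # physical motor on one side
    right = min(left + 1, NUM_MOTORS - 1)  # its neighbor on the other side
    frac = pos - left                      # how far toward the neighbor we are
    levels = [0.0] * NUM_MOTORS
    # Complementary weighting: as one motor weakens and its neighbor
    # strengthens, the felt location slides between them.
    levels[left] += strength * (1.0 - frac)
    levels[right] += strength * frac
    return levels

# Example: a virtual point roughly a third of the way along the wrist.
print(motor_drive_levels(40))
```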

LEVITT: And how do people learn? Do people train? You give them very direct feedback? Or is it more organic?

EAGLEMAN: Great question. It started off where we were doing a lot of training on people. And what we realized is it’s all the same if we just let it be organic. The key is we just encourage people, “Be in the world.” And that’s it. So, you see the dog’s mouth moving and you feel the barking on your wrist, or you close the door and you feel that on your wrist, or you say something — you know, most deaf people can speak, and they know what their motor output is and they’re feeling the input.

LEVITT: They’re hearing their own voice for the first time through this. Oh God, yeah, that’s interesting.

EAGLEMAN: And by the way, that’s how you learned how to use your ears too. You know, when you’re a baby, you’re watching your mother’s mouth move and you’re hearing data coming in your ears and you, you know, clap your hands together and you hear something in your ears. It’s the same idea. You’re just training up correlations in the brain about, “Oh, this visual thing seems to always go with that auditory stimulus.”

LEVITT: So then, it seems like if I’m deaf and I see the dog’s mouth moving, and I now associate that with the sound, do the people say that they hear the sound where the dog is? Or is the sound coming from their wrist?

EAGLEMAN: For the first few months, you’re hearing it on your wrist. You can get pretty good at these correlations. But then after about six months, if I ask somebody, “Look, when the dog barks, do you feel something on your wrist? And you think, ‘Okay, what was that? Oh, that must’ve been a dog bark,’ and then you look for the dog?” And they say, “No, I just hear the dog out there.” And that sounds so crazy, but remember that’s what your ears are doing. Your ears are capturing vibrations at the eardrum; that moves from the middle ear to the inner ear, breaks up into different frequencies, goes off to your brain, goes to your auditory cortex. It’s this giant pathway of things. And yet, even though you’re hearing my voice right now inside your head, you think I’m somewhere else. And that’s exactly what happens, irrespective of how you feed the data in.

LEVITT: So you also have a product that helps with tinnitus. Could you explain both what that is and how your product helps?

EAGLEMAN: So tinnitus is a ringing in the ears. It’s like [“beep”]. And about 15 percent of the population has this. And for some people it’s really, really bad. It turns out there is a mechanism for helping with tinnitus, which has to do with playing tones and then matching that with stimulation on the skin. So people wear the wristband, it’s exactly the same wristband, but we have the phone play tones, “boop, boop, boop, boop, boop, boop.” And you’re feeling that all over your wrist and you just do that for 10 minutes a day. And it drives down the tinnitus. Now, why does that work? There are various theories on this, but I think the simplest version is that your brain is figuring out, “Okay, real sounds always cause this correlating vibration on my wrist, but a fake sound, [“beep”] you know, this thing in my head, that doesn’t have any verification on the wrist. And so that must not be a real sound.” So because of issues of brain plasticity, the brain just reduces the strength of the tinnitus because it learns that it’s not getting any confirmation that that’s a real world sound.

LEVITT: Now, how did you figure out that this bracelet could be used for this?

EAGLEMAN: This was discovered about a decade ago by a researcher named Susan Shore. And she was using electrical shocks on the tongue. There’s actually another company that’s spun out, called Lenire, that does this with sounds in the ear and shocks on the tongue. Their argument was that it had to be touch from the head and the neck, and I didn’t buy that at all, and that’s why I tried it with the wristband. So this was not an original idea for us, except to try this on the wrist, and it works just as well.

LEVITT: So what we’re talking about is substituting between senses. Are there other forms of this? Products that are currently available to consumers or likely to become available soon in this space?

EAGLEMAN: For people who are blind, for example, there are a few different approaches to this. One is called the BrainPort and that’s where, for a blind person, they have a little camera on their glasses and that gets turned into little electrical stimulation on the tongue. So you’re wearing this little electro-tactile grid on your tongue and it tastes like pop rocks sort of in your mouth. Blind people can get pretty good at this. They can navigate complex obstacle courses or throw a ball into a basket at a distance because they can come to see the world through their tongue — which, if that sounds crazy, it’s the same thing as seeing it through these two spheres that are embedded in your skull. It’s just capturing photons and information about them, figuring out where the edges are, and then shipping that back to the brain. The brain can figure that out. There’s also a colleague of mine who makes an app called vOICe. It uses the phone’s camera and it turns that into a soundscape. So if you’re moving the camera around, you’re hearing bzzz-ooo-eee. You know, it sounds like a strange cacophony, but it doesn’t take long, even for you as a sighted person, to get used to this and say, “Oh, okay, I’m turning the visual world into sound. And it’s starting to make sense when I pass over an edge or when I zoom into something, the pitch changes, the volume changes.” There’s all kinds of changes in the sound quality that tell you, “Oh yeah, now I’m getting close to something. Now I’m getting far and here’s what the world looks like in sound.”

After a short break, David Eagleman and I return to talk about how we can get better data about our own brains.

*      *      *

LEVITT: So you’re the C.E.O. of Neosensory.

EAGLEMAN: Yep, that’s right.

LEVITT: What do you like better, being an academic or a C.E.O.? They’re very different activities that have very different skill sets.

EAGLEMAN: Being an academic. And the reason is I’m fundamentally an ideator and being an operator is something that is not quite as much fun for me. I mean, there are many aspects to it that I do enjoy and I think I’m not bad at it, but I don’t love it as much as spending 100 percent of my time doing the real creative work. In the future what I’m going to be doing is kicking off companies and finding the right team of people who will do the operations.

LEVITT: Elon Musk’s company Neuralink has gotten a ton of attention lately. Could you explain what they’re trying to do and whether you think that’s a promising avenue to explore?

EAGLEMAN: What they’re doing is they’re putting electrodes into the brain to read from and talk to the neurons there.

LEVITT: So what we’ve been talking about so far has been sending signals to the brain. But what Neuralink is trying to do is take signals out of the brain. Is that right?

EAGLEMAN: That is correct. Everything we’ve been talking about so far with sensory substitution, that’s a way of pushing information in, and non-invasive. And what Neuralink is — you have to drill a hole in the head to get to the brain itself, but then you can do reading and writing invasively. That actually has been going on for 60 years. The language of the brain is electrical stimulation, and so with a little tiny wire, essentially, you can zap a neuron and make it pop off, or you can listen to when it’s chattering along going pop, pop, pop, pop, pop, pop, pop, pop. There’s nothing actually new about what Neuralink is doing except that they’re making a one-ton robot that sews the electrodes into the brain so it can do it smaller and tighter and faster than a neurosurgeon can. And by the way, there are a lot of great companies doing this sort of thing with electrodes. As people get access to the brain, we’re finally getting to a point — we’re not there yet — but we’re getting to a point where we’ll finally be able to push theory forward. There’s really no shortage of theoretical ideas in neuroscience, you know, maybe the brain works this way or that way, but fundamentally we don’t have enough data. Because as I mentioned, you’ve got these 86 billion neurons all doing their thing, and we have never measured what all these things are doing at the same time. So we have technologies like functional Magnetic Resonance Imaging, fMRI, which measures big blobby volumes of, “Oh, there was some activity there and some activity there,” but that doesn’t tell us what’s happening at the level of individual neurons. We can currently measure some individual neurons, but not many of them. It’d be like if an alien asked one person in New York City, “Hey, what’s going on here?” And then tried to extrapolate to understand the entire economy of New York City and how that’s all working. So I think we’re finally getting closer to the point where we’ll have real data about, “Wow, this is what thousands or eventually hundreds of thousands or millions of neurons are actually doing in real time at the same moment.” And then we’ll be able to really get progress. I actually think the future is not in things like Neuralink, but the next level past that, which is going to be nanorobotics. This is all theoretical right now, but I don’t think this is more than 20, 30 years off — where you do three-dimensional printing, atomically precise, you make molecular robots, hundreds of millions of these, and then you put them in a capsule and you swallow the capsule. And these little robots swim around and they go into your neurons, these cells in your brain, and from there they can send out little signals saying, “Hey, this neuron just fired.” And once we have that sort of thing, then we can say non-invasively, “Here’s what all these neurons are doing at the same time.” And then we’ll really understand the brain.

LEVITT: I’ve worn a continuous glucose monitor a few times, so you stick this thing in your arm and you leave it there for ten days. And every five minutes, it gives you a reading of your blood-glucose level. And it gives you direct feedback on how your body responds to the foods you eat, also to stress or lack of sleep, that you simply don’t get otherwise. And I learned more about my metabolism in 10 days than I had over the entire rest of my life combined. What you’re talking about with these nanorobots is obviously in the future, but is there anything now that I can buy and I can strap on my head — and I know it’s not going to be individual neurons — but that would allow me to get feedback about my brainwaves and be able to learn in that same way I do with a glucose monitor?

EAGLEMAN: What we have now is EEG, electroencephalography, and there are several really good companies like Muse and Emotiv that have come out with at-home methods. You just strap this thing on your head and you can measure what’s going on with your brainwaves. The problem is that brainwaves are still pretty distant from the activity of 86 billion chattering neurons. An analogy would be if you went to your favorite baseball stadium and you attached a few microphones to the outside of the stadium and you listened to a baseball game, but all you could hear with these microphones is occasionally the crack of the bat and the roar of the crowd. And then your job is to reconstruct what baseball is just from these few little signals you’re getting. So I’m afraid it’s still a pretty crude technology.

LEVITT: I could imagine that I would put one of these EEGs on and I would just find some feeling I liked, bliss or peace or maybe it’s a feeling induced by drugs and alcohol, and I would be able to see what my brain patterns look like in those states. Then I could sit around and try to work towards reproducing those same patterns. Now, it might not actually lead to anything good, but in your professional opinion, total waste of time, me trying to do that?

EAGLEMAN: The fact is, if you felt good at some moment in your life and you sat around and tried to reproduce that, I think you’d do just as well thinking about that moment and trying to put yourself in that state rather than trying to match a squiggly line.

LEVITT: You know, I’m a big believer in data though, and it seems like somebody should be building A.I. systems that are able to look at those squiggles and give me feedback — I guess what I’m looking for is feedback. The thing that’s so hard about the brain is that we don’t get direct feedback about what’s going on. And feedback is how the brain gets so good at what it does: if the brain didn’t get feedback from the world about what it was doing, it wouldn’t be any good at predicting things. So I’m trying to find a way that I can get feedback. But it sounds like you’re saying I’ve got to live for 20 more years if I want to hope to do that.

EAGLEMAN: I think that’s right. I mean, there’s also this very deep question about what kind of feedback is useful for you. So most of the action in your brain is happening unconsciously. It’s happening well below the surface of your awareness or your ability to access it. And the fact is that your brain works much better that way. Do you play tennis for example?

LEVITT: Not well.

EAGLEMAN: Or golf?

LEVITT: Golf I play.

EAGLEMAN: Okay, good. So if I ask you, “Hey Steven, tell me exactly how you swing that golf club,” the more you start thinking about it, the worse you’re going to be at it because consciousness, when it starts poking around in areas where it doesn’t belong, it’s only going to get worse. And so it is an interesting question about the kind of things that we want to be more conscious of. I’m trying some of these experiments now, actually using my wristband, wearing EEG, and getting summarized feedback on the wrist. So I don’t have to stare at a screen, but as I’m walking around during the day, I have a sense of what’s going on with this. Or, with the smartwatch, having a sense of what’s going on with my physiology. I’m not sure yet whether it’s useful or whether those things are unconscious because Mother Nature figured out a long time ago that it’s just as well if it remains unconscious. And so, one thing I’m doing, which is just a wacky experiment just to try it — the smartwatch is measuring all these things. We have that data going out, sending a signal to the internet and then to the wristband, but the key is you have someone else, like your spouse, wear the smartwatch and you’re feeling her physiology. And I’m trying to figure out, is it useful to be tapped into someone else’s physiology? I don’t know if this is good or bad for marriages, but —

LEVITT: Oh my God. What a nightmare.

EAGLEMAN: Exactly. But I’m just trying to really get at this question of these unconscious signals that we experience: is it better if they’re exposed or better to not expose them?

LEVITT: What have you found empirically? 

EAGLEMAN: Empirically what I found is that married couples don’t want to wear it. I mean —

LEVITT: You need couples that are just falling in love ’cause they want to know more and more about each other. The married couples, I think, are the wrong target.

EAGLEMAN: Right, exactly.

LEVITT: So you’ve actually done it? What is the experience like?

EAGLEMAN: The problem is what you’re feeling is all kinds of summarized issues about your loved one’s heart rate and heart rate variability and galvanic skin response. But bodies are complicated because life is complicated. So in any given moment, you know, someone pulls in front of her in traffic and I feel that, maybe I’m across the nation but the data’s going across the internet so I can feel it anytime, but who knows why? Or she hears something funny on the radio and has a different signal blip. And I feel that, but it’s very difficult to reconstruct what is happening back in California.

LEVITT: How about when you’re together? Have you done it when you’re sitting in the same room and you can see her laughing and you can feel it?

EAGLEMAN: We haven’t really explored this as much as we should, because it’s like a little bit of an invasion of privacy. We’re so used to having most of our signals hidden from the world. No matter how open we feel we are as people, we’re actually mostly closed. Like all of your internal signals, other people don’t get to see that.

LEVITT: So in my lived experience, I walk around and there’s almost non-stop chatter in my head. It’s like there’s a narrator who’s commenting on what I’m observing in the world. My particular voice does a lot of rehearsing of what I’m going to say out loud in the future and a lot of rehashing of past social interactions. Other people have voices in their head that are constantly criticizing and belittling them. But either way, there’s both a voice that’s talking and there’s also some other entity in my head that’s listening to that voice and reacting. Does neuroscience have an explanation for this sort of thing?

EAGLEMAN: In my book Incognito, the way I cast the whole thing is that the right way to think about the brain is like a team of rivals. You know, Lincoln, when he set up his presidential cabinet, he set up several rivals in it, and they were all functioning as a team. That’s really what’s going on under the hood in your head is you’ve got all these drives that want different things all the time. So if I put a slice of chocolate cake in front of you, Steven, part of your brain says, “Oh, that’s a good energy source. Let’s eat it.” Part of your brain says, “No, don’t eat it. It’ll make me overweight.” Part of your brain says, “Okay, I’ll eat it, but I’ll go to the gym tonight.” And the question is, who is talking with whom here? It’s all you. But it’s different parts of you, all these drives that are constantly arguing it out. So the way to think about the brain, I suggest, is like a neural parliament where you have all these different political parties, all of whom love their country, but they all have different ways of going about driving it and what they think should happen. And so this is part of why you can talk to yourself. Another issue here is just that what you have in the brain is mostly internal feedback loops. So yes, you’re practicing everything. You’re both speaking and listening. It’s, by the way, generating activity in the same parts of the brain as listening and speaking that you would normally do. It’s just internal before anything comes out.

LEVITT: Is there somebody who’s in charge?

EAGLEMAN: No, there’s nobody who’s in charge. And so what happens is it’s always a voting scheme, like a parliament. So for example, there are some times when you will make one decision about that chocolate cake, and other times when you will make the opposite decision. Just depending on how the votes are going, and how hungry you are, how emotional you are, how whatever, you’ll come to a different conclusion, a different vote in the parliament.

LEVITT: Language is such an effective form of communicating and of summarizing information that, at least my impression inside my head, is that a lot of this is being mediated through language. But I also have this impression that there are parts of my brain that are not very good with language. And maybe I’m crazy, but I have this working theory that the language parts of my brain have really co-opted power. The non-speaking parts of my brain, they actually feel to me like the good parts of me, the interesting parts of me, but I feel like they’re essentially held hostage by the language parts. Does that make any sense?

EAGLEMAN: Well, this might be a good reason for you to keep pursuing possible ways to tap into your brain data. There was a study done many years ago where some scientists wanted to know: what are people’s thoughts exactly? So what they did is they gave everyone a little beeper that would just go off at totally random times during the day. So you’d be doing whatever you’re doing and then beep. And your instructions were, “Write down your thoughts immediately. What were you just thinking about when the beep went off?” It turned out, much to their surprise, that most thoughts were not verbal. Instead of, “I was thinking this to myself,” it was more like, “I was reaching to pick up my phone and I was going to put my phone over here,” but there was nothing verbal about it. So language might not have had the total takeover that you think. And by the way, it turns out that the internal voice is on a big spectrum across the population, which is to say some people, like you, have a very loud internal radio. I happen to be at the other end of the spectrum where I have no internal radio at all. I never hear anything in my head. That’s called anendophasia. But everyone is somewhere along this spectrum. One of the points that I’ve always really concentrated on in neuroscience is: what are the actual differences between people? And traditionally that’s been looked at in terms of disease states. But the question is: from person to person who are in the normal part of the distribution, what are the differences between us? It turns out those are manifold. So take something like how clearly you visualize when you imagine something. So if I ask you to imagine a dog running across a flowery meadow towards a cat, you might have something like a movie in your head. Other people have no image at all. They understand it conceptually, but they don’t have any image in their head at all. And it turns out, when you carefully study this, the whole population is smeared across the spectrum. So, our internal lives from person to person can be quite different.

LEVITT: So when you talk about this spectrum, it makes me think of synesthesia. Could you explain what that is and how that works?

EAGLEMAN: So I’ve spent about 25 years now studying synesthesia, and that has to do with the fact that some percentage of the population has a mixture of the senses. So, for example, they might look at letters on a page, and that triggers a color experience for them. Or they hear music, and that causes them to see some visual. Or they put some taste in their mouth and it causes them to have a feeling on their fingertips. There are dozens and dozens of forms of synesthesia, but what they all have to do with is a cross-blending of things that are normally separate in the rest of the population.

LEVITT: And what share of the population has these patterns?

EAGLEMAN: So it’s about 3 percent of the population that has colored letters or colored weekdays or months or numbers.

LEVITT: Oh, it’s big. It’s interesting. I wouldn’t have thought it was so big.

EAGLEMAN: The crazy part is that if you have synesthesia, it probably has never struck you that 97 percent of the population does not see the world the way that you see it. Everyone’s got their own story going on inside, and it’s rare that we stop to consider the possibility that other people do not have the same reality that we do.

LEVITT: And what’s going on in the brain?

EAGLEMAN: In the case of synesthesia, it’s just a little bit of crosstalk between two areas that, in the rest of the population, tend to be separate but neighboring. So it’s like porous borders between two countries. So they just get a little bit of data leakage and that’s what causes them to have joint sensation of something.

LEVITT: People make a big deal out of it when they talk about musicians having this, and they imply that it’s helpful, that it makes them better musicians. Do you think there’s truth to that, or is it just that if 3 percent of the population has this, then there are going to be some great musicians among them?

EAGLEMAN: I suspect it’s the latter, which is to say everyone loves pointing out synesthetic musicians, but no one has done a study on how many deep sea divers have synesthesia or how many accountants have synesthesia. And so we don’t really know if it’s disproportionate among musicians.

LEVITT: So you’ve created this database of people who have the condition. And you find a pattern that is completely and totally bizarre, and that is that there’s a big bunch of people who associate the letter A with red, B with orange, C with yellow, it goes on and on. Then they start repeating at G. In general though, you don’t see any patterns at all. Like, people can connect these colors and letters in any way. Do you remember when you first found this pattern and what your thought was?

EAGLEMAN: So typically, as you said, it’s totally idiosyncratic. Each synesthete has his or her own colors for letters. So, my A might be yellow, your A is purple, and so on. And then what happened is, with two colleagues of mine at Stanford, we found in this database of tens of thousands of synesthetes that I’ve collected over the years that, starting in the late ’60s, there was some percentage of synesthetes who happened to share exactly the same colors. And these synesthetes were in different locations, but they all had the same thing. And then that percentage rose to about 15 percent in the mid ’70s.

LEVITT: So when you saw this, you must have been thinking, “My god, this is important,” right?

EAGLEMAN: Exactly, right. The question is: how could these people be sharing the same pattern? What we had always suspected is that maybe there was some imprinting that happens, which is to say, there’s a quilt in your grandmother’s house that has a red A and a yellow B and a purple C and so on. But, you know, everyone has different things that they grow up with, different posters that they grow up with as little kids. And so it was strange that this was going on. The punchline is that we realized that this is the colors of the Fisher-Price magnet set on the refrigerators that were popular during the ’70s and ’80s and then essentially died out. And so it turns out that when I look across all these tens of thousands of synesthetes, it’s just those people who were kids in the late ’60s and ’70s and ’80s that imprinted on the Fisher-Price magnet set. And that’s their synesthesia. And then, as its popularity died out, there aren’t any more who have that particular pattern.

LEVITT: It’s interesting because I am in that age group, and I don’t associate letters with colors, but I can still see those letters on the refrigerator.

EAGLEMAN: Likewise, but what that indicates is it’s not that the letters caused synesthesia; it’s that those people who had synesthesia imprinted on that exemplar.

LEVITT: Now, as a neuroscientist, I have to imagine that the way we teach in traditional classrooms, with a teacher or professor at a blackboard lecturing to a huge group of passive students — that must make you cringe, right?

EAGLEMAN: It does increasingly, yes.

LEVITT: How should we teach? 

EAGLEMAN: I think the next generation is going to be smarter than we are simply because of the broadness of the diet that they can consume. Whenever they’re curious about something, they jump on the internet, they get the answer straight away or from Alexa or from ChatGPT. They just get the answers. And that is massively useful for a few reasons. One is that, when you are curious about something, you have the right cocktail of neurotransmitters present to make that information stick. So if you get the answer to something in the context of your curiosity, then it’s going to stay with you. Whereas you and I grew up in an era where we had lots of just-in-case information.

LEVITT: What do you mean by that?

EAGLEMAN: Oh, you know, like just in case you ever need to know that the Battle of Hastings happened in 1066, here you go.

LEVITT: And you want to contrast that with just-in-time information. I need to know how to fix my car, and so the internet tells me, and then I can really remember it because I need it.

EAGLEMAN: That’s exactly it. And so look, you know, for all of us with kids — I know you’ve got kids, I’ve got kids — we feel like, “Oh, my kid’s on YouTube and wasting time.” There’s a lot of amazing resources and things that they learn on YouTube or even on TikTok — anywhere. There’s lots of garbage, of course, but it’s better than what we grew up with. I mean, when you and I wanted to know something, we would ask our mothers to drive us down to the library and we would thumb through the card catalog and hope there was something there that wasn’t too outdated.

LEVITT: You were more ambitious than me. I would just ask my mother. And I have since learned that every single thing my mother taught me was completely wrong. But I still believe them because of this part of the brain that locks in things that you learn long ago — I still have to fight every day against the falsehoods my mother taught me. I wish I had told her to take me to the library.

EAGLEMAN: My mother was a biology teacher and my father was a psychiatrist. And so they had all kinds of good information. I’m just super optimistic about the next generation of kids. Now, as far as how we teach, things got complicated with the advent of Google and now it’s twice as complicated with ChatGPT. Happily, we already learned these lessons 20 years ago. What we need to do is just change the way that we ask questions of students. We can no longer just assume that a fill-in-the-blank test or now even just writing a paper on something is the optimal way to have them learn something, but instead they need to do interactive projects, like run little experiments with each other. And, you know, the kind of thing that you and I both love to do in our careers, which is, “Okay, go out and find this data and run this experiment and see what happens here.” Those are the kinds of opportunities that kids will have now.

You’re listening to People I (Mostly) Admire with Steve Levitt and his conversation with David Eagleman. After this short break: why does David call himself a “Possibilian”?

*      *      *

David Eagleman is a professor, a C.E.O., leader of a nonprofit called the Center for Science and Law. Host of TV shows on PBS and Netflix. And the founder of Possibilianism.

EAGLEMAN: Like every curious person trying to figure out what we’re doing here, what’s going on, it just feels like there are two stories. Either there’s some religion story, or there’s the story of strict atheism — which I tend to agree with, but it tends to come with this thing of, “Look, we’ve got it all figured out. There’s nothing more to ask here.” There is a middle position, which people call agnosticism, but usually that means, “You know, I don’t know. I’m not committing to one thing or the other.” I got interested in defining this new thing that I call Possibilianism, which is to try to go out there and do what a scientist does, which is an active exploration of the possibility space. What the heck is going on here? We live in such a big and mysterious cosmos. Everything about our existence is sort of weird. And obviously the whole Judeo Christian tradition, that’s one little point in that possibility space, or the possibility that there’s absolutely nothing and we’re just atoms and we die and that’s certainly a spot over here in the possibility space. But there’s lots of other possibilities and so I’m not willing to commit to one team or the other without having sufficient evidence. So that’s why I call myself a Possibilian.

LEVITT: And so in support of Possibilianism — maybe a better name could be in order — you wrote a book called Sum: Forty Tales from the Afterlives. How do you describe it to people?

EAGLEMAN: I call it literary fiction. It’s 40 short stories that are all mutually exclusive. And they’re all pretty funny, I would like to think. But they’re also kind of gut-wrenching. And what I’m doing is shining the flashlight around the possibility space. None of them are meant to be taken seriously, but what the exercise of having 40 completely different stories gives us is a sense of, “Wow, actually, there’s a lot that we don’t know here.” So obviously in some of the stories, God is a female. In some stories, God is a married couple. In some stories, God is a species of dimwitted creatures. In one story, God is actually the size of a bacterium and doesn’t know that we exist. And in lots of stories, there’s no God at all. That book is something I wrote over the course of seven years, and it became an international bestseller. It got translated into lots of different languages. And it actually got turned into two operas. Brian Eno made an opera at the Sydney Opera House in Australia, and Max Richter made it into an opera at the Royal Opera House in London. It’s really had a life to it that I wouldn’t have ever guessed.

LEVITT: And can I tell you candidly — when I heard about the book, I saw the subtitle and thought, “I have zero interest in reading a book about the afterlife.” I totally misunderstood what the book was about. And then I certainly didn’t understand that “Sum” was Latin. 

EAGLEMAN: I actually chose Sum because, among other things, that’s the title story. In the afterlife, you relive your life, but all the moments that share a quality are grouped together. So you spend three months waiting in line and you spend 900 hours sitting on the toilet and you spend 30 years sleeping.

LEVITT: All in a row.

EAGLEMAN: Exactly. And this amount of time looking for lost items, and this amount of time realizing you’ve forgotten someone’s name, and this amount of time falling, and so on. Part of why I use the title Sum is because of the sum of events in your life like that. Part of it was because cogito ergo sum. So it ended up just being the perfect title for me, even if it did lose a couple of readers there.

LEVITT: If we were to come back in a hundred years, what do you think will be most different? I know it’s a hard prediction to make, but what do you see as transforming most in the areas you work in?

EAGLEMAN: So, as I said earlier, I think a big part of it is just what are these billions of neurons doing in real time when a brain is moving through the world? That’s part of it. You know, the big textbook that we have in our field is called Principles of Neural Science, and it’s about, I don’t know, it’s about 900 pages. It’s not actually principles; it’s just a data dump of all this crazy stuff we know. And in a hundred years I expect it’ll be like 90 pages. We’ll have things where we put big theoretical frameworks together. We say, “Ah, okay, look, all this other stuff, these are just expressions of this basic principle that we have now figured out.” We’re in a very interesting moment in time because we are still lacking the big picture of the principles going on.

LEVITT: Do you pay much attention to behavioral economics?

EAGLEMAN: Yes, I do.

LEVITT: What do you think of it?

EAGLEMAN: Oh, it’s great. And that’s probably the direction that a lot of fields will go: how do humans actually behave? One of the big things that I find most interesting about behavioral economics comes back to this issue about the team of rivals. So when people measure in the brain how we actually make decisions about whatever, what you find is there are totally separable networks going on. So some networks care about the valuation of something, the price point. You have totally other networks that care about the anticipated emotional experience about something. You have other networks that care about the social context. Like, what do my friends think about this? You have mechanisms that care about short-term gratification. You have other mechanisms that are thinking about the long term — what kind of person do I want to be? All these things are battling it out under the hood. It’s like the Three Stooges, sticking each other in the eye and wrestling each other’s arms and stuff. But what’s fascinating is when you’re standing in the grocery store aisle, trying to decide, you know, which flavor of ice cream you’re going to buy, you don’t know about these raging battles happening under the hood. You just stand there for a while and then you say, “Okay, I’ll grab this one over here.” So, I think there continues to be a very rich intersection between economics and neuroscience.

LEVITT: There was a point in time among economists that there was a lot of optimism that we could really nail macroeconomics, so inflation and interest rates and whatnot, and we could really understand how the system worked. And I think there’s been a real step back from that. And the view now is, look, it’s an enormously complex system. And we’ve really, I guess, given up in the short run. Are you at all worried that’s where we’re going with the brain?

EAGLEMAN: Oh, gosh, no. And the reason is because we’ve got all these billions of brains running around. What that tells us is it has to be pretty simple in principle. You got 19,000 genes. That’s all you’ve got. Something about it has to be as simple as falling off a log for it to work out very well, so often, billions of times.

LEVITT: People are super excited right now about these generative A.I. models, the large language models. What’s your take on it?

EAGLEMAN: I think they’re amazing and it’s such a pleasure to see where things are going. Essentially, these artificial neural networks took off from a very simplified version of the brain, which is, “Hey, look, you’ve got units and they’re connected. And what if we can change the strength between these connections?” That is the birth of artificial neural networks. And in a very short time, that has now become this thing that has read everything ever written on the planet and can give extraordinary answers. So it’s absolutely lovely what’s happened. But it’s not yet the brain or anything like it for a couple of reasons. It’s just taking the very first idea about the brain and running with it. What a large language model does not have is an internal model of the world. It’s just acting as a statistical parrot. It’s saying, “Okay, given these words, what is the next word most likely to be given everything that I’ve ever read on the planet.” And so it’s really good at that, but it has no idea. If you say, “Hey, look, when President Biden walks into a room, do his eyebrows come with him?” It can’t answer. It has no idea because it has no model of the world, no physical model. And so things that a 6-year-old can answer, it is stuck on. I’m actually writing an article right now on what I call the “intelligence echo illusion,” which is simply that a lot of people will ask a question to ChatGPT and then be amazed by its answer, but in fact it’s simply echoing something that someone has written before on that question that you asked. I did a calculation on this — turns out that if you read every day of your life, you can only read 1/1,000 of what ChatGPT has taken in. So there’s lots of questions that you might ask that you don’t even realize someone has asked and answered before, but of course it knows that, and so it tells you the answer. So people are looking at this echo of things that other humans have written and thinking, “Wow, this thing is amazing and smart.” It is not actually intelligent or sentient in the way that we are. It has no theory of mind the way that we do. It has no physical model of the world the way that we do. Now, this is not a criticism of it in the sense that it can do all kinds of amazing stuff and it’s going to change the world. But it’s not the brain yet and there’s still plenty of work to be done to get something that actually acts like the brain.
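
That “1/1,000” comparison is easy to sanity-check with a back-of-the-envelope estimate. In the sketch below, every number (reading speed, hours per day, years, corpus size) is an illustrative assumption, not a figure Eagleman gave; the point is only that the order of magnitude lands in the same ballpark.

```python
# Back-of-the-envelope: how much can one person read in a lifetime versus the
# size of an LLM training corpus? All numbers below are illustrative
# assumptions, not figures from the conversation.

WORDS_PER_MINUTE = 250          # assumed adult reading speed
HOURS_PER_DAY = 2               # assumed daily reading time
YEARS_OF_READING = 80           # assumed reading lifespan
TRAINING_CORPUS_WORDS = 1e12    # assumed corpus of roughly a trillion words

lifetime_words = WORDS_PER_MINUTE * 60 * HOURS_PER_DAY * 365 * YEARS_OF_READING
fraction = lifetime_words / TRAINING_CORPUS_WORDS

print(f"Lifetime reading: about {lifetime_words:.2e} words")
print(f"Fraction of the assumed corpus: about 1/{round(1 / fraction)}")
```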

LEVITT: Do you think that it is a solvable problem, to give these models a theory of mind, a model of the world?

EAGLEMAN: I suspect so, because there are 8.2 billion of us who have this functioning in our brains, and as far as we can tell, we’re just made of physical stuff. So we’re just very sophisticated algorithms, and it’s just a matter of cracking what that algorithm is.

LEVITT: And when you talk about this theory of the mind, do you vaguely have the sense of the part of the brain that’s pre-programmed that we come with? Or you think that the theory of the mind is something that’s developed early on in life?

EAGLEMAN: Asking what part of the brain is involved in something is like if you said, “Here’s a map of New York City. Point to exactly where the economy is.” Well, the economy is something that comes out of the whole functioning of everything going on, that there’s no one spot for it. Essentially, every question in the brain is like that too. You can’t say, “Hey, point to the area for generosity or conspiracy theories.” It has to do with the whole network and every part of it. And that’s the same thing for how we build models of the world.

LEVITT: So, the existing large language models start with a blank slate and then they’re fed enormous amounts of information and they make their predictions off that. But what I was understanding you saying was, “Well, humans aren’t a blank slate. They have a model of the world, of the background through which we are able to make better predictions about the world because we know how to integrate the incoming flow of zeros and ones with this model.” So I guess I’m just wondering, if you were a researcher in this area, is the idea you would not start with a blank slate? Or do you think that has to emerge organically?

EAGLEMAN: It’s actually a slightly different issue, which is the architecture of the model. I don’t know that we start with a model of the world as such. It’s that we’re looking at all these things that impinge on our senses. And then we are learning, “Hey, the things that I do change what impinges on my senses. When I turn my head, that changes the visuals. When I say this to somebody, that changes how they react to me.” So our architecture is such that we’re learning the stimulus and response with the world in a sense. The architecture of a large language model is simply statistics of words. It’s just saying, “Hey, given these words, what’s the next word?”
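
As a rough illustration of “given these words, what’s the next word?”, here is a toy counting model. A real large language model learns these statistics with a neural network over an enormous corpus, so this is only a cartoon of the idea, built on a made-up sample text.

```python
# A cartoon of "given these words, what's the next word?": count, in a tiny
# made-up corpus, which word most often follows each two-word context. A real
# large language model learns these statistics with a neural network over an
# enormous corpus; this is only an illustration of the underlying idea.

from collections import Counter, defaultdict

corpus = (
    "the brain is a storyteller the brain is a prediction machine "
    "the brain builds a model of the world"
).split()

# For each two-word context, count how often each next word follows it.
next_word_counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    next_word_counts[(a, b)][c] += 1

def predict_next(context):
    """Return the most frequent continuation of a two-word context, if any."""
    counts = next_word_counts.get(context)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next(("the", "brain")))  # -> "is" (seen twice, vs. "builds" once)
```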

LEVITT: So what we need to do is embed these LLMs into robots that can move around, and they have a utility function. They’re trying to accomplish things. And in that world, the models would get a lot more feedback because they’d actually care about what they’re producing in ways that would lead them to hopefully get better.

EAGLEMAN: Being embodied is certainly part of the issue, but it actually needs a different architecture than just an LLM. If you stuck something that just looks at “What is the next word?” in a body, it still is just looking for the next word. It doesn’t have the architecture to say, “Hey, how do my actions reflect on what comes back to me?”

LEVITT: It needs a different utility function. It needs to get rewards when it gets it right.

EAGLEMAN: It’s actually more than the utility function. It’s the architecture of saying, “My job is to build a model internally of what I think is going on externally.” It’s a whole different architecture than we’re used to thinking about with artificial neural networks. It’s not just a utility function. It’s saying, “Hey, I’m trying to build little dioramas — those things that we built as kids — of the whole world out there.” That’s really its job, such that it can move the pieces around and make good predictions, such that our hypotheses can die in our stead. You know, you test and evaluate everything internally, like you were saying with your internal voice, how you try out conversations and stuff like that. That’s what the brain is constantly doing.

They say as you get older, it’s important to keep challenging your brain by learning new things, like a foreign language. I can’t say I found learning German to be all that much fun, and I definitely have not turned out to be very good at it. So I’ve been looking for a new brain challenge. And I have to say, I find echolocation very intriguing. How cool would it be to be able to see via sound? I suspect, though, that my aptitude for echolocation will be on par with my aptitude for German. So if you see me covered in bruises, you’ll know why. If you want to learn more about David Eagleman’s ideas, I really enjoyed a couple of his many books: Livewired, which talks about his brain research, and Sum: Forty Tales from the Afterlives, his book of speculative fiction.

LEVITT: So this is the point in the show where we take a listener question, and I welcome my producer Morgan. 

LEVEY: Hi Steve. A few weeks ago you had David Autor, an MIT labor economist, on the show, and we solicited questions for David from listeners who had heard the episode. A listener wrote in with a question for both of you, we brought it to David, and you’re going to paraphrase his response and add your own. So the question is: “What do you think the chances are that the workers will control the means of AI production?” This listener says, “Sorry to bring Marx into this, but I worry that even if AI succeeds, the benefits will flow to the big corporations that own the data centers where the AI processing is done. Both of you seem to imagine futures where an individual increases their utility with AI, but you seem to ignore the likelihood that the individual will not be the one making the profit from that increased utility.” What’s your response, Steve?

LEVITT: First of all, I think that is a fantastic question. I actually agree with the thrust of that question. But David doesn’t, okay? And he has some good reasons for thinking that it’s not necessarily true that big corporations will control all of the flow of the profits from AI. The first argument he makes, which is a really sensible empirical one, is that in the economy today, what’s surprising, and these are David’s words now, is that the richest countries in the world, the ones with the most technology, the most capital, the most infrastructure, the places where you’d expect labor to be potentially a small part of economic activity, are the places where labor’s share of national income is the highest. In North America, 61 percent of national income is paid to workers. In Sub-Saharan Africa, it’s 51 percent. David’s point there is that new technologies don’t necessarily lead to a shift of profits away from workers and toward the owners of capital, the corporations. Okay, and that is sensible. Let me just now play the role of the question asker and say, yeah, that’s fine, but let’s take a very specific example. Let’s take driverless vehicles. The entire point of the driverless vehicle is that you don’t have to pay the labor to the driver of the car. I cannot imagine a world in which driverless vehicles don’t lead to the labor share going down. And the set of people who are driving vehicles right now is absolutely huge. It’s one of the biggest professions in the entire country if you put truck drivers, taxi drivers, all of them together. So anyway, David didn’t get to answer that question because it wasn’t asked specifically, but I think that’s a real caveat on what he’s saying.

LEVEY: Okay, so let’s go back to David’s response. The listener brought up Marx in their question. And, in reference to Marx, David says: “Marx was pessimistic about the future of industrializing the economy. But I think he’d be surprised by how relatively well things have turned out in the century that followed his passing. Ironically, they’ve turned out much worse in countries that have attempted to implement Marxism.”

LEVITT: Touché. I mean, it’s very true. And it’s one of economists’ favorite things to say: for all that people disparage capitalism, the residents of countries that have not followed a capitalist model have not fared well, really, on any dimension that you’d want to be proud of. One thing we haven’t mentioned, Morgan, is what I thought was the most interesting and important part of David’s response. So let me read that word for word. David wrote, “I’m not betting that we’ll pull this off again.” And by that he means bringing in a technology and doing it in a way that helps workers. He says, “But I’m not betting against it either. And the first step in handling the risk and the opportunity is to recognize that the future is not a prediction problem. It’s a design problem.” David’s point is, look, it’s not as if we are passively sitting here, waiting to see whether AI destroys humanity. We’re actors in this game, in this activity, and we have the ability as citizens, as thinkers, as designers, to try to make AI serve people. And that’s an important point, because I have to say, I myself tend to think passively. I tend to think that technologies are too powerful and that the forces at work in the economy are beyond anyone’s control. But I think David’s got a point, and it’s a kind of optimism that he brings to the problem that I think could be helpful.

LEVEY: If you have a question for us, our email is PIMA@Freakonomics.com. If you have a question for David Eagleman, the neuroscientist from this episode, we can try to bring that question to him. We can be reached at PIMA@Freakonomics.com. That’s P-I-M-A@Freakonomics.com. We read every email that’s sent. We look forward to reading yours.

Next week, be on the lookout for a replay of one of my all-time favorite episodes. It’s an interview I did with Pete Docter back in 2021. He’s the chief creative officer at Pixar. And in two weeks, we’ve got a brand-new episode featuring Neil deGrasse Tyson. He’s a Ph.D. astrophysicist who is arguably the greatest science popularizer of our generation.

TYSON: Mercury, Venus, Mars, Jupiter, Saturn, the Sun, and the Moon. So Sunday’s named for what? 

LEVITT: I think probably the sun.

TYSON: Yes, ah! Very good. Okay, Saturday? 

LEVITT: Saturday? The moon?

TYSON: No, Saturn! Saturn! These are not hard questions. 

As always, thanks for listening and we’ll see you back soon.

*      *      *

People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. This episode was produced by Morgan Levey with help from Lyric Bowditch, and mixed by Jasmin Klinger. We had research assistance from Daniel Moritz-Rabson. Our theme music was composed by Luis Guerra. We can be reached at pima@freakonomics.com, that’s P-I-M-A@freakonomics.com. Thanks for listening.

LEVITT: David, you got your QuickTime going?

EAGLEMAN: Um, I do… now!

Sources

  • David Eagleman, professor of cognitive neuroscience at Stanford University and C.E.O. of Neosensory.
