Episode Transcript

DUBNER: We humans seem to like each other — except when we’re hating each other.

*      *      *

DUCKWORTH: I’m Angela Duckworth.

DUBNER: I’m Stephen Dubner.

DUCKWORTH + DUBNER: And you’re listening to No Stupid Questions.

Today on the show: Should we replace human umpires with robots?

DUBNER: If the purpose here is to get it right, then why on earth would we even want to have the humans around for that?

*      *      * 

DUCKWORTH: Stephen, I have a question for you that comes from watching my husband. Are you ready for this one?

DUBNER: I’m not sure, now that you put it that way.

DUCKWORTH: First of all, he’s fully clothed in these observations, and he’s seated in our living room and screaming. So, now he’s actually probably standing up, yelling at the television during the World Series, which, you know, is now in the rearview mirror, unfortunately.

DUBNER: Is this because he’s a Philadelphia sports fan? Because they are the worst, the yelling-est, the loudest. But he doesn’t strike me as that.

DUCKWORTH: I mean, he is a sports fan, and he is in Philly, and he is rooting for Philadelphia teams. We do have a reputation.

DUBNER: Is that why he’s yelling though? The Phillies did lose the World Series this year.

DUCKWORTH: He yells in other games too — basketball games, for example. And what he’s yelling at is the umpire in the baseball game or the ref in the basketball game. And what he asked me the other day was like, “Why do we have these fallible human beings in charge of such consequential decisions when we live in the era of artificial intelligence?” And, I mean, why do we have human umpires at all? I guess that is the question.

DUBNER: So, that is an interesting topic for sure. There are a lot of directions we could go here. But, you know, let’s start with the baseball umpire question. And I think before we even answer why we still have human umpires, we should probably just explain what human umpires do. So, in baseball, there are four umpires in a regular game. Although for the playoffs, they use six. They put two more in the outfield as if to say, “Well, four isn’t really — like, when it’s important, we need more,” which is kind of a whole other crazy thing. But typically, there are four: one behind home plate, then a first-base, second-base, and third-base umpire. But really, when people talk about “the ump,” they’re talking about the home-plate umpire.

DUCKWORTH: Yeah, that’s the only one I see. I didn’t even know there were three other ones.

DUBNER: They’re kind of blending in out there, but the home-plate umpire is very visible, because he is right there in the action. He’s crouching behind the catcher. So, for someone who doesn’t know baseball, here’s the way it works.

DUCKWORTH: Which would be me. So, this is good.

DUBNER: So, there’s a pitcher. You know what the pitcher does.

DUCKWORTH: He’s pitching the ball.  

DUBNER: He stands on a mound. He’s 60 feet, 6 inches away, which sounds like a lot, but when they’re throwing 95 or 100 miles an hour, it’s really not a lot. Then, there’s the batter. And then, there’s the catcher behind the batter. Then, right behind the catcher is the umpire. Then, there’s this rectangle that is supposed to represent the strike zone. It’s an imaginary rectangle, and it extends from — I believe it’s supposed to be from the armpits of the batter down to the knees of the batter. That’s the vertical. And then, the horizontal is supposed to cover the width of home plate, which is this five-sided slab of rubber that’s in the ground. So, you can imagine that the umpire is crouching behind the catcher, imagining this rectangle, armpit to knee, left side of the plate, right side of the plate. And if a pitch crosses the plate within that frame, it’s supposed to be a strike. And if it crosses the plate outside of that, it’s supposed to be a ball. Now, even crossing the plate is tricky because the ball is moving. It’s usually dropping. And it can be going to the left or the right. And so, to be really precise, as you can imagine, can be really hard. I should also say, if a batter swings at the pitch and misses, then it’s a strike.

DUCKWORTH: Regardless of whether it’s in the strike zone.

DUBNER: Exactly right. And if they foul off a pitch, that counts as a strike, and so on.

DUCKWORTH: I do know you have three strikes before you’re out and four balls before you get walked.

DUBNER: Excellent. So, you can imagine that the batter cares a lot about whether a pitch that he doesn’t swing at is called a ball or a strike, and the pitcher also cares. The catcher cares. The team cares. The fans care. And you are right. There’s a ton of research showing umpires are quite fallible. So, we did a piece on Freakonomics Radio a few years ago about what’s called the “gambler’s fallacy.” Are you familiar with that phenomenon?

DUCKWORTH: You know, I have heard that defined in different ways. So, what is the Stephen Dubner definition?

DUBNER: Okay. So, let me take my shot at defining it. This is based on really nice research done by Toby Moskowitz, who’s an economist now at Yale. He co-authored a paper with Daniel Chen and Kelly Shue. It was called “Decision-Making Under the Gambler’s Fallacy: Evidence From Asylum Judges, Loan Officers, and Baseball Umpires.” So, if you think about those three categories of people — judges, loan officers, and umpires — they all have the authority to basically say “yes” or “no.”  

DUCKWORTH: And they’re, like, all ultimate authorities. You don’t, I think, easily appeal. I guess you could try, but generally their word is the final decision.

DUBNER: Yeah. And so, the gambler’s fallacy has to do with the way that we mistake how probability really works. Many of us, even really smart people, we find patterns that don’t exist, or we look for patterns where they shouldn’t exist. Let’s say you’re at the roulette table, and you’re playing red, and there are three spins in a row that come up black. The fourth spin, is it any more likely to come up red than the previous ones? No. It’s a totally independent event. But we like to tell ourselves, “Well, there were three blacks, so the next one is more likely to be red.” And so, we are constantly miscalculating probabilities in that way. And the way that the gambler’s fallacy would apply in the case of, like, a baseball umpire, or an asylum judge, or a loan officer, would be that our minds seem to want to toggle a little bit. We don’t want to have unnatural patterns. And so, what they found in their research is that a judge who had granted asylum to, let’s say, two asylum seekers in a row would be more likely to reject the next one, even though the evidence might have been in favor. And the same for loan officers, and the same for baseball umpires. In other words, if there are two strikes called in a row and the third pitch is pretty close, and maybe even in the strike zone, there’s something in the human mind that makes us a little bit reluctant to create these patterns that don’t make sense to us. And so, there are two problems then with the human umpire. Number one is they are susceptible to the gambler’s fallacy, but they’re also just not that good. And when I say, “not that good,” they’re way better than you or I would be, but they’re much worse than a computer would be. So, what the researchers did is they looked at thousands and thousands — maybe hundreds of thousands — of pitches over time. They looked at all these pitches where the batter didn’t swing.
And they found that on the obvious balls and the obvious strikes, the umpires were basically 100 percent correct. If a pitch is right down the middle, and the batter doesn’t swing, and they call it a strike, they’re almost always right on those.
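
The independence claim above is easy to check numerically. Here is a minimal editorial sketch (hypothetical, using a simplified roulette wheel with no green zero) that compares the overall frequency of red with the frequency of red immediately after three blacks:

```python
import random

random.seed(42)

# Simplified roulette: each spin is independently red or black with
# probability 1/2 (the real wheel's green pockets are ignored here).
spins = [random.choice(["red", "black"]) for _ in range(200_000)]

# Overall frequency of red.
overall_red = spins.count("red") / len(spins)

# Frequency of red immediately after a run of three blacks.
after_three_blacks = [
    spins[i]
    for i in range(3, len(spins))
    if spins[i - 3] == spins[i - 2] == spins[i - 1] == "black"
]
conditional_red = after_three_blacks.count("red") / len(after_three_blacks)

# Both frequencies come out close to 0.5: the previous spins carry
# no information about the next one.
print(round(overall_red, 3), round(conditional_red, 3))
```

The gambler's fallacy is precisely the intuition that the second number should be higher than the first; the simulation shows it is not.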

DUCKWORTH: But you don’t even need an umpire, hardly.

DUBNER: Exactly. But, here we go, this is Toby Moskowitz. He’s saying that on pitches that are just outside the strike zone — they’re definitely balls, but they’re close — on those pitches, he says, umpires only get those right about what percent would you say, Angie, what would you guess?

DUCKWORTH: Oh gosh. And this is comparing what the umpire says in real time with, like, careful review afterwards. Is that right? 

DUBNER: Exactly, and these are pitches that are just outside the strike zone.  

DUCKWORTH: Um, I don’t know. I’m going to give them, like, 90 to 95 percent, because they’re experts. It’s all they do. 

DUBNER: That’s a very, very, very nice and generous assessment. But the actual number is 64 percent.

DUCKWORTH: Wow. They get a D. I was going to give them an A.

DUBNER: So, their error rate is 36 percent. But I will say that in baseball, there are enough people who are as frustrated as Jason that there’s been a lot of movement toward automating it. And in fact, there are these kind of “robo umps.”

DUCKWORTH: Are there really?

DUBNER: There are. I haven’t seen any of these in real life yet, but I will tell you: they have worked their way up through the minor leagues in baseball, and it’s estimated that they may come to Major League Baseball as early as 2024.

DUCKWORTH: It’s not like an actual, you know, robot like Rosey from The Jetsons?

DUBNER: That’s a good question. From what I know, it could go either way or a variety of ways. In other words, you could actually have what looks like a robot out there making the call, but it would basically be, you know, a series of cameras, or radar, whatever it is. But one way that I’ve read that it may be done, which would make it, perhaps, more acceptable to the very tradition-bound game of baseball, would be that there still would be a human umpire who would have the assistance, in real time, of the cameras, and the detectors, and basically have an earpiece in, so that he would get, immediately, confirmation of what the right call is. Now, you might say, “Well, that seems really stupid. Why do you want to have the human still out there if the human is using the computer information?” But this gets us into the notion of how comfortable humans are with computers, or artificial intelligence, or machine learning, in all these different aspects of our lives that we’re used to doing ourselves. This could go from medical diagnosis, to cars and autonomous travel, to even having a robo-ump. So, I think then we get into your territory of psychology, which is thinking about how people feel about relying on technology for things that they’re used to doing for themselves. So, what can you tell us about that?

DUCKWORTH: Well, I started thinking about this in a serious way when a graduate student of mine named Benjamin Lira got very interested in artificial-intelligence approaches to analyzing data. And we were working on this data set. The data set was huge. And it was college admissions data. And it just dawned on both of us quite early that using a sophisticated A.I. algorithm, or even a really primitive algorithm, would probably be better in many cases than relying on one idiosyncratic, sleep-deprived human being. This is an idea that goes back a long way. I mean, Danny Kahneman’s first job, when he was in the Israeli army — this is decades and decades ago — his task was, you know, “Help us figure out who to promote in the Israeli army.” He immediately recognized that the problem was that human beings were in charge of these promotions, and they were doing what human beings do, which is kind of pulling out of the air the things that they cared about on that particular day — never writing down why they made that decision, never justifying it, and then having no systematic approach across the whole army.

DUBNER: And maybe telling yourself stories ex-post about why that was the right decision when they were really just stories.

DUCKWORTH: He could see immediately, even if he didn’t have the phrase “confirmation bias,” that once you had made a decision that, “Yep, that soldier deserves to be promoted,” that you would then be searching for evidence to confirm that your original intuition was true. And his answer to this — his antidote, if you will — didn’t require neural networks, deep learning, computers, or anything else. He just said, “Hey, are there criteria that we can write down on a piece of paper? Maybe, say, half-a-dozen things that we think somebody who deserves a promotion ought to exemplify?” And once you write that down, you know: “conscientious,” “takes feedback well,” “empathic,” you know, “strategic,” whatever it is, you then spend just a few more moments saying, like, “Well, what does that look like? What would that look like to have? What would that look like to lack?” And just that little exercise of having a systematic approach of what we’re looking for, what it exactly looks like, is really what we would call today an “algorithm,” right? It’s a formula — as opposed to leaving it to, you know, the thoughts that are racing through your mind at that particular moment. And it turns out that that was not an easy sell to the Israeli army. It was like, “Wait, what? You want to have a systematic, rigid approach to promotion? Well, what about human judgment? What about all the intangibles? What about all the things that can’t be articulated?” But, you know, Danny Kahneman’s pretty persuasive, and he was able to convince the Israeli army to adopt what you could say is, like, the most primitive of algorithms. So this idea that we have to convince people to use algorithms is an idea that’s been around for a while. The “algorithm aversion” that people speak of today is a kind of fundamental distrust and dislike of robots or computers taking the place of human beings when making important decisions.
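
The "primitive algorithm" Angela describes can be sketched in a few lines of code: write the criteria down, score each candidate on the same scale, combine by a fixed rule. (The criteria names and ratings below are hypothetical illustrations, not the army's actual rubric.)

```python
# Fixed, written-down criteria: every candidate is judged on the same
# dimensions, so scores are comparable and the reasoning is recorded.
CRITERIA = ["conscientious", "takes feedback well", "empathic", "strategic"]

def promotion_score(ratings):
    """Average the 1-to-5 ratings over the fixed criteria."""
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# A hypothetical candidate's ratings on each criterion.
candidate = {
    "conscientious": 4,
    "takes feedback well": 5,
    "empathic": 3,
    "strategic": 4,
}

print(promotion_score(candidate))  # 4.0
```

The point is not the arithmetic but the discipline: the same questions, in the same order, for every candidate, instead of whatever happens to come to mind that day.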

DUBNER: I think there are a lot of dimensions that people may dislike about that. I mean, part of it is we humans seem to like each other except when we’re hating each other.

DUCKWORTH: Yeah. I was going to say, “I don’t know, we also seem capable of hate.” But we have a fondness for each other that I think we don’t have for, you know, our laptop. There is a kind of emotion that we reserve for living things and especially other people.  

DUBNER: I think it’s already well established that computers are much better at reading mammograms than the humans who interpret them. Just think about the sheer volume. You can program a computer with millions of mammograms that show a positive result and millions that show a negative result, and no human could do anything close to that.

DUCKWORTH: Right. A radiologist is only going to see how many M.R.I.s in their career? But if you’re Google, you can see all of them.

DUBNER: It’s also really hard to update the priors and to keep the learning current for a human radiologist. As much as we have, you know, adult licensing, and relicensing, and so on, computers just learn that kind of thing.

DUCKWORTH: Continuing medical education credits.

DUBNER: Yeah. So, Angela, I would love to hear from our listeners about an area in their lives where they are eager to accept more automation.

DUCKWORTH: Or an area where they hate the idea!

DUBNER: Sure. Make a voice memo. Don’t make it too long. Tell us your name, where you live, stuff like that. Do it in a nice, quiet place. A lot of listeners get so inspired by this call-out for voice memos that they record while, for instance, running, or while using what sounds like a hacksaw to cut up an old water boiler. There is a lot of noise going on. So, honestly, a little bit of quiet goes a long way in the voice-memo department, and I would love to hear what you all have to say.

Still to come on No Stupid Questions: Stephen and Angela discuss how much control they’re actually willing to hand over to machines.

DUBNER: I, Stephen, am in fact AI-1743 XT Squared Delta Stephen, which is the avatar that Stephen Dubner has programmed to have this conversation with you.

*      *      *

Now, back to Stephen and Angela’s conversation about whether robots make better umpires than human beings.

DUBNER: Going back to baseball for a moment, Major League Baseball has recently been using video replays to look at close calls. And, you know, there are video replays now in professional soccer, in the N.F.L., in basketball. And in tennis, we should say, there is still a chair umpire, but, as far as I know, most out calls are made by this computerized system called the Hawk-Eye System.

DUCKWORTH: Wait, what’s the person sitting in, you know, like, that little lifeguard stand? What are they doing then?

DUBNER: They are actually a lifeguard. Just in case the tennis court gets flooded, they want to make sure everybody’s going to get out okay. They are the chair umpire, and, you know, there’s some umpiring to do, and there are some other calls to be made, but tennis has certainly embraced the technology in that way. So baseball, fairly recently, began using video replays, and they found that in the cases where the calls are challenged or looked at again, they’re overturned nearly half the time.

DUCKWORTH: That goes back to that statistic, that sort of shocking D-grade.

DUBNER: Yeah. And these are for other calls, though. These are not just balls and strikes. These are for: you’re safe or you’re out, you know, within play or out of bounds, and so on. And so, to me, what’s shocking about that is you might naturally want to say, “Well, wait a minute. If the purpose here is to get it right, then why on earth would we even want to have the humans around for that?” On the other hand, I think there’s a lot here that helps us understand this algorithm aversion. Joe Torre, a long-time baseball player and manager who, I think, now works for the league, is against the robo-umps. His line is, “It’s an imperfect game and has always felt perfect to me.”

DUCKWORTH: That’s so beautiful.  

DUBNER: So, to me, when I think this through, I get both sides very, very much. I think you could easily remove the home-plate umpire from baseball — maybe the other umpires too. If the goal is more accuracy in pitch-calling, then robots would definitely do it better. But let’s not forget it’s a game, right? This is not reading mammograms. Games are built on this wonderful shared history of ritual, and rules, and idiosyncrasies.

DUCKWORTH: And imperfections, right?

DUBNER: Yeah. There’s value in components of those rituals, and so even if they don’t produce the most scientifically accurate calls, the question is how many of those rituals you are willing to lose. And there’s also the fact that umpires are all different from each other. They have different personalities, they have different strike zones — which is a matter of huge interest and sometimes controversy in baseball.

DUCKWORTH: You mean, like, some are hard asses and say it’s got to be, you know, within this narrow rectangle and others are more generous?

DUBNER: There’s that, but then there are other nuances. For instance, superstars — both pitchers and batters — get a lot of advantages from the umpires.

DUCKWORTH: Like, easier calls because they’re starstruck?

DUBNER: I wouldn’t say starstruck. I would say this is just a cognitive bias. Let me retrace this a little bit. In all sports, the home team has what’s called a “home-field advantage,” and you could imagine many, many, many, many reasons that contribute to this, right? You’re more familiar with that playing field or course, sleep in your own bed, eat your own food.

DUCKWORTH: I think there’s research showing that if you play in your own time zone, you’re better off.

DUBNER: But you can actually measure each of those factors, which, by the way, Toby Moskowitz, the same Yale economist who looked at the ball/strike counts, did in order to study home-field advantage around the world: different sports, different circumstances. He found that there was one factor that was the major explanatory factor of home-field advantage. What do you think it is?

DUCKWORTH: Well, I’m going to guess the umpire. 

DUBNER: Very good guess. So, it turns out that the single-biggest factor that can explain home-field advantage in just about any sport is the referee.

DUCKWORTH: Does that mean the referee is, like, literally biased because they live in Philadelphia and they want to call the play in favor of the Phillies, for example?

DUBNER: Sort of, but not quite. So, first of all, it wouldn’t be the case that the umpire at a Phillies game actually lives in Philly, because they’re professionals who travel around and take their jobs very seriously. There’s a lot of analysis, there’s a lot of critiquing, and feedback, and so on, but they also have a human mind. And the human mind, apparently, is susceptible to the vibe in the place. So, the fans and the feeling of wanting to make the home crowd happy seem to subconsciously influence the referee. And there’s really amazingly creative research that was done to conclude that. One was comparing soccer matches in stadiums that had the fans right on the field — on the pitch, where the referee can hear them — versus these other big multi-use stadiums where there’s usually a running track between the field and the fans. And they use that as an instrumental variable to measure how influential the crowd would be on the referees, for instance.

DUCKWORTH: I just want to say that, in psychology, evidence suggests that physical distance translates into psychological distance. So, this idea that, you know, if I’m sitting right next to you, it’s different from if I’m sitting three feet away from you, which is different from if I’m sitting six feet away from you. So, what you were saying there about this clever study and just the fact that there’s, like, a little running track in between the pitch of the soccer field —.

DUBNER: You’re buying it, in other words.


DUBNER: So, let me go back to ask you more about what you called “algorithm aversion.” I do see this paper here from Management Science by three authors — Berkeley Dietvorst, Joseph Simmons, and Cade Massey, who I believe is your colleague at Wharton.

DUCKWORTH: Yeah. Two of them are — Simmons and Massey.  

DUBNER: And this is called “Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them.” You familiar with that paper?

DUCKWORTH: Yes, a bit. The beginning of this paper says — very briefly, because it’s so well accepted now that there is such a thing as algorithm aversion — that in many, many instances, decision makers don’t use algorithms, opting instead to rely on human judgment, even when it’s pretty clear that the algorithms are better. So, the question then, practically speaking, is, like, what could we do to get people to use these algorithms, which really are pretty handy? And the very clever series of studies establishes the following conclusion: people will use algorithms when they have a little bit of control over what the decision ultimately is, when it’s not just like, “Well, you said you were going to use the algorithm, and the algorithm says the person’s admitted, so we’re admitting them to college.”

DUBNER: So, you’re saying, like, if I can give a little bit of input to say, “Well, maybe we should also consider this a little bit more strongly.”

DUCKWORTH: Yeah. It’s kind of, like, the idea that, you know, people would be okay with having a self-driving car, or cruise control, right, as long as I know that I can put my hand on that steering wheel and I can change the direction of this car if I choose to. And so, this is, in a way, just, like, a practical engineering question. Like, how do we get human beings to use computers? And what they find over a series of studies, and the studies have a setup where, if you’re a participant, you are asked to forecast what a student’s scores are on a standardized math test, like the S.A.T. And you do this from a profile that you read of, you know, where they come from in the United States, how many times did they take the P.S.A.T. before they took this test, how many friends do they have that are going to college? So, you read this dossier, and you have a choice of whether you will rely on an algorithm. So, you know, “We’ve used lots of data sets, and we have come to this formula to kind of produce what we think the S.A.T. math score will be.” Or you can choose not to. And I think the clever thing about the study is they had different conditions where you had varying degrees of freedom to, you know, overrule the algorithm or not overrule it. The conclusion of the study is: if you have some control, almost any control, honestly — and that to me was the most surprising finding — like, okay, you can change the algorithm’s finding or results by, like, a teeny, teeny, 1 percent or whatever. That was enough to get people to use the algorithm and therefore be more accurate.
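
The "some control" manipulation Angela describes can be sketched as a clamp on how far the human may move the algorithm's forecast. This is a hypothetical illustration of the idea, not the study's actual procedure; the function name and the 1 percent bound are assumptions.

```python
def constrained_forecast(model_prediction, human_adjustment, max_fraction=0.01):
    """Return the model's forecast, nudged by the human within a small bound.

    The human's requested adjustment is clamped to at most max_fraction of
    the model's prediction, preserving most of the algorithm's accuracy
    while still giving the forecaster a sense of control.
    """
    bound = abs(model_prediction) * max_fraction
    clamped = max(-bound, min(bound, human_adjustment))
    return model_prediction + clamped

# Hypothetical example: the model predicts a 600 on the math S.A.T.; the
# human wants to add 40 points but may move the forecast at most 1 percent
# (6 points), so the final forecast is 606.
print(constrained_forecast(600.0, 40.0))   # 606.0
print(constrained_forecast(600.0, -40.0))  # 594.0
```

The counterintuitive finding is that even a tiny `max_fraction` was enough: people accepted the algorithm's output once they could touch it at all.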

DUBNER: I think every human feels awkward and stressed when we feel we don’t have control over a situation, at least to some degree. I mean, I’ve spoken with airline pilots about this for years. There is a lot of automation in flying already. But people do not like the idea of getting onto a plane without a pilot, at least at this moment in time. 

DUCKWORTH: Yeah. I don’t like that idea, to be honest.

DUBNER: And, you know, I’ve heard the pro-automation argument from a lot of pilots, and other people associated with airlines — including, you know, my brother, who is a pilot. He’s not a commercial pilot; he used to be an Air Force pilot. So, when you’re describing this paper by Massey et al., it strikes me as very sensible in that, if we can either participate in changing the algorithm, or even just think we’re participating in changing the algorithm, I can see how we’d have a really different response to it. This gets to be an ethical dilemma. You know, everybody knows, I’m sure, the trolley problem — this thought experiment where you’ve got a trolley that’s going to kill five people who are on a track, and you have the ability to switch it onto a track that kills only one person. So, is that the right thing to do, because you’re killing fewer? And people really struggle with this, because we don’t think about that mathematically. I hate to say it, and I feel bad for the one person that’s going to get run over by the trolley, but I wish we would. I think about this in terms of autonomous vehicles. Now, autonomous vehicles are that invention that’s been just around the corner for, like, 15 years now. I talked to a bunch of people probably 10 years ago who swore that, as of now, we’d certainly do a lot of traveling in autonomous cars, and trucks, and things like that. And it hasn’t come to pass for a variety of reasons, including the fact that the science is harder on the edge cases than it might seem.

DUCKWORTH: The science is harder, meaning it’s hard to get the autonomous vehicle to handle all possible driving scenarios.

DUBNER: Yeah. It’s what they call “edge-cases.” It’s like, weather can interfere, construction can interfere, and then there’s all the other things like pedestrians, other drivers, and so on, and so forth.

DUCKWORTH: Yeah, the unexpected events that don’t fit the algorithm very well.

DUBNER: But, I also think about safety and transportation. So how many people do you think are killed by car crashes — vehicle crashes — globally, in a year?

DUCKWORTH: Oh my gosh. I wish Jason were here. He would know this figure. It’s got to be an alarming number.  

DUBNER: Name an alarming number.

DUCKWORTH: I think it’s got to be over a million.

DUBNER: Yeah. It’s about a million. About a million people a year — 180,000 children every year — are killed around the world from vehicle crashes, roughly 500 children a day. Okay, so imagine this. Imagine we’re in a world where most cars are driven autonomously, and one of those cars gets hacked, or misprogrammed, or something, and it runs into a playground and runs over 10 kids, and they’re killed. What do you think the response would be? This is in a world where many of those 180,000 kids who are currently getting killed each year in crashes are no longer getting killed. But if one self-driving car runs over 10 cute little kids in a playground, what do you think happens?

DUCKWORTH: Well, here I have a very strong prediction, which is, “Okay, that’s the end of self-driving cars.” Like, we are already irrationally penalizing algorithms and robots for being imperfect when human beings are so much more imperfect.

DUBNER: Yeah. Kevin Kelly, who is — gosh, he does a lot of things. He’s a photographer and he is sort of a technologist. He helped create Wired magazine. He always talks about the fact that A.I. — and all technologies — they just really sneak up on us. It’s very rarely all or nothing.

DUCKWORTH: There’s, like, a gradual shift in how we depend on them.

DUBNER: Yeah. And so, I guess what I’m hoping is that we get over our algorithm aversion slowly, and gradually, but still fairly soon.

DUCKWORTH: I want to see that future happen, too, Stephen. And I actually did read a paper that I thought was so clever on this topic. This paper is from Telematics and Informatics, and it was published just last year. The title of the paper is “Who Made the Decisions: Human or Robot Umpires? The Effects of Anthropomorphism on Perceptions Toward Robot Umpires.” So, what this study is about is people’s evaluation of umpires in baseball and what they think about the decisions based on whether it’s a full-on algorithm, or full-on human, or what they call an “anthropomorphized umpire.” These are just online scenario studies. So, you ask people, like, watch this video of this play, and then here’s what the umpire ruled, how do you feel about that? Do you agree with that, et cetera? Here’s the humanized robot umpire — the anthropomorphized robot umpire — “‘Spark’ is a humanized robot umpire that is 1.7 meters tall and 11 months old.” This is like, if you ask Alexa what her birthday is, she’ll tell you. And I say “she,” even though Alexa’s an “it,” right? So, the idea of anthropomorphizing algorithms — artificial intelligence in general — is basically giving them human qualities on purpose. And we do this, right? Like, you know, when I ask Siri to make a call, when I ask Alexa to order paper towels, I know I’m talking to a robot.

DUBNER: So what’s the result of this study, though? How did people judge their calls differently?

DUCKWORTH: Okay, so now I’m going to just read you from the abstract to give you the punchline of the study. “The results indicated that people perceived umpire calls as fairer and more credible and demonstrated greater trust in human umpires than in robot umpires. However, these negative effects were attenuated when robot umpires were humanized by giving them human-like characteristics.” The bottom line is that if you can make a computer look or seem like a human, we’re more willing to accept their judgments.

DUBNER: Like I always say, if you can fake authenticity, then you’ve got it made.

DUCKWORTH: Yes, like you always say. 

DUBNER: So Angela, let me ask you this. Can you point to an area in your life that you are eager to accept more automation or artificial intelligence?

DUCKWORTH: Stephen, I don’t know that I can point to an area in my life where I’m not eager to accept artificial intelligence. I have a robot assistant — I have a human assistant — but I have a robot assistant that handles, I would say, 90 percent of my online scheduling. So, if somebody emails me, I copy “Jamie Johnson,” which is the name that I have given to my robot, and Jamie Johnson will immediately reply — whether it’s three in the morning, or Saturday, or whenever, and say, “Angela is free at these following three times. Like, which of them work for you?” And this robot is so good, Stephen, that most people that I interact with don’t realize it’s a robot. In fact, we got cookies sent in the mail once for Jamie Johnson, and we all had a laugh, because, you know, Jamie Johnson doesn’t need cookies.

DUBNER: Now, what would you say if I told you, Angela, that I, Stephen, am in fact AI-1743 XT Squared Delta Stephen, which is the avatar that Stephen Dubner has programmed to have this conversation with you.

DUCKWORTH: Interesting. What would I say? I would ask you to sit exactly where you are while I come over to your apartment, and we’re both going to go on a little trip to a doctor who’s going to take care of you, and it’s going to be fine. Because when people say that they’re robots, it’s usually a symptom of psychosis. It’s a very common symptom of schizophrenia.

DUBNER: Oh, ye of little faith. You were just saying that you have a personal robot assistant, Jamie Johnson, and yet you don’t think that someone like Jamie Johnson — maybe with a little bit better software — would be capable of having a conversation of the type that I’ve just had.

DUCKWORTH: I guess that’s what I’m saying, Stephen.  

*      *      * 

No Stupid Questions is produced by me, Rebecca Lee Douglas. And now, here is a fact-check of today’s conversation.

In the first half of the show, Stephen says that the strike zone extends from a batter’s armpits to their knees. According to the official MLB rules, it’s actually from the midpoint between the batter’s shoulders and the top of their pants to right below their kneecaps.

Later, Stephen mentions that MLB has considered a few different implementations of robot umpires — including the possibility of the machine announcing calls or a human umpire relaying calls fed to them by the machine. Another option under consideration is a replay review system that would allow each team’s manager to challenge a limited number of called balls and strikes during each game. As Stephen mentioned, a version of this system is currently in place for fouls, interference, and other calls such as missed bases.

Also, Angela and Stephen argue that sophisticated algorithms — or even primitive algorithms — are usually better than human judgment. However, we should note that algorithmic bias is a real concern. An algorithm is only as good as the data used to build it. For example, in 2015, Amazon realized that its AI hiring tool was biased against female applicants, because the model had been trained on resumes that mostly came from men. The system ingested and replicated the gender bias that already existed in the tech industry.

Finally, Stephen says that, in tennis, most out calls are made by the computerized system known as Hawk-Eye. In 2022, Hawk-Eye determined all line calls in the US Open and the Australian Open, but during Wimbledon it only weighed in on the calls that were challenged by players. And the French Open doesn’t use automation for line calls at all — it’s played on a clay court, so the ball leaves a mark where it lands.

That’s it for the fact-check.

Before we wrap today’s show, let’s hear your thoughts on our recent episode on how to break a habit of jaw clenching or teeth grinding. Here’s what you said:

LYNN CHEN: Hello. My name is Lynn Chen. I live in Los Angeles, California. And I just had to send a voice memo, because I recently cracked two dental guards in my sleep, because I grind my teeth. And I finally went to go get Botox in my masseter muscles a few weeks ago and am still waiting to see if this actually helps. But in the meantime, I was just so excited to find out that there are two more things that I have in common with Angela Duckworth. We’re both Asian Americans who have mothers named Theresa. We both love Diet Coke. And now, we both grind our teeth and have Botox in our jaws. Very exciting for me. Thanks for the show!

CARLY WYNN: Stephen and Angela, I have to tell you, I have a wild story about stopping teeth grinding. What I decided to do was a very unconventional, somewhat self-harm-inducing — so, I may not recommend this — approach. I took a little piece of, like, a flosser, and wrapped it around one of the wires in my removable retainer, and then positioned it just on the inside of my gum, such that when my mouth was relaxed, it was just resting there next to my gums and not poking me. And when I closed my teeth together, it poked me in the gum and woke me up. And through that, I stopped grinding my teeth. Just thought you guys might want to know since you asked how to stop teeth grinding. That’s how I did it. Worked like a charm.

LANDREY FAGAN: My name is Landrey Fagan. I’m a family physician out of Boulder, Colorado. I’m calling to say that my mouth guard has changed my life. I used to have severe bruxism with tension headaches leading to migraine headaches with scintillating scotomas, and my mouth guard that my dentist made for me has completely reversed all the symptoms. I really appreciate the advice and everything that you share, but Angela, you’re wrong. You should definitely, definitely consider a mouth guard once you’re done with Invisalign.

That was, respectively, Lynn Chen, Carly Wynn, and Landrey Fagan. Thanks so much to them and to everyone who sent us their stories. And remember, we’d still love to hear about your thoughts on algorithms. In what area of your life are you eager to accept more automation? Or in what area are you particularly skeptical of it? Send a voice memo to Let us know your name and if you’d like to remain anonymous. You might hear your voice on the show!

Coming up next on No Stupid Questions: What’s going on with people who love to be scared?

DUBNER: Why are people into horror movies? I couldn’t for the life of me bring myself to watch one.

That’s next week on No Stupid Questions.

*      *      * 

No Stupid Questions is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, People I (Mostly) Admire, and Freakonomics, M.D. All our shows are produced by Stitcher and Renbud Radio. This episode was mixed by Eleanor Osborne. We had research help from Katherine Moncure. Our staff also includes Neal Carruth, Gabriel Roth, Greg Rippin, Julie Kanfer, Morgan Levey, Zack Lapinski, Ryan Kelley, Jasmin Klinger, Jeremy Johnston, Daria Klenert, Emma Tyrrell, Lyric Bowditch, Alina Kulman, and Elsa Hernandez. Our theme song is “And She Was” by Talking Heads — special thanks to David Byrne and Warner Chappell Music. If you’d like to listen to the show ad-free, subscribe to Stitcher Premium. You can follow us on Twitter @NSQ_Show and on Facebook @NSQShow. If you have a question for a future episode, please email it to To learn more, or to read episode transcripts, visit Thanks for listening!

DUBNER: Wait, are you saying you have both Alexa and Siri in your home? Don’t they compete?

DUCKWORTH: Well, no, because one’s called “Alexa” and the other’s called “Siri.”

DUBNER: You’re using the Amazon ecosystem and the Apple ecosystem.  

DUCKWORTH: I’m ambidextrous, it’s true.



  • Daniel Chen, senior researcher at the French National Centre for Scientific Research.
  • Berkeley J. Dietvorst, professor of marketing at the University of Chicago.
  • Daniel Kahneman, professor emeritus of psychology and public affairs at Princeton University.
  • Kevin Kelly, co-founder and senior maverick at Wired magazine.
  • Benjamin Lira Luttges, doctoral student at the University of Pennsylvania.
  • Cade Massey, professor of operations, information, and decisions at the University of Pennsylvania.
  • Toby Moskowitz, professor of finance at Yale University.
  • Kelly Shue, professor of finance at Yale University.
  • Joseph P. Simmons, professor of applied statistics and operations, information, and decisions at the University of Pennsylvania.
