
Episode Transcript

My guest today, Nate Silver, is the founder of the data-driven website FiveThirtyEight. He’s also the author of the bestselling book The Signal and the Noise. Silver is best known for his shockingly accurate election predictions, but that’s just the tip of the iceberg.

SILVER: I think I have strength in dealing with imperfect information and dealing with uncertainty and kind of refining best guesses.

Welcome to People I (Mostly) Admire, with Steve Levitt.

What I love most about Nate Silver is that he has such amazing instincts, both for analyzing and writing about data. Almost every time I read something he’s written, I have the same reaction: damn, I wish I’d written that.

*      *      *

LEVITT: So let’s start with the topic you’re most famous for, and that’s predicting election outcomes. In 2008, your first foray into political prediction, you correctly predicted 49 out of 50 states in the Electoral College. And then you, against all odds, did even better in 2012, getting every single state right. And those two election cycles led people to believe you were some kind of a messiah or an oracle, and I have to imagine that’s both a blessing and a curse, right?

SILVER: Very much so. I remember telling my literary agent that I am being set up to fail here. Inevitably, there’ll be a time when the low probability outcome comes up. Although in 2016, it wasn’t that low, actually. We had Trump with a 29 percent chance of winning on election day. But yeah, it led to this misunderstanding of what I do and what the data science behind election forecasting is. A lot of the time we wind up being less confident than the conventional wisdom about an outcome. That’s where the numbers point, right? Polling has been error prone in the past. It will be error prone again.

LEVITT: So you’ve been miraculous in 2008 and 2012 and the 2016 election comes along. It’s Hillary Clinton versus Donald Trump and every respectable prognosticator had Clinton favored, but you actually had Trump with a pretty good chance, 29 percent chance of winning. And afterwards, I heard a lot of people saying, “Oh, Nate Silver and FiveThirtyEight, they really blew it in 2016,” which is a completely predictable response, but I think absolutely the wrong takeaway. And I’m sure you must agree with me that there were much more useful conclusions to draw from 2016, a real opportunity to start understanding better the nature of predictions.

SILVER: A lot of people who are into political forecasting just want to hear that their guy is going to win. So you build up a large audience of progressive Democrats who think, “Oh, here’s this guy that always has good news for Obama and for Democrats,” and then when that’s not the case, it causes a lot of cognitive dissonance. So it kind of felt like that in 2016, where our model did have Hillary Clinton favored, but less so than the conventional wisdom had her. So it was kind of like a thankless position where you’re like, “Yes, this thing that you think is unlikely is actually less unlikely than you think, but still below 50 percent.” That’s a hard message to convey in the heat of an election campaign. I am a guy who thinks about markets, right? Like, I play poker. I make sports bets sometimes. So to me, if the consensus was that Trump had like a 15 or 18 percent chance, which was the odds at prediction markets, and if you’re at 29 percent, then you would actually make quite a lot of money betting on Trump. Now, why was he more likely than people assumed? It’s partly because of a mathematical property of the way the Electoral College works, where people would look at these polls and say, “Well, Clinton’s ahead in Wisconsin, and she’s ahead in Michigan, and she’s ahead in Pennsylvania, and she’d have to lose all those states to lose, right? And what are the chances of that?” The issue is that all those states are correlated. They have the same, basically, white working-class voter base, so when Trump does better than expected in Wisconsin, he’ll probably also do better in other Midwestern states, Michigan, Ohio, Pennsylvania, and so forth. And so our model realized that these things are highly correlated, that being up a tiny bit in a lot of states is actually not all that good, because if there’s a uniform swing in one direction, then all of a sudden you lose all these states by a point or two, instead of winning them by a point or two. And that’s basically what happened: Clinton’s support was overestimated in the upper Midwest, and that’s a critical region in the Electoral College. And then you get Donald Trump as president.
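
To put numbers on that bet, suppose the market price really was 15 percent and the model’s 29 percent was the true probability (an illustrative calculation; a winning one-dollar bet at a 15 percent price returns about 1/0.15 in total):

$$\mathrm{EV} = 0.29 \times \frac{1}{0.15} - 1 \approx 0.93$$

That is roughly 93 cents of expected profit per dollar staked, before fees.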

LEVITT: So the point you just made is that it’s not that there are 50 independent shocks, with each state getting some random draw. It’s that there’s a small set of shocks, and they trickle across states. Is that why you think other people’s models gave a higher percentage chance to Clinton? Because essentially they were getting the standard errors wrong?

SILVER: That’s the main reason. The Huffington Post, for example, had a model that had Clinton with a 99 percent chance of winning. There was a model at Princeton that was at 99-point-some percent. If you remove the part of your model that says that these states are correlated and not independent, then you’ll get a really overconfident answer. There are some other subtle things, too. Our model had priced in the fact that in 2016, you had a big third-party vote. You had a lot of undecided voters, so there were more votes up for grabs than usual. A lot of people who say they’re going to vote third party actually wind up, under the pressure of the ballot booth, picking one of the two major parties. But the main thing was just that you cannot treat this as 50 independent contests. It’s the same two people on the ballot in every state. And I grew up in Michigan. People in Michigan are not that different from people from Wisconsin or Ohio, despite rooting for different football teams and so forth.
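
A minimal simulation makes the correlated-versus-independent point concrete. The setup below (three tipping-point states, a 2-point lead in each, a 3-point polling error) is an illustrative sketch, not FiveThirtyEight’s actual model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims = 100_000

lead = 2.0    # hypothetical polling lead, in points, in each of three states
sigma = 3.0   # hypothetical standard deviation of the polling error

# Assumption 1: each state's polling error is its own independent draw.
indep = lead + sigma * rng.standard_normal((n_sims, 3))

# Assumption 2: mostly one shared swing across the region, plus small state noise.
shared = sigma * rng.standard_normal((n_sims, 1))
corr = lead + shared + 0.5 * rng.standard_normal((n_sims, 3))

for name, margins in [("independent", indep), ("correlated", corr)]:
    p_sweep = (margins > 0).all(axis=1).mean()
    p_collapse = (margins < 0).all(axis=1).mean()
    print(f"{name}: P(win all 3) = {p_sweep:.3f}, P(lose all 3) = {p_collapse:.3f}")
```

In this toy setup each state is about a 75 percent favorite under both assumptions, but the chance of losing all three is under 2 percent with independent errors and roughly 25 percent with a shared swing. That gap is, in miniature, the difference between a 99 percent forecast and one that left Trump a 29 percent chance.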

LEVITT: Yeah, exactly. You got back in form in 2020, right? You got 48 out of 50 states. And I have to say, I was surprised to see, reading your new Substack account, that you’re not sure whether you’ll even cover the 2024 election. And I’ll believe that when I see it, because the demand for your forecast is going to be intense.

SILVER: The issue is that people look at me as some avatar for — I don’t even know what anymore, right? But there’s a lot of pressure to convey information to people who are not necessarily in the mood for rigorous analysis at all. You have people who feel very strongly emotionally about this election. But I think people have trouble grasping the idea that an election is one event drawn from a larger sequence — a reference class, is the nerdy way to put it — and parties don’t want you to believe that elections are probabilistic. They want you to think that our guy is the righteous guy to inhabit the White House and that you as the voter control this process. But yeah, it’s a little bit of oil and water as far as what the audience wants versus what a probabilistic forecast can really provide.

LEVITT: I don’t think I’d be exaggerating if I said that you are the number one celebrity data scientist in America — that if we polled a representative group of Americans and we asked them to name a data scientist, your name would come up more than any other. And I love that for two reasons. The first is because I have great admiration for what you do with data. And the second reason is that when it comes to data science, I think you’re essentially self-taught. You don’t have any fancy credentials like a Ph.D. You didn’t even major in the right subject in college. You were an economics major in college, whereas the kind of people who get hired as data scientists at fancy tech firms, they tend to be statisticians or computer scientists by training. And I’ve always argued that the most important determinant of a great data scientist isn’t knowing lots of complicated techniques. It’s having common sense and curiosity, a knack for asking good questions, and the ability to tell a good story with data. Your success, I think, should be an inspiration to every budding data scientist who fits that bill. So that’s my explanation for your success. What do you think the secret is to your success?

SILVER: I mean, it still is a little bizarre. First of all, let me say one thing. I do think, actually, the fact that I was an economics graduate at the University of Chicago, by the way, is worth pointing out, because I think economists are good at framing questions that can be answered rigorously, ideally with data.

LEVITT: Yeah, I think of economics as essentially applied common sense. That’s why it’s a good precursor to being a data scientist.

SILVER: I also have a lot of hands-on experience in weird ways, from playing poker, from building fancy models for fantasy baseball, and things like that. It’s weird to be someone who’s not terribly quote unquote “political,” you know, to be very caught up in elections, and then people are making inferences about your political preferences based on what your forecast says — that’s been a little bit of a weird journey. I think being in the right place at the right time helped, too. I mean, like, interest in American elections increased vastly beginning in 2008 with the rise of Barack Obama. We certainly have had a series of very close and dramatic and interesting elections, right? Any country that can elect both Obama and Donald Trump back to back is a complicated country. It’s been interesting having a front-row seat at this very confusing time, in some ways, to be an American.

LEVITT: I want to go back to you as a data scientist, though, because if you were going to be a physicist or a chemist, there’s no way that you could be a wildly successful physicist or chemist by being a poker player and building some models. And it’s interesting to me, fascinating to me, that somebody like you — as far as I can tell, you just were interested in questions, and you would gather data and you’d try to make sense of the world. And you just did that over and over and got better. To be fair, I would say I’m a self-taught data scientist, too. But is your identity that of a data scientist? Or how do you think of yourself, even?

SILVER: It depends on what I’m doing. I don’t have the kind of engineer-physicist brain, right? I think I have strength in dealing with imperfect information and dealing with uncertainty and kind of refining best guesses. I think of the world in provisional and probabilistic terms. I literally was a professional poker player for a couple of years in my mid-twenties. One disadvantage of academia is that it’s a little bit slow-moving, potentially. It takes a long time to publish papers, where if you’re publishing an article — formerly at FiveThirtyEight, now at my Substack, called Silver Bulletin — you’re trying to do 80 percent as good a job in like 10 percent or 2 percent as much time. And believe me, I think there is value in, like, rigorousness. But having a lot of reps at looking at a data set quickly and getting the gist of it, with some degree of uncertainty, is more often than not, I think, a valuable skill set.

LEVITT: It’s also more fun. That’s the thing that saps the fun out of academics for me. I love coming up with ideas. I love finding a pile of data and making sense of it and getting the basic idea, and that takes maybe 10 percent of the entire time of doing an academic paper. And about 50 percent of the time of doing an academic paper is addressing 74 different possible criticisms, which you know can’t matter, but that somebody might raise. And so in order to get published, you need to essentially rule out or at least discuss or know about each of those. And that’s just not that much fun for me compared to the quick-hit version. Sometimes you’ll be wrong even when you do the really rigorous thing because you don’t see the whole story, but the payoff — both, I think, in terms of getting the answers and just in terms of the pure joy of doing the research — is much higher in a blog world than in an academic world.

SILVER: And it’s also — I don’t mean to criticize academic writing — but it is a little bit of a straitjacket. There is some art as far as, like, conveying what degree of epistemic certainty I have about this conclusion, right? Is this kind of like a conjecture written in dry-erase marker, or is it something I’m going to engrave in limestone because I’m so confident of it? In academic writing, it almost always sounds like the latter, even when you’d prefer the former.

LEVITT: The problem comes down to the nature of the adversarial relationship between the author and the anonymous referees, whose job it is to say that a paper should or shouldn’t be published. And almost never will a referee say, “Hey, this kind of seems mostly right. You should publish it.” If it isn’t a thousand percent right, then they say don’t publish it. And the consequence is that in academics — what I ended up doing was biting off tiny little problems that I knew I could answer and avoiding going after big questions where there might be question marks left over even after I did the best job I could.

SILVER: And yet we still have a replication crisis. But yeah, it leads to these kinds of very precise answers that sometimes aren’t the most accurate answers, is the way to put it, right? You’re answering a very particular question with particular conditions or specifications. There are so many ways that, as a researcher, you can make different choices with how you handle a data set. I mean, I don’t think it’s bad, but it can put you in a very defensive mentality, potentially.

LEVITT: I actually changed the way I wrote papers after a while, with exactly this spirit in mind. Usually what economists do, which is crazy scientifically, is they come up with a theory, they go to the data, they test it, and inevitably, the data don’t match the theory. So then what the economist does is say, “Okay, I need a different theory that matches the data.” And they come up with a different theory. And then the way they write it up is, “Here’s the theory, and it is validated by the data.” It is so unscientific, it’s almost unimaginable. But that is the form most academic papers take. But I decided to do something very different. What I began to do later in my career was to say, “Hey, here’s a data set, I’m just going to describe what’s in the data. These are the correlations in the data. And correlations aren’t what we care about, we care about causality, but what I see are correlations.” And I would try to come up with every theory I could think of, and then I’d say, “Which of those theories are or are not consistent with the data?” And then what I’d try to do is say, “Well, is there anything else in the data that I hadn’t thought of before that might distinguish these theories and then go back and try to add it?” And it’s a much more agnostic way, and I think a much more scientific way, of doing empirical analysis, but it’s hard to get papers published that way because it doesn’t have this veneer of the quote “scientific method” where you go out and you have a hypothesis and you test it and you show that the data are consistent with it.

SILVER: And we’ve also evolved to a place politically in the U.S. where the kind of phrase, “Oh, trust the data,” or “trust the science,” or “trust the expertise” has become a little bit loaded. And if you know anything about science, then it is kind of a somewhat adversarial process, right? And a lot of facts are seen as provisional and are subject to change and are subject to scrutiny, certainly, and skepticism. And where are we in that balance between, like, healthy and unhealthy skepticism? I don’t know. But if you’re in a paradigm where admitting doubt is seen as weakening your argument, then that’s not a very scientific way of thinking, I don’t think.

We’ll be right back with more of my conversation with Nate Silver after this short break.

*      *      *

LEVITT: Let’s take an example of something you’ve done recently, which I think is fascinating from the perspective of this conversation we’re having. It’s in your Substack, the Silver Bulletin. And it’s about Covid-19. And I thought the results themselves were really interesting, but what I especially liked is the way you talked about the results. Could you just lay out the question you were trying to shed light on about Covid, your empirical strategy, and your findings?

SILVER: It was a Friday afternoon, and kind of what inspires me to write particular posts I’m never quite certain of. But what I thought was a relatively straightforward finding was that until the introduction of vaccines, so early 2021, basically there was no relationship to speak of between the political orientation of a state and how many people were dying from Covid. So you had, for example, some blue states like New York, New Jersey, Massachusetts that had very high death rates. You also had some red states, Arizona, the Dakotas, and whatnot, that had very high death rates from Covid. Not much of a correlation. And then, once you can get vaccinated, you see pretty strong correlations. The top of the list of Covid deaths is almost all red states. The bottom of the list is almost all blue states. Not perfect, but having looked at lots of data sets when it comes to American politics, you know when you see the red states and the blue states lined up in a particular way. And the reason here is not because, like, Covid targeted Republicans, but because Republicans were quite a bit less likely to get vaccinated.

LEVITT: What I like about what you do is it’s often very simple, easy to explain, and plausible. By comparing the before-vaccine period to the after-vaccine period, you’re trying to essentially create something like a control group. Something changed, and that’s the introduction of the vaccine, and according to the data, it very differentially affected states that were heavily Republican versus heavily Democratic. And these are big magnitudes. Do you have a sense of how many extra deaths you might be talking about in Republican states because of less vaccination, if the story is true?

SILVER: So if you just lump all red states and all blue states together, meaning based on how they voted in 2020 and 2016, then the red states are about 35 percent higher — 

LEVITT: Higher in death rate.

SILVER: Yeah, but the very red states, like West Virginia, it’s a larger gap than that potentially. But yeah, if you just put them into groups, then about 35 percent higher.
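
The quick-and-dirty grouping Silver describes takes only a few lines of pandas to reproduce. The file and column names below are hypothetical placeholders; any state-level table with Covid deaths by period, population, and 2020 vote share would work:

```python
import pandas as pd

# Hypothetical file with columns: state, deaths_pre (before vaccine availability),
# deaths_post (after), population, gop_share_2020. All names are placeholders.
df = pd.read_csv("state_covid.csv")

# Classify each state as red or blue by its 2020 two-party vote share.
df["lean"] = (df["gop_share_2020"] > 0.5).map({True: "red", False: "blue"})
totals = df.groupby("lean")[["deaths_pre", "deaths_post", "population"]].sum()

# Deaths per 100,000 residents in each period, for each group of states.
rates = totals[["deaths_pre", "deaths_post"]].div(totals["population"], axis=0) * 100_000
print(rates.round(1))

# The red/blue ratio per period: Silver's roughly 35 percent post-vaccine gap
# would show up as a ratio near 1.35 in deaths_post and near 1.0 in deaths_pre.
print((rates.loc["red"] / rates.loc["blue"]).round(2))
```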

LEVITT: So I’ll tell you what I thought was interesting about this: the real question isn’t about politics, the real question is whether the vaccine works. But it’s not that easy to figure out in real-world data whether the vaccine works, because of selection in who gets it and whatnot. And this is an interesting — maybe not cut-and-dried, maybe not completely overwhelmingly convincing — but an interesting case of thinking about data creatively to try to get at causality in a world where it’s not that easy to get at causality. And there are other ways to do it. And what I also liked about your Substack is you did this simple aggregate analysis, and then you referred to a fascinating study that had actually gone into registration records to do the same thing. Could you talk about that study as well? Because I thought that was really interesting. I hadn’t seen that study before.

SILVER: Yeah, this was a Yale study. I think it was a collaboration between the management school and the public health school. And they actually are going to the individual-level data. They’re looking up voter registration records in Ohio and Florida and running a search of people who died during that period. And they found the same thing — that up until January or February 2021, neither party’s registered voters had higher excess deaths. And then once vaccines are available, then Republicans do. And the good thing about this is, A, they have individual-level records, and B, because they’re confining this analysis to individual states, Florida and Ohio, there’s less, like, regional luck of the draw in where Covid kind of happens to land or where a particular strain might have more effect. So they’re controlling for a lot of the things that my kind of quick-and-dirty analysis didn’t, and they find the same thing. And that’s, again, as a researcher, when you start to say, okay, here are two pretty different methods and they have a similar result, one’s more involved, one’s simple. That starts to be pretty robust more often than not.

LEVITT: The other thing that I found really interesting, and it’s a little bit behind the scenes, is the fact that there wasn’t a difference between the red and blue states before the vaccine. And presumably if the Republicans didn’t like the vaccines, they also didn’t like a lot of the other policies we were doing to try to fight Covid, like restricting social contact. The implication is that maybe those other policies weren’t working very well, which is interesting because we just don’t talk about that as much as we should, thinking about future epidemics and what maybe we should be doing.

SILVER: What I think it does is put an upper bound on how effective those other measures were in practice. That a non-finding doesn’t mean the effect was zero — it means the effect was uncertain — is a subtle point that people miss sometimes.
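
Silver’s upper-bound logic is the standard confidence-interval reading of a null result: a difference estimated near zero still comes with an interval, and the top of that interval caps how large the effect could plausibly have been. A minimal sketch, with made-up pre-vaccine death rates:

```python
import numpy as np

# Made-up pre-vaccine death rates per 100,000 for red- and blue-state groups.
red = np.array([150.0, 180.0, 120.0, 170.0, 160.0, 140.0])
blue = np.array([155.0, 175.0, 130.0, 160.0, 165.0, 150.0])

diff = red.mean() - blue.mean()
se = np.sqrt(red.var(ddof=1) / red.size + blue.var(ddof=1) / blue.size)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

# An interval straddling zero doesn't prove the other measures did nothing;
# it caps how big their red/blue difference could plausibly have been.
print(f"difference: {diff:.1f} per 100k, 95% CI: ({lo:.1f}, {hi:.1f})")
```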

LEVITT: Okay, so you did this Substack post, and it said things you thought were pretty sensible and well supported by the data, and so the critics come. So tell me about that and how you think about criticism in this context.

SILVER: This seemed like a very kind of moderate and sensible position to me. “Hey, the vaccine’s made a clear, obvious difference. And then there’s stuff that’s not so obvious, and so maybe it wasn’t worth it.” That seems like a very centrist kind of take, but instead it gives people two different ways to get pissed off at you, right?

LEVITT: Here’s what I found entertaining as an outsider watching you go through this, but what I find incredibly frustrating when I’m the one being attacked: there’s a real imbalance between the amount of work it takes to produce a thoughtful, data-driven piece that makes a sensible point about the world and how little time it takes to criticize empirical work, right? That drives me crazy, that mismatch between how hard it is to produce and how easily people can sway readers just by criticizing you without any support.

SILVER: A big culprit in this is also Twitter, or I guess it’s now officially called X. The fact that you can take a position and write a pithy tweet in a minute or 20 seconds I think makes this issue worse and leads to a lot more tribal rivalry and kind of dunking on people. I have kind of quite self-consciously pivoted away from Twitter toward Substack, toward my newsletter there, for many reasons. One is that you can get subscribers, including paying subscribers. And so at least if you’re the subject of some annoying controversy, you can make a little bit of money off it now. But also to be able to control the tone and say, “Look, I’m gonna take four hours with this subject and not four minutes.”

LEVITT: So I want to go back to the book you wrote. It’s called The Signal and the Noise. And knowing we’d be talking, I went back and I took a look at it for the first time since right after it came out. And I have to say it’s really an awesome book. You make a lot of simple but I think important points. So to me, the most succinct summary of the book is this. You wrote: “We need to stop and admit it: we have a prediction problem. We love to predict things, and we aren’t very good at it.”

SILVER: Yeah, that’s the thesis of the book, basically. So go out and buy it! I think the irony — it comes up a little bit with the Trump prediction in 2016; I’d call it a forecast, technically — but, like, people really demand certainty, right? They assume that if someone’s an expert, they must know all the answers with a high degree of confidence, when there are times, like in 2016, where the right answer is: be less certain. That’s a kind of hard message to sell, and it’s not getting any easier in the days of Twitter and other social media, where people have access to a stylized interpretation of facts that flatters their political and other preferences. But human beings have failed at prediction in so many domains, including economics, right? Economics is notorious for challenges in predicting macroeconomic conditions. Problems that we thought were solved, like inflation, obviously weren’t in the past couple of years. So there really aren’t very many examples of successful prediction. Exceptions include weather forecasting. Twenty-five or 30 years ago, weather forecasting was literally a joke; there was very little predictive power more than a couple of days out. And now they can precisely say, next Tuesday at 3 p.m., 80 percent chance of rain. It’s quite useful. So what makes weather forecasters good are a couple of things. One is they actually have physical models of the world. It’s not just purely statistical. That helps a lot. And also they have a lot of practice, where if you make forecasts every day, 24 hours a day, of temperature and wind and pressure and all these other variables, then experience really helps. You get a lot better calibrated if you get a lot of feedback, knowing when you’re right and when you’re wrong.
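
The calibration feedback loop Silver credits weather forecasters with is easy to check once forecasts are logged next to outcomes: bin the stated probabilities and compare each bin with the observed frequency. A minimal sketch on synthetic data, simulating a well-calibrated forecaster:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Synthetic history: stated rain probabilities, and whether it actually rained.
p_forecast = rng.uniform(0.0, 1.0, n)
rained = rng.uniform(0.0, 1.0, n) < p_forecast  # simulates perfect calibration

# Compare each forecast decile's stated probability with the observed frequency.
bins = np.clip((p_forecast * 10).astype(int), 0, 9)
for b in range(10):
    in_bin = bins == b
    print(f"forecast ~{10 * b + 5}%: rained {rained[in_bin].mean():.0%} of the time")

# One-number summary: the Brier score (mean squared error; lower is better).
print("Brier score:", round(((p_forecast - rained) ** 2).mean(), 3))
```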

LEVITT: So you say we’re bad at predictions, and then you get into what you think the main reasons are that we fail. One reason you raise is that we focus on those signals that tell the story about the world as we would like it to be, not how it really is.

SILVER: This is especially true if you cover politics for a living and cover elections and polling. I can guarantee you that, like, in October 2024, you’ll have Republicans making these grand claims that Trump is going to win based on the data, and Democrats saying the same thing about Biden, and it’s kind of funny how people just don’t have an awareness of, like, how much confirmation bias they have. They might intellectually understand in some abstract way that confirmation bias exists, but partisan political preferences train you to see everything in a blinkered, partisan way. There’s no particular reason that your view on marginal tax rates should correlate with your view on abortion, for example, or, like, transgender rights or something like that. But parties try to get people to form coalitions by agreeing on a bunch of unrelated stuff. And it’s almost like a recipe for confirmation bias. I think about the game theory of politics a little bit, right? It’s not a coincidence that most presidential elections are about 50-50. The parties are very efficient in some ways at forming coalitions. But that means they’re taking complicated human affairs and complicated people and voters and smooshing them all down into one dimension. And so that’s a recipe for being yelled at if you have heretical, complicated political views.

You’re listening to People I (Mostly) Admire with Steve Levitt and his conversation with Nate Silver. After this short break, they’ll return to talk about how online poker led Nate to the world of political forecasting.

*      *      *

Disney’s ESPN acquired Nate’s website FiveThirtyEight in 2013, but Disney gutted his team there earlier this year as part of broader cost-cutting efforts. I’m much more interested in how he grew FiveThirtyEight into what it was than I am in the Disney fallout. But I guess I need to ask him about both.

LEVITT: So I’ve been focusing on Nate Silver, the data scientist, but you’ve also been pretty entrepreneurial. What was the origin of FiveThirtyEight? Do I remember correctly that you were doing some kind of an anonymous political blog or something like that?

SILVER: So to give my kind of very brief life history, I went to the University of Chicago, graduated in 2000, got a job working at —

LEVITT: Wait, let me ask you. How come you didn’t take my class?

SILVER: I think your classes were, like, not at the right time of day. I was lazy as a student. So I had, like, four-day weekends every weekend, not the most productive period of my college career, I will tell you that much. But yeah, I graduated, I got a job as a consultant for KPMG, which is a big accounting firm, found it pretty boring, but on the side began doing a couple of things. One is that I started playing internet poker, which was in a boom period back then. And I also started working on models to forecast how Major League Baseball players would do. I basically had a lot of free time at work and wound up quitting to do a combination of those things. I was actually making most of the money playing poker, but putting more of the time into working on baseball statistics — sabermetrics is the more technical term for it. I did those things for about three years. In 2006, the U.S. Congress passed a law that essentially banned internet poker. What it really did was kind of ban payment processing to online poker sites, so there were workarounds, but a lot of the liquidity in the market dried up. So, having my livelihood destroyed by this act of Congress, I actually became more interested in the congressional elections that year. I wanted to see the people who had passed this legislation voted out of office.

LEVITT: Oh, vindictive, huh?

SILVER: Jim Leach was this kind of very old-school, moderate congressman from Iowa who was a lead sponsor of the bill. And he actually did lose to a random political science professor who was backed by poker players’ contributions. Dave Loebsack, I think the name was. I was living in Chicago at the time, and it was hard not to be compelled by the 2008 election. You had all these megastars — you had Hillary Clinton running, you had Barack Obama running, who had been a law professor at U of C and kind of a favorite of, like, campus progressives, of course. But also John McCain was a remarkable American war hero, and Sarah Palin was a phenomenon that was a precursor in some ways to some type of Trump-style populism. And so all that star power — coupled with this increasing interest among the population for Moneyball-style this, Freakonomics-style that, right, more kind of data-driven analysis — led me to start writing, originally anonymously, on this site called Daily Kos, which is actually a very liberal-leaning site.

LEVITT: And why anonymous?

SILVER: I was afraid since I was mostly known as a baseball writer that people would not want to hear my political analysis.

LEVITT: Even though you had a reputation for being a really good data scientist in this other domain, you thought it was better to be an unknown. That’s funny.

SILVER: And also to have people evaluate the work for itself and not have my name attached to it. But sooner or later, I realized, okay, it doesn’t make much sense to be, like, writing for someone else’s site. So I founded FiveThirtyEight and began building these models, initially to see whether Obama or Hillary Clinton had a better chance against McCain, and also some forecasting of, like, the primaries. And the site just did way better than I would have thought. This was still an era where it was the beginning of Twitter; things could go very viral very fast, and all of a sudden you’re getting more page views than some mid-sized newspaper. And so it just wound up, in a mostly good way, taking over my life. And there was just, like, tremendous demand at the intersection of interest in politics and interest in data, and it became kind of a viral phenomenon unto itself.

LEVITT: So this was a one-man show for how long? Or did you start hiring employees right away? Or how’d that go?

SILVER: I always was, and still am, the guy who builds all the models himself, right? Other people might write for the site. There were several paid guest writers and some unpaid guest writers, but, like, I always was very hands-on as far as the models go.

LEVITT: So just looking at my own academic career, as I got resources and started hiring people, the only thing I really found that I could delegate very well was data analysis and modeling. And that was the part I liked best. And I made this terrible career move of giving away the part of the process I liked best, but I didn’t feel like I could have people write my academic papers. And I didn’t feel like other people could have ideas that I wanted to write about. It’s interesting to me that you’re saying you managed to avoid that fate at FiveThirtyEight. You really kept control of the modeling, which I’m impressed by.

SILVER: Well, I should say there were lots of people who were very helpful as far as collecting and updating data, as far as, like, building these beautiful graphic interfaces and visualizations. But the actual kind of data work itself — I still code the election models in Stata, and it is very labor-intensive. Right now, all the models are in a state where I think they’re pretty complete. I don’t plan to make major revisions, but yeah, there was a period of, like, a year and a half where I rebuilt our presidential election model, our presidential primaries model, the congressional model, and an N.B.A. model, and that was just a lot of very late nights trying to debug some code.

LEVITT: Eventually, you partnered with The New York Times, and you did that for three years. And I think there’s this general impression that when you’re creating content for The New York Times, it must mean you’re getting paid a bunch of money. But at least for me, and when Stephen Dubner and I had a regular column in The New York Times Magazine, and then we had our Freakonomics blog affiliated with The New York Times, my memory is that it wasn’t lucrative at all. And in fact, I think I remember trying to talk you out of partnering with The New York Times. Is that true? Or is that just my imagination?

SILVER: Journalism is a hard field to make money in. So they paid me decently well, but it is nice now being on Substack, where you write a good post and you get dozens of notifications in your inbox saying people have signed up. And if it’s a paid subscription, there’s a pretty substantial lifetime value from a subscriber. And yeah, it is very weird now kind of having this very direct incentive to write good stuff, or write stuff that pleases people, which might not be quite the same thing, after years of having, at The New York Times and at ABC News, no incentive-based compensation at all.

LEVITT: So you jumped over to ABC. And then earlier this year, the nightmare scenario unfolded, with Disney doing a bunch of cost-cutting throughout the organization, and they savaged FiveThirtyEight.

SILVER: Yeah.

LEVITT: What’s your reaction to that?

SILVER: To give, like, the slightly longer history, FiveThirtyEight was acquired by ESPN in 2013, which is part of Disney, and then transferred to ABC News, another part of Disney, in 2018. So one thing I’d say is it wasn’t a big surprise. Look, when ESPN bought FiveThirtyEight, this was an era when ESPN thought they were the best business in the world: we have these guaranteed subscriber fees and we show the N.F.L. and the N.B.A., which are incredibly robust products, and our business will not be disrupted, right? We have a huge profit margin. And so actually with businesses like FiveThirtyEight — they had no, like, business model at all. They were purely looked at as loss leaders. Even though they, I think, could be very good businesses, right? I mean, they have very loyal audiences. They have very, frankly, affluent audiences. But if you don’t start running something like a business to begin with, then it’s not in the DNA. You literally have nobody whose job it is to really sell ads, for example, or find other ways of monetizing the site, like subscriptions. It’s nobody’s job, and so it doesn’t happen. And as Disney hits more headwinds, with people cutting the cord, and theme parks and the pandemic, and every part of the business now under threat in some way, shape, or form, you just become a sitting duck at some point in time.

LEVITT: Let’s talk about your new book. You’ve been at it for a long time. I can see talk about it online going back at least to 2021. This must be some book you’re working on.

SILVER: Yeah, so the subject of the book is gambling and risk, which is an ambitious subject. It starts out literally in the world of capital G gambling, so the first two chapters are about poker, and there’s a chapter about the history of Las Vegas, the history of casino gambling, there’s a chapter on sports betting. That’s the first half. Then there’s chapters on venture capital and the cryptocurrency bubble and collapse. There’s a chapter at the end about economic progress and capitalism. There’s actually a lot of economics in the book, I think, in different ways. So it’s a very ambitious book that I hope will provide interest on every page.

LEVITT: You have built a life around analyzing data. And what I find so shocking in the modern world is how little training and exposure the typical person gets in a school setting to data-related things. Have you thought at all about the teaching of data science or data analysis and how we might do it to middle school kids or high school kids?

SILVER: First of all, I think there should be statistics and probability and kind of logical-thinking classes taught from a relatively early age, I would say. And then for some reason, with math education in the U.S., there’s still sometimes too much of an emphasis on the technical side of things, and not as much on, like, problem-solving, logical, quote unquote “rational thinking” skills. One thing I will do sometimes is judge student research paper competitions, where they’re trying to solve some sports problem or some election modeling problem, and almost invariably the students use way too many fancy techniques and aren’t spending enough time asking basic questions of the data, or thinking about confounding variables, or figuring out what a more robust strategy is for answering a question — all the things you were talking about before. I haven’t thought about what the curriculum would be, but a combination of statistics and, really, logical thinking would, I think, benefit the students of the United States.

My advice, if you want to learn how to analyze data yourself, is to find a question you care about, get your hands on some data, and try to figure out the answers. There is no substitute for that kind of real world experience. The second best thing you can do to learn about analyzing data, short of doing it yourself, is to read what Nate Silver has to say, either in his outstanding book The Signal and the Noise, or in his new Substack, entitled Silver Bulletin.

LEVITT: So now it’s the time in the show where we take a listener question and, as always, we bring on our producer, Morgan.

LEVEY: Hi Steve, a listener named Cletus wrote to the show. Cletus has been a high school educator for almost 20 years and says that getting kids to appreciate a subject is an important part of the learning process. In our episode with mathematician Stephen Strogatz, the two of you developed the idea of a math appreciation course for high school students, and that idea has gotten a lot of traction after that episode. Cletus likes the idea, but does have a concern. Students who might really appreciate math might not excel at the subject. For instance, Cletus has seen some students pursue careers in video game design because they love video games. But when faced with the more challenging and boring realities of design and development, they’ve dropped out or failed. So, what do you think about this?

LEVITT: So there is no guarantee that just because you like something, you can build a career around creating the thing that you’re excited about. If we were able to create a math appreciation course that led a whole bunch of kids to think they wanted to do STEM, to try it, and then decide they didn’t, I still would call it a huge success. I think that’s such a better outcome than those kids never dipping their toes in the water because they were just afraid of it. I’m a big believer in options, and I’m a big believer in quitting. So I think there’s not that much lost if people try something and it turns out not to be their thing.

LEVEY: So Cletus had another example which I thought was really spot on. A lot of people loved your insights into economics after Freakonomics came out, but that doesn’t mean they’d actually enjoy the process of wading through data to find these insights and writing academic papers. Does that ring true for you?

LEVITT: Oh, completely. After we wrote Freakonomics, I was a little nervous, really in the same spirit of what Cletus is talking about. Because I don’t think I’m exaggerating if I say that since the book was written, maybe a thousand young people have come up to me and they’ve said, “Oh, I read Freakonomics and that’s the reason I’m doing economics now.” Initially, I would apologize and say, “Oh, God, I bet you hate real economics because Freakonomics and real economics don’t have that much in common.” And interestingly, not one young person ever said to me, “Yeah, I’m actually really angry that you tricked me into doing economics.” Some of them said, “No, I love economics.” And others said, “Oh, yeah, God, economics is hard. I finally had to quit.” But not a single person blamed me for opening that door. After a while, I just stopped apologizing for it and I gave a different response, which was just, “I’m so glad that I was able to have that small impact on your life.” And then I would say, “How do you like real economics?” And sometimes they’d say they liked it, sometimes they wouldn’t. But it really reinforced for me the idea that it’s really not the job of an introduction to a field to tell you what life is like as a professional doing it. Really, the job is just to say, “Here are the amazing lessons that brilliant people over the last hundred years or last thousand years have come up with, and rejoice in it, feel the wonder of what’s been discovered.” We conflate how we teach people at the entry level with a real career in something, and they’re totally different. We talked about this with Carolyn Bertozzi and chemistry. The chemistry you learn in high school is chemistry from the 1800s. It’s got nothing to do with the daily life of a modern chemist. It’s just the nature of introducing people to a subject. You give them the greatest hits. Steve Strogatz’s point about math appreciation is this: if a kid decides that they can’t do math, that math is not for them, then they’ll never even try a path that could possibly involve math. And that’s what we’re trying to avoid.

LEVEY: Cletus, thank you so much for writing in. If you have a question or comment for us, our email is PIMA@Freakonomics.com. That’s P-I-M-A@Freakonomics.com. It’s an acronym for our show. We read every email that’s sent. We look forward to reading yours.

In two weeks we’re back with a brand new episode featuring Fei-Fei Li, a computer scientist whose pioneering work in computer vision played a critical role in the development of artificial intelligence. Now she spends much of her time trying to bring humanity to A.I., steering its development in ways that will benefit rather than harm society.

LI: Every tool is a double-edged sword. Just because I wish this to be benevolent doesn’t mean it will always remain benevolent.

As always, thanks for listening. And we’ll see you back soon.

*      *      *

People I (Mostly) Admire is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. This episode was produced by Julie Kanfer with help from Lyric Bowditch, and mixed by Jasmin Klinger. We had research assistance from Daniel Moritz-Rabson. Our theme music was composed by Luis Guerra. We can be reached at pima@freakonomics.com, that’s P-I-M-A@freakonomics.com. Thanks for listening.

SILVER: I’d rather kind of gouge my eyeballs out than have those conversations anymore. 


Sources

  • Nate Silver, founder of FiveThirtyEight and author of the Silver Bulletin.
