Episode Transcript

JENA: So, let me just ask you: do you think that hospitalizations for hyperglycemia go up among children with diabetes after Halloween? 

LEVITT: Yes.

OSTER: Yes. 

JENA: No, they don’t! Because if they did, I would have written a paper about it. 

That’s me talking to two economists whom I “mostly” admire: Steve Levitt, University of Chicago professor and co-author of Freakonomics, and Emily Oster, Brown University professor and author of parenting books like The Family Firm. 

You’ve probably read or heard about lots of their studies, but I asked them to sit down with me because I wanted to know about the studies you haven’t heard about. 

You see, the nature of research is to come up with ideas, to ask questions … and if you’re one of the lucky ones, you get to ask really creative, important questions — and then design studies to answer them. But coming up with a good idea, designing a good study, and coming up with a good answer is a lot like playing the lottery — sometimes you strike big, as Steve and Emily have done a lot of times, but most of the time you lose. And just like in the lottery, the news doesn’t feature the losers.

In fact, most of our ideas end up in what we call the proverbial “file drawer,” a term coined in the 1970s by the psychologist Robert Rosenthal.

Maybe the idea — the question you wanted to explore — just wasn’t that good to begin with. Maybe the idea was a good one, but there was just no good way or good data to study it. Maybe another, better question pulled you in a different direction. Or maybe there was an answer to your good question — and that answer was simply: no. There was no impact; there was no change in outcome; there was no significant difference. 

Now, I’m a big fan of good ideas but I’m not such a big fan of the file drawer, because it’s basically where some of my favorite ideas go to die! Ideas that, you know, even if they didn’t work out, they might still make you say, “Hmmm, that’s kind of interesting.” Which has led me to wonder: is there something we or I should be doing with all those scrapped ideas?

From the Freakonomics Radio Network, this is Freakonomics, M.D.

*      *      *

I’m Bapu Jena. I’m a medical doctor and an economist. And this is a show where I dissect fascinating questions at the sweet spot between health and economics. At the heart of each of those questions was a spark, an idea that set researchers on a path of discovery. 

Today, Emily Oster, Steve Levitt, and I talk about that path and how we approach the often-daunting task of asking questions for a living — and if there’s something that we should be doing with those questions that are lying around in the back of our file drawers.

LEVITT: Somebody approached me, and they asked me if I could study texting and driving and what the effect was. 

Yup, believe it or not, that’s in a file drawer! 

Before we get to that though, I wanted to know a little about how Steve and Emily became economists. As for me, I didn’t actually plan to become an economist. Almost 20 years ago, when I was interviewing for a dual-degree program at the University of Chicago — I was trying to get my medical degree and a Ph.D. in biology — the director of the program asked if I wanted to try for a Ph.D. in economics instead. I happened to have studied econ in college, so he thought I’d be a good fit for that program. I was impressionable, maybe a little crazy, and before I knew it, I was taking graduate courses in economics while going to medical school at the same time. So, for me, becoming an economist was all about chance, which is a feature of life that has actually influenced a lot of my work. Chances are if you’re listening to this though, you know who Steve Levitt is — he’s a world-class economist — but how did he become one?

LEVITT: So, I didn’t give any thought to anything. I was at Harvard and in my freshman year, I had a rule that I just took the biggest, easiest courses on campus. And I figured I believed in markets even before I was an economist. The introductory economics course was the biggest course on campus. So, I had to take it. And I remember I had this interesting moment, maybe a month into the course, where they were teaching the concept of comparative advantage. It’s something that was so intrinsic to me that I thought, “How can they teach this?” As I walked out of the class, my best friend, who was also in the same class, said, “Oh, my God, that was the most complicated thing I’ve ever seen. I don’t think I could ever learn that.” And I thought, “Whoa, wait a second, maybe I should do economics? I’m good at it.” And that was literally the reason I did economics.

JENA: Emily, how about you? Tell us about why you got into economics.

OSTER: Yeah. So, my parents are economists, both of them. And so, I knew what comparative advantage was at 5. But I actually was definitely not planning to be an economist. When I entered college, I was going to do, like, bench science. And the summer after my freshman year, I had two jobs. One of them was working in a fruit fly lab, where I was responsible for dissecting the fruit fly larvae brains. And I don’t remember what I did with them after that, but that’s what I did. And then the second job was, I was a research assistant for Chris Avery, who’s an economist at the Kennedy School working on some projects about education. And at the end of the summer, I decided to be an economist. How much of that was the love of the stuff I did with Chris and how much of it was the tremendous dislike of the fruit flies? We will never know. But that was the sort of ultimate dividing line.

Steve teaches at the University of Chicago, and Emily at Brown University. They’ve both tackled some of the most creative questions that I have seen. For example: Steve has studied the link between legalized abortion and crime; he’s looked at corruption among sumo wrestlers. Emily has studied the impact of cable T.V. on the status of women in India, and she’s helped us understand — by looking at people with Huntington’s disease — how expectations about our future health affect the educational and health investments we make in ourselves today. Emily has also written a lot about the economics of parenting. And during the pandemic, she became the de facto face of school reopenings, building a national database and finding that the risk of Covid-19 infection among students was low.

So, these are two incredibly creative people. And I wanted to know how they come up with ideas. Was this something they were born with? Or did they learn it?

OSTER: I think that there’s an inherent piece and a taught piece. When you are doing economics or probably any field, the hardest thing in graduate school is the part where you go from, you know, doing problem sets that someone else has set for you to having to look at a blank piece of paper and come up with an idea. Some people are better at that than others. But there are things that you can teach people, like: think about what you’re interested in in the world. Think about when you look out at the world, what’s puzzling? What’s exciting? What do you want to understand? And the students I have, who I think are going to be the most successful, are those who are always asking those questions and not asking the question of, you know, “I’m going to read 17 issues of the American Economic Review and figure out what’s the kind of niche thing that I could do that would be a small innovation on someone else’s thing.” Those are good papers. We all write some of those papers. But I think that the most successful people are those who say, “Hey, what do I want to know about? And how can I be the one to find that out?”

JENA: So, Emily, let me follow up on that. Do you do anything yourself proactive to come up with ideas? Or do you just kind of live your life, and then, you know, you’re walking around and you’re seeing something that, you know, your kid did and you just come up with an idea?

OSTER: I think at this point, most projects kind of come out of some other project. So, it’s sort of that I’m doing something and then I wonder about something and then it leads to something else. I don’t ever do the thing where I sit down and I’m like, “Okay, now I’m going to come up with that idea.” Because I’ve tried that and it’s really — it’s deeply unsuccessful. I think, actually, Steve was the one who explained to me that it would be deeply unsuccessful. But I still sometimes end up trying to do it, and I’ve found he is correct.

JENA: Steve, before I ask you, let me just tell you: I’m guessing I didn’t follow your advice. Because you know what I do, probably three times a week for about half an hour to an hour, usually when I’m driving to soccer? I’ll just jump on the phone with my research assistants and whoever’s working with me, and we have these idea sessions. I’ll just start. I’ll jump on the phone and say, “All right, who’s got an idea?” And invariably, most of the ideas that people suggest, and that I suggest, are not going to be great. But once in a while there’s a nugget there. But I also think that there’s a causal effect of just that sort of discipline, in that it helps me see things around me in ways that other people might not. And I don’t think it’s something that’s inherent in me, because if we can teach doctors how to transplant hearts from one human body to another, it’s hard to imagine that we can’t teach them how to think creatively. As a practical matter, I bet that you probably read The New York Times and The Wall Street Journal and The Washington Post. And I’ll read some of those things too, but I also read, like, People and Yahoo News. Not to dismiss the newsworthiness or credibility of Yahoo News, but I read those kinds of things because I feel like they help me get ideas as well. What are your thoughts on all this?

LEVITT: So, I agree with what both of you said a hundred percent. I would add a couple of small things. One is, Bapu, we don’t try to teach people to be creative in economics. There’s zero time devoted to it. Of course, you can teach people to be creative. When I was a young grad student, I actually spent a lot of time studying creativity and the advertising industry. In particular, there’s a book called A Whack on the Side of the Head. It’s just a book on creativity, a fantastic book about ideas. So, I think we can teach it. The other thing I think we don’t stress enough to grad students, or to anyone interested in creativity, is that you have to invest time, and that most ideas are bad. Most of the grad students I know might spend an hour a month or less on ideas. They’re always working on something. They’re never looking for ideas. But I spent probably an hour a day on ideas, and it wasn’t — like Emily says, it wasn’t sitting on the couch, because sitting on the couch didn’t turn out to be my best way of generating ideas. But with intentionality, like you’re talking about, Bapu, I would seek out a conversation with somebody who knew something about fruit flies, because, who knows, maybe fruit flies would lead someplace.

OSTER: They don’t lead, I assure you.

LEVITT: Or I would just go into the stacks in the library and I would go to some section I would never want to go to usually. Like, you know, the Middle Ages, and I would just start opening books and looking for stuff. And I never knew where it would lead. And you don’t have to have very many good ideas. If I had three or four good ideas a year off of a 300- or 400-hour investment? I’d call that a great year. 

But even if you have a good idea, getting that idea out into the world — which, in academia, often means getting it published in a good journal — doesn’t always happen. And I get it: the ideas may have been bad. Or they just didn’t work out.

But what if the seed of an idea was good, but maybe just executed poorly? What if another scientist has a better way of looking at the same thing — wouldn’t it be good if they were aware of these “failed” attempts? 

Some questions are worth answering no matter what the answer is. But for many questions, scientific journals are focused on publishing so-called “positive findings.” It works like this: when you write a research paper, you typically have to provide a hypothesis — what do you think is going to happen when you run an experiment or analyze some data? If you confirm your hypothesis — that’s great! That’s a positive finding. And research shows you’re more likely to get it published. If you don’t? Well, that means you had what we call a null finding. You didn’t find anything. And journals aren’t usually as interested in publishing null results. You didn’t prove anything, after all! So, why dedicate limited page space and resources to a study that didn’t find anything?

You can see how this creates a problem. Not only may people waste a lot of time looking at ideas that others have already looked at, but if journals favor positive findings, the scientific record becomes increasingly filled with them. This problem has been known about for a long time. In the late 1950s, for example, a researcher named Ted Sterling found that 97 percent of papers in four major psychology journals reported positive results. Recent analyses have shown that this publication bias, as Sterling called it, still exists and could be worsening.

So, knowing that for some types of ideas, a positive finding is important, how do we think about what makes for a good research idea? When it comes to good ideas, there are two buckets in my mind. The first bucket is questions that are so interesting and important that no matter what their answer is, positive or not, the answer is going to be important. For instance: do minimum-wage laws have an effect on employment? No matter what the answer is, people are probably going to want to hear about it.

The second bucket is for questions that really are only interesting if there is a positive relationship. Like, do gun injuries fall when the National Rifle Association holds its annual meeting? (It turns out, they do). But people are probably only going to want to know the answer to that question if the answer is an emphatic yes. Mostly out of preference, though, a lot of my research falls into that second category, and those questions tend to be kind of high risk. If I don’t find any relationship, there’s just no story there. Now, I have my fair share of research questions and projects that have failed for one reason or another. But it turns out that so do Emily and Steve.

LEVITT: I probably fail in 19 out of 20 projects. So, there’s an endless supply. One that really comes to mind, though, is a particular project that actually relates to Emily, because her husband, Jesse Shapiro, and Matt Gentzkow had done work on early T.V.

Jesse Shapiro is an economist at Brown University, and Matt Gentzkow is an economist at Stanford.

LEVITT: And I had done a lot of work looking at crime and why crime had fallen in the 1990s. And I had spent a lot of time trying to figure out why crime had gone up in the 1960s, but I could never figure it out. And I had this crazy idea: maybe the advent of T.V. led to increased crime, for a variety of reasons you might imagine. And interestingly, we did a fast first cut.

By “fast first cut,” Steve isn’t making a surgical reference, thank goodness. He means an initial analysis of some data.

LEVITT: It was with Matt Gentzkow. And we looked at it for a day and the results were unbelievable. I mean, they were fantastic. We had the first figure of what was going to be an amazing paper, explaining a puzzle no one could ever explain. And then it didn’t work. And every month Matt would say, “We should really stop working on this. This is stupid. It’s not working.” And I’d say, “No, no, no. I believe it. I feel this. It’s going to work.” And we probably worked on it for 18 months, at least 16 months too long. And it was a great example of how, partly because that fast first cut worked and partly because I really believed in my heart that this was a true story, I wasted almost an infinite amount of time relative to any other failed project I’ve ever done.

JENA: So that was one. I’ll just give you two of mine. You may remember, a few years ago, there was this data breach from something called Ashley Madison, a site for married people to find affairs. And I thought, “Okay, well, wait. If it’s really the case that a lot of men were using Ashley Madison, that data breach should have been extraordinarily stressful.” So, we looked at insurance claims data to see whether things like heart attacks, anxiety medications, or whatever went up in the days after the Ashley Madison breach occurred. And then we subsetted it to people who were taking erectile dysfunction drugs. We tried all sorts of little things to figure out if there was something there, and there wasn’t. So, you know, in case you were wondering, like, how does this fit with mRNA technology? It’s a similar level of importance. And the other thing came from something I used to see in the hospital a lot. When we’d be taking care of these patients who were older, it’d often be the case that daughters were coming into the hospital to chat with the doctors and other healthcare providers about their parent, as opposed to the sons. And it made me wonder: do people with daughters live longer than people with sons? And it’s hard to study unless you have, like, Swedish data where you can link these families over time. But we did a rough cut with some data that’s available in the U.S. and didn’t find any evidence at all. So, whether you have a son or daughter may affect your quality of life and how you die, but it doesn’t affect your life expectancy. Those are two examples from my archives. Emily, go, and then I’m going to go back to you, Steve, for the second one.

OSTER: There are a lot of projects that have died. But the thing that comes to mind is, at some point in the last several years, I got, like, very interested in scientific publishing and, like, the process of scientific publishing: what people want to publish, and how that relates to this sort of prevailing narrative. So, if people think something is true, are you more likely to see papers published reinforcing that? It relates to some work that was successful in sort of a related space.

JENA: What were you trying to figure out? Publication bias or something?

OSTER: Yeah. Publication bias. But not the kind of publication bias in favor of significant findings; publication bias in favor of some prevailing narrative. Like, if we all think something matters, do we publish papers saying it matters? And if it turns out there’s a big study that says, oh, actually that was wrong, then do we publish a bunch of papers saying that it’s wrong? You know, is it hard to publish things that other people disagree with? And so I just kept having my R.A.s do different things involving scraping information about publications. And we would try something and it wouldn’t work. And I’d be like, “Oh, try this other thing.” And this went on for like a year before finally those R.A.s graduated and I was like, you know what? I’m not going to put other people on this. I just have to let this go.

JENA: Let me just piggyback off that. So, I did this thing where we looked at people who worked in a Veterans Affairs hospital. And often V.A. researchers write about health care issues and quality-of-care issues in the V.A. system. And I thought, “Hm, I bet that V.A. researchers, when they write studies about the V.A., are going to be finding things that are positive.” Like, “The V.A. does this well, the V.A. does that well, etc.” And we didn’t find that. We actually wrote that up, because I thought this question of ideological bias would be interesting no matter what you found. And we got it published. But it’s got me thinking about, like, Covid-19. Because if you think about people’s preferences — let’s say about masks — could it be the case that if you looked at studies about the effects of mask mandates or various non-pharmaceutical interventions, and you classified them as positive, meaning there’s an effect on slowing spread, versus negative, meaning there’s no effect, and then you looked at the authors of those studies and at whether or not they were wearing masks on social media before their study was published, would that be a way to identify a bias that people might have as they’re going into research? Because like you said, and Steve said, you have an idea about what’s the right answer and you’re pushing the data and pushing the analysis to try to get there. And you may not get there. Steve, you’ve got another example?

LEVITT: I do have one. So, somebody approached me and they asked me if I could study texting and driving and what the effect was. And I had never thought about the problem, and I didn’t have any great ideas. So almost on a lark, I called up a friend I had at a big insurer and I said, “Hey, if I could link the data you have on driving to data on texting, would you be interested?” And he said, sure. And then I called up a friend I had at a really big telecom company, and I said, “Hey, if I could link your data to this insurance data, would you be happy to do it?” And amazingly, they said yes. And we developed this partnership where we were the trusted third party for these data that they wouldn’t give to the other company, but they would give to us. And we went to a lot of trouble to put it together, and it wasn’t perfect, but it was pretty good. Now, part of the problem was the insurance company didn’t actually know when people crashed. They just knew when they slammed on the brakes. But it turned out that when people were texting, they just slowed way down. They hardly ever slammed on their brakes, because I think they were driving so slowly and were so far away from other cars that, while they probably weren’t driving very well, they weren’t doing anything discernibly dangerous. And in the end, we found that there was no impact of texting on any bad driving outcomes. And it could have been a good paper, but I think nobody would have believed it. And the companies we were working with weren’t very excited about it. And then my R.A. went off to grad school, and the project never officially died; it just didn’t happen. It was one of those cases of the bias that arises in publication: we had a perfectly good result, we had a draft of a paper, but everybody just ran out of steam, because we knew it’d be such a war to try and get it published. It would go so much against people’s priors, but not in a way that would make people excited, just in a way that would make them angry and frustrated. So, that’s a very different example of a paper failing.

JENA: And you mean get angry and excited and text about it to their friends while they’re driving. 

LEVITT: At least they won’t hit anybody while they’re doing it because they’d be slowed down so much —

JENA: Exactly. Yeah.

OSTER: Going too slow.

Coming up next: when is it time to bail on a project? And: while it’s been fun hearing about all of these ideas — where they came from, what went wrong — we’ll try to figure out: is there something we should be doing with them? My conversation with economists Emily Oster and Steve Levitt continues, right after this.

*      *      *

So far, the economists Steve Levitt and Emily Oster and I have been sharing some of our research questions that failed to really see the light of day — whether it was because the data didn’t work out, or the research assistants helping us left for bigger and better things. Or we just knew a particular paper would never get accepted by an academic journal. We all pretty much have too many of these stories to count. But not all failures amount to months and months of wasted time. Here’s Emily, again.

OSTER: There are a lot of times where I spent a day or two in some dataset just trying to figure it out. So, you know, there was a period in which we had this, like, really interesting — I think I probably still have it on my computer — like, some really interesting data from India where they were doing tests like, did you know letters and could you read? And doing it for kids who were in school or not in school. It’s a data set that I used in various ways, but I remember spending a lot of days just, like, in that data sort of poking around and trying to figure out, okay, can I say something about, like, the relationship between knowing a little bit of math and knowing a little bit of reading and how does it evolve differently for different kinds of kids? And it just kind of never went anywhere. It’s not a failed project because there wasn’t really a project, but it was those moments of, I kind of have an idea or an inkling of something. I’m going to spend a little time looking at it. And then I really am just going to let it go at the end of a couple of days. And say, “You know what? There kind of wasn’t anything there and it’s just going to die.”

JENA: Yeah, Steve, how much time would you spend on an idea before you, let’s say, “gave up”? What’s your cutoff? When’s your drop-dead period?

LEVITT: I’m a huge believer in what I call a fast first cut. And I think there’s almost always a way to get some idea within a day about whether you have a good chance of succeeding or failing. I think a month is way too long, unless you’ve got a good reason for saying there’s no way to figure it out in a day.

OSTER: When I was in grad school, David Laibson told us many projects are worth two weeks. Few projects are worth six months. Maybe I’m a little slower than Steve, but that really stuck with me. And I tell my students that a lot, but I think it’s hard. You know, people get, like, into it. They want to clean the data. It’s hard for people to accept. You’ve just got to, like, do something fast and see if it’s worth going forward.

JENA: Emily and Steve, I’m sure there’s plenty of people who’ve got really interesting ideas that the world won’t hear about. What do you think we should do about them? Is there a place where this stuff should go? Is it something that you think academics should value? Because I’d be shocked to think of any academic department that would value faculty writing up a bunch of interesting “null findings.” 

OSTER: There’s this pre-registration idea in clinical trials or in any kind of randomized experiment, right, where you sort of don’t want people to “file drawer” findings that are in the wrong direction or whatever it is. And so there’s sort of that push, but what you’re talking about is something different, which is like, “I had this idea. It was kind of insane. It turned out to be wrong. Shouldn’t people know about that?” And what I’d push back with is, like, “Do they need to know about that?” I don’t know.

LEVITT: So why are we worried about zeros not being published? One reason is that it gives people bad incentives to go and try to manufacture positive results where they aren’t really there. And that’s obviously a terrible incentive. People often say, “Yeah, it’s good to know the zeros, because then other people don’t have to go and follow that same path.” But I actually think there’s nothing less valuable than researcher time. And people going and redoing it — it’s not the worst thing in the world, because, you know, a lot of problems where you get zeros are actually maybe because you made a mistake. I mean, an easy way to get a zero is you messed up the data, and then everything kind of leads back to zero. So, I wouldn’t want to discourage people from retreading the same ground. And I think when you get zeros, it’s often because you’re not thinking straight, and it encourages you to think better and to find something that’s not a zero in the same area. So, look, I’m less worried about the fact that zeros don’t get published. We know that everyone throws away their zeros. And so, you have to look with a lot of skepticism at what does get published. So, as long as everyone knows that it’s a highly selected set of stuff that gets published, then I think the cost of throwing away these worthless zeros — I’m with Emily, it’s not that high.

JENA: On the skepticism point: I don’t feel like academia rewards replication, right? I mean, there are tons of studies that come out where we’re like, “Extraordinarily controversial, could be very important, would be great to have it replicated.” But we don’t typically see replications. And the one instance in my own history where I tried to replicate a controversial finding, which is this finding that patients do better when cardiologists are away at their meetings: I replicated it in a different clinical scenario, with a different set of meetings. And I thought, you know, people would want to know that that strange finding actually held up. I had, like, the hardest time getting it published. And the journals would just say, “Yeah, you know, you already published this. It’s not that interesting to show it again.” And I was like, “Well, that’s not the point. I’m not trying to show it again, per se; I’m trying to replicate it.” But I feel like the incentives for that kind of activity are very limited.

OSTER: I think that’s why a lot of — certainly psychology, social psychology, which had, like, a little bit more of a crisis of faith around this, has, I think, pushed towards the idea that replication is something that happens almost outside of the standard journal publication process.

LEVITT: What is this outside mechanism? 

OSTER: So, there’s this group and they have people replicate studies. They provide a platform for people to surface replications and they actually finance some of them.  

LEVITT: Gotcha. 

JENA: Uh, just remind me, how do I not get on the hit list? I — I need to figure that out.

OSTER: Yeah.

LEVITT: You should want to be on the hit list, Bapu.

OSTER: Yeah, it sounds like — sounds like your stuff is all replicable. It’s all perfect.

JENA: Steve, this question is going to go to you, because you have the longest arc in the field. That’s another way to say that you’re older than Emily and me. One thing I feel like I’ve seen is that economics has become, in some respects, more creative in how it applies data than it was, let’s say, 30 years ago. And I want to reflect on an analogous thing that we’ve seen in healthcare, which is that there has been this big push towards quality measurement, which is to say, you know, measuring the quality of doctors or measuring the quality of hospitals. And that was all enabled by, essentially, improvements in computing and data. Like, all this information was available, so it allowed this field to develop in a way that it otherwise wouldn’t have been able to. And my question for you is: do you think that the availability of data has had an impact on the types of questions that people are asking in economics, particularly around how creative they are?

LEVITT: So, let’s just put this in economic terms. The cost of doing empirical research has plunged. I remember when I was first showing up at graduate school, Josh Angrist, who just won the Nobel Prize, was complaining because it used to be that if you wanted to use the Census for anything, it took one R.A. roughly a summer to spin the data off of actual data tapes to get a dataset together. And Josh was lamenting that, you know, with this new technology, you could do it in a day. And that was destroying his advantage, because he had all sorts of R.A.s who could do this, and now anyone could do research on it. And he was saying that somewhat tongue in cheek, but it’s certainly true that it’s become much, much easier. And so, it used to be everybody used government data, because that was really the only data they had available. But now there’s enormous amounts of corporate data, there’s data that you can scrape, and someone had the good idea to start using satellite data. So, there’s all sorts of technology. Now, whether that’s increased creativity, I don’t — I haven’t thought about that. I mean, certainly we have more options. My basic take on economics is not that it’s overwhelmed with creativity. I would actually say my feeling about economics in the recent past is that it’s really embraced technicality and sophisticated mathematics over creativity, relative to what I remember when I was younger. What do you think, Emily?

OSTER: I think now people much more frequently start with the data. Like, “I found this interesting dataset,” and they’re thinking about that as the starting point and then trying to build a question on top of it. And I’m not always sure that that is a way into the most creative ideas. And there’s a balance, because if you just come up with some question and there’s no way to answer it with data, well, that’s just impossible. But I do think with this sort of availability of data, it’s like, you can do everything. It’s right there. And yet, like, well, what are you going to do with it?

LEVITT: Yeah, Emily, I’ve never succeeded — although I’ve tried many times — in starting with a dataset and creating an interesting project without an idea. And the other thing that happens to me a lot is people will reach out to me and they’ll say, “I have a dataset, an amazing dataset.” And I’ll sit there for an hour and I won’t have the slightest idea what to do with it. So, I have learned what, for me, is the right question to ask when I’m faced with a dataset. And the question I ask myself is: what is unique about this dataset, this company, this situation that I couldn’t do with any other dataset? And if I can’t come up with something that answers that question, then I know I don’t have anywhere to go with that data.

JENA: Can we just agree that we’ll do this once a year, so that you can hear any ideas I had that didn’t work out in the last 12 months?

OSTER: I will be back here in 12 months.

LEVITT: You’re so productive, Bapu, we should do it every six months.

OSTER: I agree.

JENA: You say six minutes or six months? 

LEVITT: No, months. Every six months. You have enough bad ideas to fill a show every six months.

JENA: Yeah, that’s right. Exactly. All right. All right. Thank you both so much.

LEVITT: Thank you everybody.

OSTER: Thanks everybody.

That’s it for today’s show. Many thanks to Emily Oster and Steve Levitt for taking the time to share their “failed” research questions with me. I still found them super interesting and thought-provoking.

By the way — there have been some attempts to combat this publication bias problem. A bunch of journals, like the aptly named Journal of Negative Results, exist solely to try to solve it. The American Journal of Gastroenterology dedicated its November 2016 and May 2020 issues to null findings. And in 2019, the Berlin Institute of Health launched a program that offers researchers €1,000 to publish replication studies or null results. I wish I’d known about that. I’d be a rich man!

And I’m ready to do my part! I’d love to hear what you thought about this episode, and if you have had any ideas or research questions that never really made it off the ground, send me an email at bapu@freakonomics.com. That’s B-A-P-U at freakonomics dot com. Thanks for listening.

*      *      *

Freakonomics, M.D. is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, No Stupid Questions, and People I (Mostly) Admire. This show is produced by Stitcher and Renbud Radio. You can find us on Twitter and Instagram at @drbapupod. Original music composed by Luis Guerra. This episode was produced by Mary Diduch and mixed by Eleanor Osborne. We had help from Tracey Samuelson and Tricia Bobeda. Our staff also includes Alison Craiglow, Greg Rippin, Emma Tyrrell, Jasmin Klinger, Lyric Bowditch, Jacob Clemente, and Stephen Dubner. If you like this show, or any other show in the Freakonomics Radio Network, please recommend it to your family and friends. That’s the best way to support the podcasts you love. As always, thanks for listening. 

LEVITT: I think somebody did have a Pokémon Go paper, Bapu. 

JENA: Oh yeah, there was a Pokémon Go paper. Yeah, yeah.

LEVITT: My wife banned me from Pokémon Go when she caught me playing it while I was driving. 

JENA: My wife banned me from Ashley Madi — no, I’m joking. That’s a joke!

Sources

  • Emily Oster, professor of economics at Brown University.
  • Steve Levitt, professor of economics at the University of Chicago.
