New Freakonomics Radio Podcast: The Folly of Prediction


Fact: Human beings love to predict the future.

Fact: Human beings are not very good at predicting the future.

Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.

But wouldn’t it be nice if it did?

That is the gist of our latest Freakonomics Radio podcast, called “The Folly of Prediction.” This is the fourth of five hour-long podcasts we’ve been releasing recently. Some of you may have heard them on public-radio stations around the country, but now all the hours are being fed into our podcast stream. (You can download/subscribe at iTunes, get the RSS feed, listen live via the media player above, or read the transcript here.)

We explore quite a few realms of prediction (most of them unsuccessful, some more so than others), and you’ll hear from quite a variety of people, probably more than in any other show. Among them:

+ Vlad Mixich, a reporter in Bucharest, who describes how the Romanian “witch” industry (fortune-tellers, really) has been under attack — including a proposal to fine and imprison witches if their predictions turn out to be false.

+ Steve Levitt (you’ve maybe heard of him?) explains why bad predictions abound:

LEVITT: So, most predictions we remember are ones which were fabulously, wildly unexpected and then came true. Now, the person who makes that prediction has a strong incentive to remind everyone that they made that crazy prediction which came true. If you look at all the people, the economists, who talked about the financial crisis ahead of time, those guys harp on it constantly. “I was right, I was right, I was right.” But if you’re wrong, there’s no person on the other side of the transaction who draws any real benefit from embarrassing you by bringing up the bad prediction over and over. So there’s nobody who has a strong incentive, usually, to go back and say, “Here’s the list of the 118 predictions that were false.” … And without any sort of market mechanism or incentive for keeping the prediction makers honest, there’s lots of incentive to go out and to make these wild predictions.

Phil Tetlock found that expert predictors aren’t very expert at all.

+ Philip Tetlock, a psychology professor at Penn and author of Expert Political Judgment (here’s some info on Tetlock’s latest forecasting project) provides a strong empirical argument for just how bad we are at predicting. He conducted a long-running experiment that asked nearly 300 political experts to make a variety of forecasts about dozens of countries around the world. After tracking the accuracy of about 80,000 predictions over the course of 20 years, Tetlock found …

TETLOCK: That experts thought they knew more than they knew. That there was a systematic gap between subjective probabilities that experts were assigning to possible futures and the objective likelihoods of those futures materializing … With respect to how they did relative to, say, a baseline group of Berkeley undergraduates making predictions, they did somewhat better than that. How did they do relative to a purely random guessing strategy? Well, they did a little bit better than that, but not as much as you might hope …
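Tetlock’s “systematic gap” is what forecasters call miscalibration: gather every prediction made with, say, 70 percent confidence, and fewer than 70 percent of them come true. For the quantitatively inclined, here’s a minimal sketch of that bookkeeping in Python (the forecast numbers are invented for illustration, not drawn from Tetlock’s 80,000 predictions):

```python
from collections import defaultdict

# Each forecast: (stated probability the event happens, did it happen?).
# These numbers are invented for illustration -- not Tetlock's data.
forecasts = [
    (0.9, True), (0.9, False), (0.9, False),
    (0.7, True), (0.7, False), (0.7, False),
    (0.5, True), (0.5, False),
]

# Group forecasts by stated confidence, then compare each group's
# confidence to the fraction of its events that actually occurred.
buckets = defaultdict(list)
for prob, happened in forecasts:
    buckets[prob].append(happened)

for prob in sorted(buckets):
    hit_rate = sum(buckets[prob]) / len(buckets[prob])
    gap = prob - hit_rate  # positive gap = overconfidence
    print(f"said {prob:.0%}, happened {hit_rate:.0%}, gap {gap:+.0%}")
```

A perfectly calibrated forecaster shows a zero gap in every bucket. In Tetlock’s data the gaps ran positive: experts “thought they knew more than they knew.”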

Christina Fang, whose research offers evidence that the people who correctly predict extreme outcomes are, on average, bad predictors.

+ Christina Fang, a professor of management at NYU’s Stern School of Business, also gives us a good empirical take on predictive failure. She wanted to know about the people who make bold economic predictions that carry price tags in the many millions or even billions of dollars. Along with co-author Jerker Denrell, Fang gathered data from the Wall Street Journal’s Survey of Economic Forecasts to measure the success of these influential financial experts. (Their resulting paper is called “Predicting the Next Big Thing: Success as a Signal of Poor Judgment.”) The takeaway: the big voices you hear making bold predictions are less trustworthy than average:

FANG: In the Wall Street Journal survey, if you look at the extreme outcomes, either extremely bad outcomes or extremely good outcomes, you see that those people who correctly predicted either extremely good or extremely bad outcomes, they’re likely to have overall lower level of accuracy. In other words, they’re doing poorer in general. … Our research suggests that for someone who has successfully predicted those events, we are going to predict that they are not likely to repeat their success very often. In other words, their overall capability is likely to be not as impressive as their apparent success seems to be.
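The Denrell–Fang result sounds paradoxical, but one mechanism behind it is simple: extreme outcomes are mostly driven by shocks nobody can see, so correctly calling one usually requires a forecast that strays far from the available evidence, and forecasters who stray are less accurate on average. Here is a toy simulation of that mechanism (my own stylized setup with invented parameters, not the model from their paper):

```python
import random
from statistics import mean

random.seed(0)

EXTREME = 2.5  # threshold for an "extreme" outcome
results = []   # (ever called an extreme correctly?, average error)

for _ in range(2000):
    # Every forecaster sees the same public signal but adds personal
    # noise; low noise = good judgment. The shock is unforeseeable.
    noise_sd = random.uniform(0.1, 2.0)
    errors, called_extreme = [], False
    for _ in range(20):
        signal = random.gauss(0, 1)
        shock = random.gauss(0, 1)
        outcome = signal + shock
        pred = signal + random.gauss(0, noise_sd)
        errors.append(abs(pred - outcome))
        # "Correctly predicted an extreme": forecast and outcome both
        # beyond the threshold, on the same side.
        if abs(pred) > EXTREME and abs(outcome) > EXTREME and pred * outcome > 0:
            called_extreme = True
    results.append((called_extreme, mean(errors)))

hits = [err for called, err in results if called]
rest = [err for called, err in results if not called]
print(f"avg error, once called an extreme right: {mean(hits):.2f}")
print(f"avg error, never did:                    {mean(rest):.2f}")
```

In this toy world, the forecasters who landed a spectacular call carry higher average error than the ones who never did, which is exactly the Fang quote above in miniature.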

+ Hayes Davenport, a Freakonomics researcher, takes a look at the predictive prowess of NFL pundits. (Short answer: not so good.)

How hard is it to accurately forecast something as simple as corn yield? (Photo by Tim Boyle/Getty Images)

+ Joe Prusacki directs the statistics division at the USDA’s National Agricultural Statistics Service, which means he helps make crop forecasts (read a primer here). He talks us through the process, and how bad forecasts inevitably produce some nasty e-mails:

PRUSACKI: Okay, the first one is: “Thanks a lot for collapsing the grain market today with your stupid” — and the word is three letters, begins with an “a” and then it has two dollar signs — “USDA report” … “As bad as the stench of dead bodies in Haiti must be, it can’t even compare to the foul stench of corruption emanating from our federal government in Washington, D.C.”

Nassim Taleb asks: Are you the butcher, or are you the turkey?

+ Our old friend Nassim Taleb (author of Fooled by Randomness and The Black Swan) shares a bit of his substantial wisdom as we ponder the fact that our need for prediction (and our disappointment when it fails) grows ever stronger as the world becomes more rational and routinized.


+ Tim Westergren, a co-founder of Pandora (whom you may remember from this podcast about customized education), talks through Pandora’s ability to predict what kind of music people want to hear based on what we already know we like:

WESTERGREN: I wouldn’t make the claim that Pandora can map your emotional persona. And I also don’t think frankly that Pandora can predict a hit because I think it is very hard, it’s a bit of a magic, that’s what makes music so fantastic. So, I think that we know our limitations, but within those limitations I think that we make it much, much more likely that you’re going to find that song that just really touches you.
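Under the hood, Pandora’s Music Genome Project has human analysts score each song on hundreds of musical attributes, and recommendation then amounts to finding unplayed songs that sit close to the ones you’ve liked in that attribute space. A cartoon version of the matching step, using cosine similarity as a stand-in for whatever Pandora actually computes (song names, attributes, and scores all invented):

```python
import math

# Each song scored on a few attributes (Pandora's Music Genome Project
# uses hundreds); all names and numbers here are invented for illustration.
songs = {
    "song_a": [0.9, 0.1, 0.7],  # say: distortion, acousticness, tempo
    "song_b": [0.8, 0.2, 0.6],
    "song_c": [0.1, 0.9, 0.3],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the same direction in attribute space."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# You liked song_a; recommend the closest song you haven't heard.
liked = songs["song_a"]
unheard = {name: vec for name, vec in songs.items() if name != "song_a"}
print(max(unheard, key=lambda name: cosine(liked, unheard[name])))  # song_b
```

Note how this matches Westergren’s modesty: nearest-neighbor matching can find another song that touches you, but nothing in it predicts whether a song becomes a hit.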

Robin Hanson, an economist at George Mason University, argues that prediction markets are the way to go.


+ Robin Hanson, an economist at George Mason University and an avowed advocate of prediction markets, argues that such markets address the pesky incentive problems of the old-time prediction industry:


HANSON: So a prediction market gives people an incentive, a clear personal incentive, to be right and not wrong. Equally important, it gives people an incentive to shut up when they don’t know, which is often a problem with many of our other institutions. So if you as a reporter call up almost any academic and ask them vaguely related questions, they’ll typically try to answer them, just because they want to be heard. But in a prediction market most people don’t speak up. So in most of these prediction markets what we want is the few people who know the best to speak up and everybody else to shut up.
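The workhorse mechanism behind such markets is one Hanson himself devised: the logarithmic market scoring rule (LMSR), an automated market maker whose quoted price doubles as a probability estimate and moves against every trader who speaks up. A bare-bones sketch with toy liquidity and trade numbers, just to make the incentive concrete:

```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule for a yes/no question.

    Traders buy 'yes' or 'no' shares; each share pays $1 if that side
    turns out right. The price doubles as the market's probability.
    """

    def __init__(self, liquidity=100.0):
        self.b = liquidity  # larger b = prices move more slowly
        self.q_yes = 0.0    # 'yes' shares sold so far
        self.q_no = 0.0

    def _cost(self, q_yes, q_no):
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        """Current implied probability that the event happens."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Returns what the trader pays; money lost if 'no' wins."""
        before = self._cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self._cost(self.q_yes, self.q_no) - before

market = LMSRMarket(liquidity=100.0)
print(f"opening price: {market.price_yes():.2f}")  # 0.50, no information yet
paid = market.buy_yes(60)                          # a confident trader speaks up
print(f"trader paid ${paid:.2f} for 60 shares")
print(f"new price:     {market.price_yes():.2f}")  # moved toward 'yes'
```

Notice the flip side of Hanson’s point: a trader with no information has no reason to trade at all, since moving the price without knowledge is just expected loss. Shutting up is free; speaking up is a bet.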

I hope you enjoy the hour. It was a most interesting exploration from our end. Thanks to the many, many folks who lent a hand and to our roster of truly excellent guests. See you on the radio.


Aidan

Dan Gardner, a journalist up here in Canada, has written an amazing book exploring the troubles with predictions, coming to many of the same conclusions as the ones in this podcast. I'd highly recommend his book as an easy-to-read overview of why people suck at predicting things.

http://www.amazon.ca/Future-Babble-Expert-Predictions-Believe/dp/0771035195

Truly a great read that will have you doubting anything anyone ever writes, says, or declares on Twitter about the future.

Alex

Funny listening to this episode today and then seeing this tonight: http://abcnews.go.com/US/north-carolina-bans-latest-science-rising-sea-level/story?id=16913782

Quite the session on real and false supply, demand and incentives!

Great episode, I hadn't heard it when originally run.

Julia

Missed this entertaining podcast the first time around. Just heard the rebroadcast and thought of this recent clip on the Daily Show where John Oliver calls out Chris Matthews and his terrible political predictions. Too bad there's not more of this type of reporting!! Ha!

http://www.thedailyshow.com/watch/mon-august-12-2013/can-t-you-at-least-wait-until-jon-stewart-gets-back---2016-presidential-election-coverage

Ian Seymour

Did anyone predict that the prediction market websites would be shut down?
Seems an odd episode to rebroadcast when Intrade is being investigated for gambling.