The Folly of Prediction

Season 1, Episode 4

Fact: Human beings love to predict the future.

Fact: Human beings are not very good at predicting the future.

Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.

But wouldn’t it be nice if it did?

That is the gist of our latest hour-long special of Freakonomics Radio, called “The Folly of Prediction.” (You can listen or download via the player above, or read a transcript here. This program and four more hours are being broadcast on public-radio stations across the country (see this map to find your nearest station), and they’ll all wind up in our podcast stream in short order. Subscribe to the Freakonomics Radio podcast on iTunes or via RSS.)

We explore quite a few realms of prediction — most of them unsuccessful, some more so than others — and you’ll hear from quite a variety of people, probably more than in any other show. Among them:

+ Vlad Mixich, a reporter in Bucharest, who describes how the Romanian “witch” industry (fortune-tellers, really) has been under attack — including a proposal to fine and imprison witches if their predictions turn out to be false.

+ Steve Levitt (you’ve maybe heard of him?) explains why bad predictions abound:

LEVITT: So, most predictions we remember are ones which were fabulously, wildly unexpected and then came true. Now, the person who makes that prediction has a strong incentive to remind everyone that they made that crazy prediction which came true. If you look at all the people, the economists, who talked about the financial crisis ahead of time, those guys harp on it constantly. “I was right, I was right, I was right.” But if you’re wrong, there’s no person on the other side of the transaction who draws any real benefit from embarrassing you by bringing up the bad prediction over and over. So there’s nobody who has a strong incentive, usually, to go back and say, Here’s the list of the 118 predictions that were false. … And without any sort of market mechanism or incentive for keeping the prediction makers honest, there’s lots of incentive to go out and to make these wild predictions.

Phil Tetlock found that expert predictors aren't very expert at all.

+ Philip Tetlock, a psychology professor at Penn and author of Expert Political Judgment (here’s some info on Tetlock’s latest forecasting project), provides a strong empirical argument for just how bad we are at predicting. He conducted a long-running experiment that asked nearly 300 political experts to make a variety of forecasts about dozens of countries around the world. After tracking the accuracy of about 80,000 predictions over the course of 20 years, Tetlock found …

TETLOCK: That experts thought they knew more than they knew. That there was a systematic gap between subjective probabilities that experts were assigning to possible futures and the objective likelihoods of those futures materializing … With respect to how they did relative to, say, a baseline group of Berkeley undergraduates making predictions, they did somewhat better than that. How did they do relative to purely random guessing strategy? Well, they did a little bit better than that, but not as much as you might hope …
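If you want to see what Tetlock’s “systematic gap” looks like in code, here is a toy calibration check in Python. The data is made up (the overconfidence pattern is assumed for illustration, not taken from his study), but the idea is his: when experts say “70 percent,” the event should happen about 70 percent of the time.

```python
import numpy as np

# Illustrative only: simulated forecasts, not Tetlock's data.
rng = np.random.default_rng(1)
stated = rng.uniform(0.05, 0.95, size=5000)   # experts' subjective probabilities
# Assumed overconfidence: true probabilities sit closer to 50-50 than experts claim.
true_p = 0.5 + 0.6 * (stated - 0.5)
happened = rng.random(5000) < true_p          # did each event occur?

# Brier score: mean squared gap between stated probability and the 0/1 outcome.
print(f"Brier score: {np.mean((stated - happened) ** 2):.3f} "
      "(0 = perfect; 0.25 = always saying 50-50)")

# Calibration: within each band of stated probability, how often did events occur?
for lo in np.arange(0.0, 1.0, 0.2):
    mask = (stated >= lo) & (stated < lo + 0.2)
    if mask.any():
        print(f"said {lo:.1f}-{lo + 0.2:.1f}: happened {happened[mask].mean():.2f} of the time")
```

Run it and the high-confidence bands come in under what the experts said, which is exactly the gap Tetlock is describing.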

Christina Fang, whose research offers evidence that the people who correctly predict extreme outcomes are, on average, bad predictors.

+ Christina Fang, a professor of management at NYU’s Stern School of Business, also gives us a good empirical take on predictive failure. She wanted to know about the people who make bold economic predictions that carry price tags in the many millions or even billions of dollars. Along with co-author Jerker Denrell, Fang gathered data from the Wall Street Journal’s Survey of Economic Forecasts to measure the success of these influential financial experts. (Their resulting paper is called “Predicting the Next Big Thing: Success as a Signal of Poor Judgment.”) The takeaway: the big voices you hear making bold predictions are less trustworthy than average:

FANG: In the Wall Street Journal survey, if you look at the extreme outcomes, either extremely bad outcomes or extremely good outcomes, you see that those people who correctly predicted either extremely good or extremely bad outcomes, they’re likely to have overall lower level of accuracy. In other words, they’re doing poorer in general. … Our research suggests that for someone who has successfully predicted those events, we are going to predict that they are not likely to repeat their success very often. In other words, their overall capability is likely to be not as impressive as their apparent success seems to be.
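The mechanism behind that finding comes through in a toy Monte Carlo simulation (our sketch, not Denrell and Fang’s actual model): forecasters who amplify a noisy private signal are the ones most likely to call an extreme outcome correctly, and also the ones with the worst average accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_forecasters, n_rounds = 1000, 40

# Each forecaster amplifies a noisy private signal; "bold" types have factors > 1.
boldness = rng.uniform(0.5, 3.0, size=n_forecasters)
outcomes = rng.standard_normal(n_rounds)                              # true outcomes
signals = outcomes + rng.standard_normal((n_forecasters, n_rounds))  # noisy private signals
forecasts = boldness[:, None] * signals

mse = ((forecasts - outcomes) ** 2).mean(axis=1)   # overall accuracy (lower = better)

# An "extreme hit": forecasting beyond 2 sigma in a round where that actually happened.
extreme_hit = ((np.abs(forecasts) > 2) & (np.abs(outcomes) > 2)).any(axis=1)

print(f"mean error with an extreme hit:    {mse[extreme_hit].mean():.2f}")
print(f"mean error without an extreme hit: {mse[~extreme_hit].mean():.2f}")
```

In this toy world, an extreme hit is mostly a signature of boldness plus luck, which is why it predicts lower, not higher, overall accuracy, just as Fang describes.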

+ Hayes Davenport, a Freakonomics researcher (earlier work here, blogs here), takes a look at the predictive prowess of NFL pundits. (Short answer: not so good.)

How hard is it to accurately forecast something as simple as corn yield? (Photo by Tim Boyle/Getty Images)

+ Joe Prusacki directs the statistics division at the USDA’s National Agricultural Statistics Service, which means he helps make crop forecasts (read a primer here). He talks us through the process, and explains how bad forecasts inevitably produce some nasty e-mails:

PRUSACKI: Okay, the first one is: “Thanks a lot for collapsing the grain market today with your stupid” — and the word is three letters, begins with an “a” and then it has two dollar signs — “USDA report” … “As bad as the stench of dead bodies in Haiti must be, it can’t even compare to the foul stench of corruption emanating from our federal government in Washington, D.C.”

Nassim Taleb asks: Are you the butcher, or are you the turkey?

+ Our old friend Nassim Taleb (author of Fooled By Randomness and The Black Swan) shares a bit of his substantial wisdom as we ponder the fact that our need for prediction (and our disappointment when it fails) grows ever stronger as the world becomes more rational and routinized.


+ Tim Westergren, a co-founder of Pandora (whom you may remember from this podcast about customized education), talks through Pandora’s ability to predict what kind of music people want to hear based on what we already know we like:

WESTERGREN: I wouldn’t make the claim that Pandora can map your emotional persona. And I also don’t think frankly that Pandora can predict a hit because I think it is very hard, it’s a bit of a magic, that’s what makes music so fantastic. So, I think that we know our limitations, but within those limitations I think that we make it much, much more likely that you’re going to find that song that just really touches you.

Robin Hanson, an economist at George Mason University, argues that prediction markets are the way to go.


+ Robin Hanson, an economist at George Mason University and an avowed advocate of prediction markets, argues that such markets address the pesky incentive problems of the old-time prediction industry:


HANSON: So a prediction market gives people an incentive, a clear personal incentive, to be right and not wrong. Equally important, it gives people an incentive to shut up when they don’t know, which is often a problem with many of our other institutions. So if you as a reporter call up almost any academic and ask them vaguely related questions, they’ll typically try to answer them, just because they want to be heard. But in a prediction market most people don’t speak up. So in most of these prediction markets what we want is the few people who know the best to speak up and everybody else to shut up.
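Hanson’s own best-known design, the logarithmic market scoring rule (LMSR), makes that incentive concrete: you profit only if you move the market’s probability toward what actually happens, and staying silent costs you nothing. Here is a minimal Python sketch for a yes/no contract (the liquidity parameter and trade sizes are illustrative):

```python
import math

class LMSRMarket:
    """A minimal logarithmic market scoring rule for a yes/no event."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity: higher b means prices move more slowly
        self.q = [0.0, 0.0]   # outstanding shares of [NO, YES]

    def cost(self, q):
        # The market maker's cost function; trades are priced off its differences.
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        # Price of a share paying $1 if `outcome` occurs; doubles as the market probability.
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        # A trader pays the change in the cost function.
        before = self.cost(self.q)
        self.q[outcome] += shares
        return self.cost(self.q) - before

market = LMSRMarket()
print(f"P(yes) = {market.price(1):.2f}")      # 0.50: the market starts out agnostic
paid = market.buy(1, 50)                      # an informed trader buys 50 YES shares
print(f"paid ${paid:.2f}; P(yes) now {market.price(1):.2f}")
```

Those 50 shares pay $50 if the event occurs, against roughly $28 paid, so the trade makes money only if the trader’s information really is better than the market’s. Someone with nothing to add has no profitable trade, which is the “shut up” incentive Hanson mentions.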

I hope you enjoy the hour. It was a most interesting exploration from our end. Thanks to the many, many folks who lent a hand and to our roster of truly excellent guests. See you on the radio.


Doug Van Ede

I wonder if anyone has looked at the Army Corps of Engineers’ ability (or inability) to predict the amount of water in the Missouri River basin, and their ability (or inability) to control it.

People in SD and IA have a perception that they made a very bad call.

I wonder if that is a fair perception.

Brian

Another source of good info on the subject:

“Future Babble” by Dan Gardner: why expert predictions fail, and why we believe them anyway.

KoKo the Talking Ape

Hi. It is extraordinarily difficult to find any given piece of information inside this or any show. Part of the problem is that the show is organized as a narrative, returning to an interviewee or topic not when it is logical but whenever it suits the story (e.g., building toward a larger point, or a summarizing quotation). It would help if you guys published transcripts of the shows (or do you?)

Thanks!

Matthew Philips

Here you go:

http://www.freakonomics.com/2011/06/30/the-folly-of-prediction-full-transcript/

Nikki

Where can I find a list of the songs used in this episode?

Taylor Marks

I wrote an app that predicts, months in advance, when the batteries in your mouse will die. I wrote an estimation algorithm for it (a kind of rolling average / linear approximation). In testing, I found that its predictions can swing wildly from day to day, but everyone always writes about how fantastic it is at making predictions and how helpful it is. Nobody ever complains. I wonder what would happen if I tossed out the current algorithm and just had it give random predictions that were within a month.

... probably wouldn't go even remotely well. My algorithm is definitely far better than straight random. Here's what MacWorld wrote about it in their magazine: http://www.macworld.com/article/1167554/battery_status_displays_the_battery_levels_of_your_macs_connected_hardware.html
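The “rolling average / linear approximation” idea Taylor describes is easy to sketch (hypothetical code, not the app’s actual algorithm), and the sketch also shows why such predictions swing: a little noise in a short window of readings changes the fitted slope, and the extrapolated zero-crossing moves a lot.

```python
import numpy as np

def predict_depletion_day(days, levels, window=14):
    """Fit a line to the last `window` battery readings and extrapolate to 0%."""
    t = np.asarray(days[-window:], dtype=float)
    y = np.asarray(levels[-window:], dtype=float)
    slope, intercept = np.polyfit(t, y, 1)   # least-squares line: level ~ slope * t + intercept
    if slope >= 0:
        return None                          # battery level isn't falling; no prediction
    return -intercept / slope                # the day the fitted line crosses 0%

# Noisy daily readings: adding one new reading can shift the answer by weeks.
days = list(range(10))
levels = [90, 89, 87, 88, 86, 85, 85, 83, 82, 81]
print(f"predicted depletion around day {predict_depletion_day(days, levels):.0f}")
```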

Cristobal

I work at a company that grows cherry tomatoes. Forecasting our volume just 48 hours in advance has an error of more than 10%!!

The error that the USDA has is minimal!! I will read much more about their methodology and see if we can apply some of it.

People tend to believe that a plant’s output depends very little on weather... They probably forgot about photosynthesis! Plants need water, nutrients, light, and heat to convert CO2 and H2O into O2 and carbohydrates (the overall reaction is 6 CO2 + 6 H2O → C6H12O6 + 6 O2).

Great work!