New Freakonomics Radio Podcast: The Folly of Prediction

Listen now:

Fact: Human beings love to predict the future.

Fact: Human beings are not very good at predicting the future.

Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.

But wouldn’t it be nice if it did?

That is the gist of our latest Freakonomics Radio podcast, called “The Folly of Prediction.” This is the fourth of five hour-long podcasts we’ve been releasing recently. Some of you may have heard them on public-radio stations around the country, but now all the hours are being fed into our podcast stream. (You can download/subscribe at iTunes, get the RSS feed, listen live via the media player above, or read the transcript here.)

We explore quite a few realms of prediction — most unsuccessful, some more so — and you’ll hear from quite a variety of people, probably more than in any other show. Among them:

+ Vlad Mixich, a reporter in Bucharest, who describes how the Romanian “witch” industry (fortune-tellers, really) has been under attack — including a proposal to fine and imprison witches if their predictions turn out to be false.

+ Steve Levitt (you’ve maybe heard of him?) explains why bad predictions abound:

LEVITT: So, most predictions we remember are ones which were fabulously, wildly unexpected and then came true. Now, the person who makes that prediction has a strong incentive to remind everyone that they made that crazy prediction which came true. If you look at all the people, the economists, who talked about the financial crisis ahead of time, those guys harp on it constantly. “I was right, I was right, I was right.” But if you’re wrong, there’s no person on the other side of the transaction who draws any real benefit from embarrassing you by bringing up the bad prediction over and over. So there’s nobody who has a strong incentive, usually, to go back and say, Here’s the list of the 118 predictions that were false. … And without any sort of market mechanism or incentive for keeping the prediction makers honest, there’s lots of incentive to go out and to make these wild predictions.

Phil Tetlock found that expert predictors aren’t very expert at all.

+ Philip Tetlock, a psychology professor at Penn and author of Expert Political Judgment (here’s some info on Tetlock’s latest forecasting project) provides a strong empirical argument for just how bad we are at predicting. He conducted a long-running experiment that asked nearly 300 political experts to make a variety of forecasts about dozens of countries around the world. After tracking the accuracy of about 80,000 predictions over the course of 20 years, Tetlock found …

TETLOCK: That experts thought they knew more than they knew. That there was a systematic gap between subjective probabilities that experts were assigning to possible futures and the objective likelihoods of those futures materializing … With respect to how they did relative to, say, a baseline group of Berkeley undergraduates making predictions, they did somewhat better than that. How did they do relative to purely random guessing strategy? Well, they did a little bit better than that, but not as much as you might hope …
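The “systematic gap” Tetlock describes is a calibration measurement: bucket an expert’s predictions by the probability they stated, then compare each bucket’s stated probability with how often those events actually happened. Here is a minimal sketch of that computation, using made-up numbers for a hypothetical overconfident expert (none of these figures come from Tetlock’s data):

```python
import random
from collections import defaultdict

random.seed(3)

# Illustrative assumption: events this expert calls "90% likely"
# actually happen only 70% of the time, and so on down the scale.
stated_vs_actual = {0.9: 0.7, 0.7: 0.55, 0.5: 0.5}

outcomes = defaultdict(list)
for _ in range(10_000):
    stated = random.choice(list(stated_vs_actual))          # expert's stated probability
    happened = random.random() < stated_vs_actual[stated]   # whether the event occurs
    outcomes[stated].append(happened)

# Calibration table: stated probability vs. realized frequency.
for stated in sorted(outcomes, reverse=True):
    freq = sum(outcomes[stated]) / len(outcomes[stated])
    print(f"said {stated:.0%} -> happened {freq:.0%}")
```

A well-calibrated forecaster’s realized frequencies would match the stated probabilities; the gap in each row is the overconfidence Tetlock measured.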

Christina Fang, whose research offers evidence that the people who correctly predict extreme outcomes are, on average, bad predictors.

+ Christina Fang, a professor of management at NYU’s Stern business school, also gives us a good empirical take on predictive failure. She wanted to know about the people who make bold economic predictions that carry price tags in the many millions or even billions of dollars. Along with co-author Jerker Denrell, Fang gathered data from the Wall Street Journal’s Survey of Economic Forecasts to measure the success of these influential financial experts. (Their resulting paper is called “Predicting the Next Big Thing: Success as a Signal of Poor Judgment.”) The takeaway: the big voices you hear making bold predictions are less trustworthy than average:

FANG: In the Wall Street Journal survey, if you look at the extreme outcomes, either extremely bad outcomes or extremely good outcomes, you see that those people who correctly predicted either extremely good or extremely bad outcomes, they’re likely to have overall lower level of accuracy. In other words, they’re doing poorer in general. … Our research suggests that for someone who has successfully predicted those events, we are going to predict that they are not likely to repeat their success very often. In other words, their overall capability is likely to be not as impressive as their apparent success seems to be.
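The selection effect Fang describes can be illustrated with a toy simulation (my own sketch, not Denrell and Fang’s actual model): give every forecaster the same quality of noisy signal, vary only how boldly they report it, and then compare the overall accuracy of those who ever correctly called an extreme outcome with those who never did. Because cautious forecasters rationally shrink toward the average, almost all the correct extreme calls come from the bold, and the bold are less accurate overall:

```python
import random
import statistics

random.seed(7)

N_FORECASTERS = 400
N_PERIODS = 100
EXTREME = 2.0  # an "extreme" outcome = two standard deviations from the mean

forecasters = []  # (mean squared error, ever correctly called an extreme?)
for _ in range(N_FORECASTERS):
    # Boldness is a personal style, unrelated to skill at reading the signal.
    # The statistically optimal forecast here shrinks the noisy signal by 0.5;
    # bolder forecasters shrink less (or amplify), which hurts accuracy.
    boldness = random.uniform(0.5, 2.0)
    sq_errors, hit = [], False
    for _ in range(N_PERIODS):
        y = random.gauss(0, 1)             # true outcome
        signal = y + random.gauss(0, 1)    # noisy private signal
        f = boldness * signal              # reported forecast
        sq_errors.append((f - y) ** 2)
        if abs(f) > EXTREME and abs(y) > EXTREME and f * y > 0:
            hit = True                     # correctly called an extreme outcome
    forecasters.append((statistics.fmean(sq_errors), hit))

hit_mse = statistics.fmean(m for m, h in forecasters if h)
miss_mse = statistics.fmean(m for m, h in forecasters if not h)
print(f"avg error, correctly called an extreme: {hit_mse:.2f}")
print(f"avg error, never called one:            {miss_mse:.2f}")
```

The forecasters who nailed an extreme outcome end up with a noticeably higher average error — apparent success as a signal of poor judgment, just as the paper’s title says.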

+ Hayes Davenport, a Freakonomics researcher (earlier work here, blogs here) takes a look at the predictive prowess of NFL pundits. (Short answer: not so good.)

How hard is it to accurately forecast something as simple as corn yield? (Photo by Tim Boyle/Getty Images)

+ Joe Prusacki directs the statistics division at the USDA’s National Agricultural Statistics Service, which means he helps make crop forecasts (read a primer here). He talks us through the process, and how bad forecasts inevitably produce some nasty e-mails:

PRUSACKI: Okay, the first one is: “Thanks a lot for collapsing the grain market today with your stupid” — and the word is three letters, begins with an “a” and then it has two dollar signs — “USDA report” … “As bad as the stench of dead bodies in Haiti must be, it can’t even compare to the foul stench of corruption emanating from our federal government in Washington, D.C.”

Nassim Taleb asks: Are you the butcher, or are you the turkey?

+ Our old friend Nassim Taleb (author of Fooled By Randomness and The Black Swan) shares a bit of his substantial wisdom as we ponder the fact that our need for prediction (and our disappointment when it fails) grows ever stronger as the world becomes more rational and routinized.


+ Tim Westergren, a co-founder of Pandora (whom you may remember from this podcast about customized education), talks through Pandora’s ability to predict what kind of music people want to hear based on what we already know we like:

WESTERGREN: I wouldn’t make the claim that Pandora can map your emotional persona. And I also don’t think frankly that Pandora can predict a hit because I think it is very hard, it’s a bit of a magic, that’s what makes music so fantastic. So, I think that we know our limitations, but within those limitations I think that we make it much, much more likely that you’re going to find that song that just really touches you.

Robin Hanson, an economist at George Mason University, argues that prediction markets are the way to go.


+ Robin Hanson, an economist at George Mason University and an avowed advocate of prediction markets, argues that such markets address the pesky incentive problems of the old-time prediction industry:


HANSON: So a prediction market gives people an incentive, a clear personal incentive, to be right and not wrong. Equally important, it gives people an incentive to shut up when they don’t know, which is often a problem with many of our other institutions. So if you as a reporter call up almost any academic and ask them vaguely related questions, they’ll typically try to answer them, just because they want to be heard. But in a prediction market most people don’t speak up. So in most of these prediction markets what we want is the few people who know the best to speak up and everybody else to shut up.
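The incentive structure Hanson describes can be made concrete with his own logarithmic market scoring rule (LMSR), the pricing mechanism behind many prediction markets. In this minimal two-outcome sketch (the liquidity parameter and trade size are arbitrary), the only way to move the price is to buy shares, so voicing an opinion costs real money if you are wrong — and people without an edge have every reason to stay quiet:

```python
import math

B = 100.0  # liquidity parameter: larger B = deeper market, slower price moves

def cost(q_yes: float, q_no: float) -> float:
    """LMSR cost function over outstanding shares of each outcome."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes: float, q_no: float) -> float:
    """Current market probability of 'yes' (a softmax over share counts)."""
    return math.exp(q_yes / B) / (math.exp(q_yes / B) + math.exp(q_no / B))

# A trader who believes "yes" is underpriced buys 50 "yes" shares.
q_yes, q_no = 0.0, 0.0
print(f"price before: {price_yes(q_yes, q_no):.2f}")  # 0.50
paid = cost(q_yes + 50, q_no) - cost(q_yes, q_no)
q_yes += 50
print(f"trader pays:  {paid:.2f}")
print(f"price after:  {price_yes(q_yes, q_no):.2f}")
```

If “yes” happens, the shares pay out 50 and the trader profits; if not, the purchase price is lost. That downside is exactly the honesty mechanism missing from punditry.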

I hope you enjoy the hour. It was a most interesting exploration from our end. Thanks to the many, many folks who lent a hand and to our roster of truly excellent guests. See you on the radio.


I know many out there will dismiss this suggestion out of hand due to the person referred to. But why not factor in a large target like Rush Limbaugh? That talking head often says he's documented to be 98% accurate. He indicates he has an independent fact checker to verify his prognostications. He is the only talking head who touts his level of accuracy.

He certainly would be one who sticks to his dogma, which, according to this podcast, is what makes the predictors so foolish.



Any thoughts on Bruce Bueno de Mesquita? Is he really that good at predicting, or do we only hear about the stuff that came true? I'm interested to know if his game-theoretic number crunching works, or if he is just very good at marketing the successes. He claims very high accuracy levels, but who knows what he's predicting most of the time.


As somebody who makes their living making predictions, I enjoyed the podcast. Although I'm trained in meteorology, I was (and am still) appalled at how little verification is done. Because of this, weather forecasting also falls victim to the blowhards who make extreme forecasts with little downside to being wrong.

However, not mentioned in the podcast are many people like me. I operate in near anonymity, forecasting everything from the probability of Gaddafi's overthrow to corn prices to the rate on German Bonds. If I'm right, I make money. If I'm wrong, I lose. We (I work with a group of like-minded math geeks) never disclose our forecasts to anyone and keep our methods secret. We are very different from the political pundits or weathermen on TV who have very little downside to being wrong and often have very little real skill.

Markets desperately want to know the future so they can better balance supply and demand. If I supply the markets with quality forecasts, I am richly compensated. However, if I'm wrong, the market demands that I pay.

It's a very pure way to make a living.



I'm sorry, is this podcast new? I feel like I already heard a podcast about prediction from Freakonomics.


Why is it that you illustrate your supposed folly of prediction theory with cherry-picked examples of trying to predict things which are inherently difficult or impossible to predict? Why not consider the vast number of things which are readily & successfully predicted? For instance, the times of sunrise & sunset, the tides, solar & lunar eclipses, all perfectly predictable. Or a prediction of my own: if I take the highway north of here between 3 and 6 pm on a weekday, I'll spend about 15 minutes stuck in traffic. I've had a better than 99% success rate on that over a decade or so.


I wonder could this propensity to predict UNPREDICTABLE events descend from early humans' needs to predict fairly predictable events? As you point out, nature has its own predictable rhythm. Perhaps understanding that helped early humans to hunt and gather successfully.

But maybe modern life is less predictable and our primitive desire to expect certain events hits modern uncertainty?


Exactly! Being able to predict where game would gather, when & where fruit would be ripe, or which tree the leopard lurks in, obviously would have considerable survival value. (And many of us who live in the country can still do it quite well, even though it's seldom a question of survival.)

We likewise quite successfully predict many things in "civilized" life, from road conditions to our chances of getting a particular job, or a date with that attractive person over there. The problem with many of the places where prediction fails is a sort of Heisenberg uncertainty, in which the act of predicting - say the stock market - changes the outcome.


There is a site that records and monitors the predictions of experts. Check it out and you can see the predictions that the experts got right, and the ones they got horribly, horribly wrong.


As a history teacher, I've come across many former leaders who predicted the future of their country, from former presidents of the US to former Kings of England. Nothing in my research has been more interesting in terms of predictions than Rasputin, the mystic monk who gained the Tsar's ear. I've copied his last letter (written to the Tsar's wife) below.

I write and leave behind me this letter at St. Petersburg. I feel that I shall leave life before January 1st. (He died Dec 16th)

I wish to make known to the Russian people, to Papa, to the Russian Mother and to the children, to the land of Russia, what they must understand. If I am killed by common assassins, and especially by my brothers the Russian peasants, you, Tsar of Russia, have nothing to fear, remain on your throne and govern, and you, Russian Tsar, will have nothing to fear for your children, they will reign for hundreds of years in Russia. But if I am murdered by boyars, nobles, and if they shed my blood, their hands will remain soiled with my blood, for twenty-five years they will not wash their hands from my blood. They will leave Russia.

(He was murdered by royal blood)

Brothers will kill brothers, and they will kill each other and hate each other, and for twenty-five years there will be no nobles in the country. Tsar of the land of Russia, if you hear the sound of the bell which will tell you that Grigory has been killed, you must know this: if it was your relations who have wrought my death then no one of your family, that is to say, none of your children or relations will remain alive for more than two years. They will be killed by the Russian people...I shall be killed. I am no longer among the living.

(A large civil war broke out; all of the Romanov family was executed, including the Tsar and his wife.)

Pray, pray, be strong, think of your blessed family.


A Nother

Forget the future! People are equally unequipped to evaluate what's actually happening as it's happening, as well as explaining what has happened in the recent or distant past. As my dad likes to quote Dirty Harry: "A man's got to know his limitations." They are humbling, if not distressing. And they're compounded, at least as much as they're apparently overcome, by our interdependence.


It would be interesting to flesh out the economic prediction segment of this post more. Specifically, how can an entire industry more or less based on prediction continue to keep growing its share of out-sized compensation while the value of its product is provably so low? The article sheds a lot of light on this, but what explains the huge scale, decade after decade?


"Specifically, how can an entire industry more or less based on prediction continue to keep growing its share of out-sized compensation while the value of its product is provably so low?"
You are confusing those in finance who make predictions in the media for no compensation with those who are hugely compensated for being successful.

Many money managers receive huge compensation for correct forecasts and are financially penalized for missing the forecasts. These people rarely forecast publicly. If it was easy, everyone would become a money manager. There are virtually no barriers to becoming a money manager. You don't need a license or education or connections to get started. Just put together a few bucks and get a track record going.

So, it ends up a lot like professional sports. A small group of very successful money "forecasters" ends up running most of the money and getting huge compensation. With huge assets under management, it becomes harder to succeed.

People get the idea that somehow people on Wall Street just get handed millions just for being in "the club". It doesn't work that way. There is a high correlation between skill and compensation.



Interesting and entertaining post. I do want to make a point, though, that I wouldn't use the word 'prediction' when it comes to Pandora's service of suggesting songs we might like based on songs we've already stated we like. It's a great service, I use it every day, but it's not a 'prediction' engine. It's a sophisticated indexing and search engine, where your universe is the world of music, and where the basic search criterion is: give me songs that are more or less similar to these other songs. And also where my tastes and preferences - the main factor that determines how successful, or how accurate, those results are - don't change quickly over time.

Derek Bruff

Good point. I felt the same way. Although the algorithms Pandora uses have a similar flavor to prediction algorithms, Pandora doesn't use them to predict the future, just to match something new with something known. There's no chronology involved in what Pandora does, as far as I know.


Interesting as always. One area that I would like to see explored that wasn't is how the predictors' predictions affect what they are predicting (try saying that 3 times fast...). For example, can Warren Buffett make a company profitable and successful simply by predicting that it will be profitable and investing in it, due to his stature in the investment community?

Derek Bruff

I'm so glad you included prediction markets in this episode. As I listened, I kept wondering if you'd get around to mentioning them. I've been fascinated by them since I read about them in James Surowiecki's "The Wisdom of Crowds." Contrasting prediction markets with other types of predictions, you made clear how important it is for there to be some downside to making bad predictions. Prediction markets handle this nicely.

The other reason they work better than most predictions is the crowdsourcing aspect. It's not one person predicting the outcome of, say, an election, it's hundreds or thousands. Surowiecki points out that prediction markets work best when they include a diversity of opinions. The market mechanism provides a way to "average" those opinions into a single prediction. The "error" in those predictions tends to be averaged out, leaving a good prediction.
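That averaging-out of errors is easy to see in a toy simulation (all numbers here are illustrative): if individual errors are independent, the mean of many noisy estimates lands far closer to the truth than a typical individual estimate does.

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 70.0   # e.g. a candidate's true vote share, in percent (made up)
N_TRADERS = 1000

# Each trader's estimate is the truth plus an independent idiosyncratic error.
estimates = [TRUE_VALUE + random.gauss(0, 10) for _ in range(N_TRADERS)]

crowd = statistics.fmean(estimates)
avg_individual_error = statistics.fmean(abs(e - TRUE_VALUE) for e in estimates)

print(f"typical individual error: {avg_individual_error:.1f}")
print(f"crowd-average error:      {abs(crowd - TRUE_VALUE):.2f}")
```

The caveat Surowiecki stresses applies here too: the averaging only works when opinions are diverse and errors are independent; if everyone leans on the same bad information, the errors no longer cancel.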

Thanks for a great show.


Boris Azais

Worst prediction ever: in 1967, the US Surgeon General stated that "the war on infectious diseases has been won." Eventually, HIV/AIDS and emerging and re-emerging pathogens (dengue, West Nile, SARS, pandemic flu, malaria, multi-drug-resistant tuberculosis, etc.) proved him unfortunately wrong. Full review of the infectious diseases status quo: Morens, Folkers & Fauci in Nature, Vol. 430, 8 July 2004, p. 242.

More an aspiration than a prediction: the long series of US presidential speeches (starting in the '60s) about solving the US's reliance on fossil fuels. Jon Stewart did a video cut of all these speeches in which each president (Nixon, Ford, Carter, Reagan, etc.) aimed at a 10+ year horizon for solving the problem. A moving target, as the latest deadline from President Obama is 2020. A bit like the sign found in some bars: "Free Beer! Starting tomorrow."


I've just started listening to your podcasts and I really enjoyed that, good work!

One thing that surprised me was your claim that people don't tend to remember FAILED predictions. Off the top of my head I could remember three failed predictions from Britain which have been loudly remembered and discussed ever since. These are:

1) Former prime minister Gordon Brown repeatedly announcing, when he was the chancellor of the exchequer, that Britain had moved past the old 'boom and bust' economics. Here his words are remembered by The Guardian newspaper in 2008, after Britain was once again in a bust:

2) In 1987 BBC weatherman Michael Fish reassuring viewers that the remains of a hurricane heading for Britain would miss the country; instead it struck the south of England, killing 18 people. The video clip of Fish making the weather prediction is quite famous.

3) In 1995 football team Manchester United had famously not bought any new players, deciding instead to stock its team from young players brought up from their youth team. Commentator Alan Hansen dismissed their chances of success, remarking that: 'You'll never win anything with kids'. Manchester United proceeded to win the 'double' of the FA Cup and the Premier League. Hansen's remark was never forgotten, particularly by Man United fans who relished reminding him of it!

So I don't know if Britain is culturally different to the US in some way, and does tend to focus on poor predictions. Perhaps it is simply a coincidence that I remember these few examples.



Just a thought on predictions and sports. You brought up the NFL case study, but an even better case study would be the NCAA basketball tournament. Millions of brackets are filled out each year. Years of documented entries, both by experts and novices. Plus the fact that there haven't been many perfect brackets. Amazing. I would've liked to have seen that done.

Jake Tober

Why nothing about inventory management and purchasing?

-Logistics Manager and Econ Grad


Dan Gardner, a journalist up here in Canada, has written an amazing book exploring the troubles with predictions, coming to many of the same conclusions as the ones in this podcast. I'd highly recommend his book as an easy-to-read overview of why people suck at predicting things.

Truly a great read that will have you doubting anything anyone ever writes, says, or declares on Twitter about the future.


Funny listening to this episode today and then seeing this tonight:

Quite the session on real and false supply, demand and incentives!

Great episode, I hadn't heard it when originally run.


Missed this entertaining podcast the first time around. Just heard the rebroadcast and thought of this recent clip on the Daily Show where John Oliver calls out Chris Matthews and his terrible political predictions. Too bad there's not more of this type of reporting!! Ha!

Ian Seymour

Did anyone predict that the prediction market websites would be shut down?
Seems an odd episode to rebroadcast when Intrade is being investigated for gambling.