The Downside of Research: How Small Uncertainties Can Lead to Big Differences

Contrary to popular perception, most research yields very few conclusions with 100 percent certainty. That's why you'll often hear economists state their conclusions with "95 percent certainty." It means they're pretty sure, but there's still a small margin for error. The science of climate change is no different, and, according to a Washington Post blog post, scientists are currently struggling with how to explain that uncertainty to the public. "What do you do when there’s a small but real chance that global warming could lead to a catastrophe?" asks Brad Plumer. "How do you talk about that in a way that’s useful to policymakers?"
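For readers who want to see where that "95 percent" comes from, here's a minimal sketch of a 95 percent confidence interval around an estimate. The effect size, standard deviation, and sample size are invented purely for illustration:

```python
import math

# Hypothetical study results -- these numbers are invented for illustration.
sample_mean = 2.4      # estimated effect (e.g., percentage-point change)
sample_std = 5.0       # sample standard deviation
n = 400                # number of observations

# Standard error of the mean, and the usual 95% interval (about +/- 1.96 SE).
standard_error = sample_std / math.sqrt(n)
margin_of_error = 1.96 * standard_error

low, high = sample_mean - margin_of_error, sample_mean + margin_of_error
print(f"Estimate: {sample_mean:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
# If the study were repeated many times, about 95% of intervals built this
# way would contain the true effect -- "pretty sure, with a margin for error."
```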

The 21st Century: Another American Century? Don't Bet on It

In conjunction with our latest Freakonomics Radio podcast, "The Folly of Prediction," I decided to reach out to a former professor of mine, Raymond Horton, whose modern political economy class is a student favorite at Columbia Business School. I wanted to know what Horton thought the worst prediction ever was, particularly regarding the intersection of politics and economics. He immediately pointed to a Foreign Affairs essay written by Mortimer Zuckerman in 1998, in which Zuckerman boldly lays out the case that, like the 20th century, the 21st will also be marked by American dominance.

We're barely a decade into the new century, so you may think it's too early to pass judgment on Zuckerman's prediction. But given the way things have played out over the last several years, it does look to be on shaky ground. At least that's the opinion of Ray Horton.

Once you've finished reading Horton's essay, we'd love to hear what you think are some of the worst predictions ever.

An Algorithm That Can Predict the Weather a Year in Advance

In our latest podcast, "The Folly of Prediction," we poke fun at the whole notion of forecasting. The basic gist: whether it's Romanian witches or Wall Street quant wizards, we love to predict things -- but we're generally terrible at it. (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.)

But there is one emerging tool that's greatly enhancing our ability to predict: algorithms. Toward the end of the podcast, Dubner talks to Tim Westergren, a co-founder of Pandora Radio, about how the company's algorithm is able to predict what kind of music people want to hear by breaking songs down into their basic components. We've written a lot about algorithms and the potential they have to vastly change our lives through customization, and perhaps to satisfy our demand for predictions with some robust results.
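Pandora's actual system (the Music Genome Project) is proprietary, but the basic idea -- score each song on its components, then recommend by similarity -- can be sketched roughly like this. The attribute values and song names below are entirely made up:

```python
import math

# Each song is scored on a few hypothetical attributes (tempo, acoustic-ness,
# vocal intensity, ...), scaled 0-1. Real systems use hundreds of attributes.
songs = {
    "Song A": [0.9, 0.2, 0.7],
    "Song B": [0.8, 0.3, 0.6],
    "Song C": [0.1, 0.9, 0.2],
}

def cosine_similarity(u, v):
    """Similarity of two non-negative feature vectors: 0 (unrelated) to 1 (same direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def recommend(liked_song, catalog):
    """Rank the rest of the catalog by similarity to a song the listener liked."""
    liked = catalog[liked_song]
    others = [(name, cosine_similarity(liked, vec))
              for name, vec in catalog.items() if name != liked_song]
    return sorted(others, key=lambda pair: pair[1], reverse=True)

print(recommend("Song A", songs))  # Song B should rank well ahead of Song C
```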

One of the first things that comes to mind when people hear the word forecasting is the weather. Over the last few decades, we've gotten much better at predicting the weather. But what if, through algorithms, we could extend our range of accuracy and, say, predict the weather up to a year in advance? That'd be pretty cool, right? And probably worth a bit of money, too.

That's essentially what the folks at a small company called Weather Trends International are doing. The private firm, based in Bethlehem, PA, uses technology first developed in the early 1990s to project temperature, precipitation, and snowfall trends up to a year ahead, all around the world, with more than 80% accuracy.
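The post doesn't say how that 80% figure is scored, but one simple way to grade a long-range forecast is to count how often it lands within some tolerance of what actually happened. A minimal sketch, with invented numbers:

```python
# Hypothetical monthly average-temperature forecasts made a year in advance,
# paired with what was actually observed (degrees F). All numbers are invented.
forecasts = [34, 41, 52, 61, 70, 79, 83, 81, 73, 60, 48, 37]
actuals   = [31, 46, 50, 63, 75, 77, 85, 80, 75, 58, 52, 40]

TOLERANCE = 3.0  # count a forecast as a "hit" if it's within 3 degrees

hits = sum(1 for f, a in zip(forecasts, actuals) if abs(f - a) <= TOLERANCE)
accuracy = hits / len(forecasts)
print(f"{hits}/{len(forecasts)} forecasts within {TOLERANCE} degrees "
      f"({accuracy:.0%} accuracy)")
```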

Italian Seismologists Charged with Manslaughter for Not Predicting Earthquake

In our latest Freakonomics Radio podcast, "The Folly of Prediction," we talk about the incentives behind making predictions, and how wrong predictions often go unpunished, which is why people make so many of them.

But recent news out of Italy seems to take the premise of punishing bad predictions a bit too far. From the New York Times:

Seven Italian seismologists and scientists went on trial on manslaughter charges on Tuesday, accused of not adequately warning residents of a central Italian region before an earthquake that killed 309 people in April 2009. Prosecutors say that the seven defendants, members of a national panel that assesses major risks, played down the risk of a major earthquake’s occurring even though there had been significant seismic activity near L’Aquila, the capital of the Abruzzo region, in the months before the quake.

Am I Good Enough to Compete in a Prediction Tournament?

Last spring, we posted on Phil Tetlock’s massive prediction tournament: Good Judgment. You might remember Tetlock from our latest Freakonomics Radio podcast, “The Folly of Prediction.” (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.)

Tetlock is a psychologist at the University of Pennsylvania, well known for his book Expert Political Judgment, in which he tracked 80,000 predictions over the course of 20 years. It turns out that humans are not great at predicting the future, and experts do just a bit better than a random guessing strategy.
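Those predictions were graded quantitatively; a standard yardstick for probability forecasts -- and roughly the kind of scoring used in work like Tetlock's -- is the Brier score, which rewards calibrated confidence and punishes confident misses. A minimal sketch with invented forecasts:

```python
def brier_score(probabilities, outcomes):
    """Mean squared gap between forecast probabilities and what happened (1 or 0).
    0 is perfect; a constant 50/50 guess scores 0.25 on binary events."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Invented example: an "expert" and a coin-flip strategy forecast five yes/no events.
outcomes = [1, 0, 1, 1, 0]                 # what actually happened
expert   = [0.55, 0.45, 0.5, 0.6, 0.5]     # expert's stated probabilities
chance   = [0.5] * len(outcomes)           # always say 50/50

print(f"Expert Brier score: {brier_score(expert, outcomes):.3f}")   # ~0.213
print(f"Chance Brier score: {brier_score(chance, outcomes):.3f}")   # 0.250
# Lower is better; the expert here beats blind guessing, but only just.
```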

Picking the NFL Playoffs: How the Experts Fumble the Snap

Our latest Freakonomics Radio podcast, “The Folly of Prediction,” is built around the premise that humans love to predict the future, but are generally terrible at it. (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.) But predictions about world politics and the economy are hard -- there are so many moving parts.

In the podcast, you'll hear from Freakonomics researcher Hayes Davenport, who ran the numbers for us on how accurate expert NFL picks have been over the last three years. He put together a guest post for us on football predictions.

As careers in journalism go, making preseason NFL predictions is about as safe as they come these days. The picks you make in August can't be reviewed for four months, and by that time almost nobody remembers or cares what any individual picker predicted. So when Stephen asked me to look at the success rate of NFL experts in predicting division winners at the beginning of the season, I was excited to look back at the last few years of picks and help offer this industry one of its first brushes with accountability.
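Hayes's exact bookkeeping isn't described here, but the arithmetic behind a success rate like this is simple: line up each expert's preseason division pick with the team that actually won that division. A minimal sketch, with placeholder team names and made-up picks:

```python
# Made-up data: one expert's preseason picks for four divisions, over three
# seasons, versus the teams that actually won. Team names are placeholders.
picks = {
    2008: {"East": "Team A", "North": "Team B", "South": "Team C", "West": "Team D"},
    2009: {"East": "Team A", "North": "Team E", "South": "Team C", "West": "Team F"},
    2010: {"East": "Team G", "North": "Team B", "South": "Team H", "West": "Team D"},
}
winners = {
    2008: {"East": "Team A", "North": "Team E", "South": "Team C", "West": "Team D"},
    2009: {"East": "Team G", "North": "Team E", "South": "Team C", "West": "Team D"},
    2010: {"East": "Team G", "North": "Team B", "South": "Team C", "West": "Team F"},
}

correct = total = 0
for season, divisions in picks.items():
    for division, picked_team in divisions.items():
        total += 1
        correct += (picked_team == winners[season][division])

print(f"Correct division picks: {correct}/{total} ({correct / total:.0%})")
```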

Freakonomics Poll: When It Comes to Predictions, Whom Do You Trust?

Our latest Freakonomics Radio podcast, "The Folly of Prediction," is built around the premise that humans love to predict the future, but are generally terrible at it. (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.)

There is a host of professions built around predicting some future outcome: from predicting the score of a sports match, to forecasting the weekend's weather, to telling what the stock market will do tomorrow. But is anyone actually good at it?

This Week in Corn Predictions: The USDA Got It Right (Almost)

We've been having some fun recently at the expense of people who like to predict things. In our hour-long Freakonomics Radio episode "The Folly of Prediction" -- which will be available as a podcast in the fall -- we showed that humans are lousy at predicting just about anything: the weather, the stock market, elections. In fact, even most experts are only marginally better than a coin flip at determining a future outcome. And yet there remains a huge demand for professional predictors and forecasters.

Earlier this week, Stephen Dubner and Kai Ryssdal chatted about this on the Freakonomics Radio segment on Marketplace. The question remains: "Should bad predictions be punished?"

As mentioned in the segment, the U.S. Department of Agriculture's August crop yield report came out today. The result? Not bad actually. The corn yield forecast was revised downward by just 1.3% from its estimate last month. That's a considerable improvement over last year's big miss, when the August corn yield report had to be revised downward by almost 7%.

Should Bad Predictions Be Punished?

What do Wall Street forecasters and Romanian witches have in common? They usually get away, scot-free, with making bad predictions. Our world is awash in poor predictions -- but for some reason we can't stop making them, even though accuracy rates often barely beat a coin toss.

But then there's the U.S. Department of Agriculture's crop forecasting. Predictions covering a big crop like corn (U.S. farmers have planted the second-largest crop since WWII this year) usually fall within five percent of the actual yield. So how do they do it? Every year, the U.S.D.A. sends thousands of enumerators into cornfields across the country, where they inspect the plants, the conditions, and even "animal loss."
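That "within five percent" yardstick is just the forecast's absolute miss measured against the final yield. A minimal sketch, with invented bushels-per-acre figures:

```python
# Invented example: an August yield forecast versus the final measured yield,
# in bushels per acre.
forecast_yield = 153.0
actual_yield = 148.5

percent_error = abs(forecast_yield - actual_yield) / actual_yield * 100
print(f"Forecast missed the actual yield by {percent_error:.1f}%")
print("Within the usual five-percent band" if percent_error <= 5.0
      else "Outside the usual five-percent band")
```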

This week on Marketplace, Stephen J. Dubner and Kai Ryssdal talk about the supply and demand of predictions. You'll hear from Joseph Prusacki, the head of the USDA's Statistics Division, who's gearing up for his first major crop report of 2011 (the Street is already "sweating" it); Phil Friedrichs, who collects cornfield data for the USDA; and our trusted economist and Freakonomics co-author Steven Levitt.

Surprise, Surprise: The Future Remains Hard to Predict

"There is a huge discrepancy between the data and the forecasts."

In what realm do you think this "huge discrepancy" exists? The financial markets? Politics? Pharmaceutical research?

Given how bad humans are at predicting the future, this discrepancy could exist just about anywhere. But the quote above comes from University of Alabama in Huntsville climate scientist Roy Spencer, and it refers to computer models that predict global warming: