Posts Tagged ‘forecasting’

Bring Your Questions for FiveThirtyEight Blogger Nate Silver, Author of The Signal and the Noise

Nate Silver first gained prominence for his rigorous analysis of baseball statistics. He became even more prominent for his rigorous analysis of elections, primarily via his FiveThirtyEight blog. (He has also turned up on this blog a few times.)

Now Silver has written his first book, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t. I have only read chunks so far but can already recommend it. (I would like to think his research included listening to our radio hour “The Folly of Prediction,” but I have no idea.)

A section of Signal about weather prediction was recently excerpted in the Times Magazine. Relatedly, his chapter called “A Climate of Healthy Skepticism” has already been attacked by the climate scientist Michael Mann. Given the stakes, emotions, and general unpredictability that surround climate change, I am guessing Silver will collect a few more such darts. (Yeah, we’ve been there.)



Freakonomics Poll: When It Comes to Predictions, Whom Do You Trust?

Our latest Freakonomics Radio podcast, “The Folly of Prediction,” is built around the premise that humans love to predict the future, but are generally terrible at it. (You can download/subscribe at iTunes, get the RSS feed, listen live via the media player above, or read the transcript here.)
There are a host of professions built around predicting a future outcome: calling the score of a sports match, forecasting the weekend weather, telling what the stock market will do tomorrow. But is anyone actually good at it?



This Week in Corn Predictions: The USDA Got it Right (Almost)

We’ve been having some fun recently at the expense of people who like to predict things. In our hour-long Freakonomics Radio episode “The Folly of Prediction” — which will be available as a podcast in the fall — we showed that humans are lousy at predicting just about anything: the weather, the stock market, elections. In fact, even most experts are only marginally better than a coin flip at calling a future outcome. And yet there remains a huge demand for professional predictors and forecasters.
Earlier this week, Stephen Dubner and Kai Ryssdal chatted about this on the Freakonomics Radio segment on Marketplace. The question remains: “Should bad predictions be punished?”
As mentioned in the segment, the U.S. Department of Agriculture’s August crop yield report came out today. The result? Not bad actually. The corn yield forecast was revised downward by just 1.3% from its estimate last month. That’s a considerable improvement over last year’s big miss, when the August corn yield report had to be revised downward by almost 7%.



Sign Up for a Prediction Tournament

You may remember Phil Tetlock from our Freakonomics Radio hour-long episode “The Folly of Prediction.” He’s a psychologist at Penn and author of the deservedly well-regarded book Expert Political Judgment. Tetlock and some colleagues have embarked on an ambitious new study of prediction — and even better, they’re looking for volunteers. Specifically, they’re looking for people “who have a serious interest in and knowledge about world affairs, politics, and global economic matters and are interested in testing their own forecasting and reasoning skills.”
Doesn’t that sound like you? You need to be 18 or older, with a college degree. The project even pays a small honorarium. The start date has been pushed back to September, but you’d better act fast if you want in.
Here’s more information from Tetlock and colleagues:



Google's New Correlation Mining Tool: It Works!

You may have heard of Google Trends. It’s a cool tool that shows you the ups and downs of the public’s interest in a particular topic—at least as revealed in how often we search for it. And you may have even heard of the first really important use of this tool: Google Flu Trends, which uses search data to try to predict flu activity. Now Google has released an amazing way to reverse engineer the process: Google Correlate. Just feed in your favorite weekly time series (or cross-state comparisons), and it will tell you which search terms are most closely correlated with your data.
So I tried it out. And it works! Amazingly well.
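The underlying idea is simple enough to sketch: take your target series, compute a correlation score against each candidate search-term series, and rank the candidates. Here is a minimal illustration in Python, using Pearson correlation and made-up weekly numbers (the series and search terms below are hypothetical, not actual Google Correlate data or its real algorithm):

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rank_correlates(target, candidates):
    """Return (term, r) pairs sorted by |correlation| with the target, strongest first."""
    scored = [(name, pearson(target, series)) for name, series in candidates.items()]
    return sorted(scored, key=lambda t: abs(t[1]), reverse=True)

# Toy weekly series: hypothetical flu-activity counts and three search terms.
flu = [2, 3, 5, 9, 14, 11, 6, 3]
candidates = {
    "flu symptoms":   [1, 2, 4, 8, 13, 10, 5, 2],  # tracks the target closely
    "cough medicine": [2, 2, 5, 8, 12, 12, 7, 4],
    "beach towels":   [9, 8, 7, 5, 3, 4, 8, 9],    # moves the opposite way
}
for name, r in rank_correlates(flu, candidates):
    print(f"{name}: r = {r:.2f}")
```

The real service presumably does this at enormous scale against millions of query series; the sketch just shows why a tightly tracking term rises to the top while a seasonally opposite one scores strongly negative.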