Bring Your Questions for FiveThirtyEight Blogger Nate Silver, Author of The Signal and the Noise

Nate Silver first gained prominence for his rigorous analysis of baseball statistics. He became even more prominent for his rigorous analysis of elections, primarily via his FiveThirtyEight blog. (He has also turned up on this blog a few times.)

Now Silver has written his first book, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t. I have only read chunks so far but can already recommend it. (I would like to think his research included listening to our radio hour “The Folly of Prediction,” but I have no idea.)

A section of Signal about weather prediction was recently excerpted in the Times Magazine. Relatedly, his chapter called “A Climate of Healthy Skepticism” has already been attacked by the climate scientist Michael Mann. Given the stakes, emotions, and general unpredictability that surround climate change, I am guessing Silver will collect a few more such darts. (Yeah, we’ve been there.)

In the meantime, he has agreed to field questions about his new book from Freakonomics readers. So feel free to post your questions in the comments section below, and we'll post his replies in due course. Here, to get you started, is the book's table of contents:

 

1. A CATASTROPHIC FAILURE OF PREDICTION

2. ARE YOU SMARTER THAN A TELEVISION PUNDIT?

3. ALL I CARE ABOUT IS W’S AND L’S

4. FOR YEARS YOU’VE BEEN TELLING US THAT RAIN IS GREEN

5. DESPERATELY SEEKING SIGNAL

6. HOW TO DROWN IN THREE FEET OF WATER

7. ROLE MODELS

8. LESS AND LESS AND LESS WRONG

9. RAGE AGAINST THE MACHINES

10. THE POKER BUBBLE

11. IF YOU CAN’T BEAT ’EM . . . 

12. A CLIMATE OF HEALTHY SKEPTICISM

13. WHAT YOU DON’T KNOW CAN HURT YOU

 This post is no longer accepting comments. The answers to the Q&A can be found here.


RGJ

Hi Nate,

Baseball question, if you don't mind.

Why would a National League team, particularly one with financial constraints, not consider the following approach?

Sign no high-priced starting pitchers. Fill your pitching staff with middle relievers and relief pitchers with different attributes -- junk-ballers, power pitchers, sidearmers -- with a lefty/righty balance.

Balance your roster with righty and lefty position players.

In the course of a game, substitute (except for blowouts) for the pitcher when his place comes up in the order, and do so with a righty or lefty batter depending on the pitcher.

Advantages:

-- You bring an American League offense to the National League.
-- In key situations, you have a disproportionate number of favorable righty/lefty pitching matchups.
-- In key situations, you have a disproportionate number of favorable righty/lefty fielding matchups.
-- Rather than seeing the same pitcher four or five times in a row, an opposing batter may face pitchers of different styles and handedness in a game.
-- You keep payroll down by avoiding high-priced starting pitchers.
-- Arguably, there may be an injury benefit from avoiding 100-plus-pitch outings.
-- You keep the entire roster active.

Disadvantages/Challenges:

-- Will have trouble attracting pitchers focused on traditional counting stats and Ws.
-- Will pose a pitching-management challenge in rest and innings pitched.

Naturally, in a blowout, you could simply have a young pitcher stretch out for seven or eight innings and save everyone else's arm. And it is worth noting that today, when a team's ace has a big lead, he usually pitches those innings himself in search of the almighty W.

I ran this by Bill James on his website. His comment was that the only downside he saw was the part about attracting top starting pitchers. He also said that, in reality, baseball has been moving this way, albeit glacially, with the growth of middle relievers and closers and emphasis on righty/lefty situational matchups.

Thoughts?


Laura Harrison McBride

Have you done any research on homeopathy -- that is, on why the allopathic medical community believes it doesn't work, despite more anecdotal evidence than you could fit in the Bodleian Library, insisting that only highly fallible so-called double-blind studies (the things that have given us at least six "miracle" drugs recalled for killing people at alarming rates) can prove a medicine's effectiveness?

Stuart

Are polls a very good predictor of actual behavior? It seems to me they are lousy indicators of future behavior and there are much better models of behavior (starting a diet, quitting smoking, beginning an exercise program, etc).

If I am right that polling is not a very good indicator of future behavior, then why do otherwise reputable organizations rely on and amplify the "findings" of such polls, and why do scientific journals publish polls in the same manner they publish true economic studies or models?

For instance, the Legacy Foundation recently conducted a major poll where 40 percent of respondents said if the FDA were to ban menthol cigarettes they would quit smoking. It would seem that an economic model based on cigarette taxes going up or some other model would do a much better job of predicting actual quitting smoking behavior in the event of such an FDA ban.

"A new study released today in the American Journal of Public Health (AJPH) presents the first peer-reviewed data on menthol smokers' behavioral intention if menthol cigarettes were taken off the market, a decision pending with the U.S. Food and Drug Administration (FDA). The results of the national survey show that nearly 40 percent of menthol smokers say they would quit if menthol cigarettes were no longer available."

http://www.prnewswire.com/news-releases/many-menthol-smokers-say-they-would-quit-if-menthols-were-no-longer-available-170563516.html


James

Nate,

Is there any research on whether partisan political campaigning can prevent or delay an economic recovery? My theory is that "confidence" in the economy may have weakened as a direct consequence of the Republican primary season and heightened election-year politics in general. Republicans are very eager this year to tell us that the economy is bad under President Obama. Does that rhetoric actually cause the economy to be worse? Is economic growth harder to come by in election years in general? Maybe due to uncertainty?

Thanks for your great work,
James

mannyv

Q: what factors can help people see which predictions won't fail?

As an example, you use various models for 538. How do you know which set of models is correct at any given time?

David

Is there any circumstance where an event would shake up the political landscape so much you would pull the 538 forecast, and if so, can you give me a sense of what kind of magnitude it would need to be?

Bob

You mentioned in a recent blog that certain states are often highly correlated in elections. Do you specifically model these correlations or are the states uncorrelated in your simulations?
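(For readers wondering why Bob's question matters: here is a toy Monte Carlo of my own -- not FiveThirtyEight's actual model -- with two hypothetical swing states where a candidate leads by 1 point and polling errors have a 3-point standard deviation. Correlated errors make sweeping both states, or losing both, much likelier than independent simulations would suggest.)

```python
import numpy as np

# Toy illustration (not FiveThirtyEight's actual model): two hypothetical
# swing states, each with a 1-point lead and a 3-point polling error.
rng = np.random.default_rng(0)

def win_both_prob(rho, n_sims=100_000, lead=1.0, sigma=3.0):
    """Monte Carlo probability of carrying both states when the two
    states' polling errors have correlation rho."""
    cov = [[sigma**2, rho * sigma**2],
           [rho * sigma**2, sigma**2]]
    margins = rng.multivariate_normal([lead, lead], cov, size=n_sims)
    return (margins > 0).all(axis=1).mean()

print(win_both_prob(0.0))  # independent errors
print(win_both_prob(0.8))  # correlated errors: winning (or losing) both is likelier
```

Correlation does not change either state's individual win probability; it fattens the tails of the joint outcome, which is exactly why it matters for an electoral-vote forecast.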

Michael J

How much of Romney's current deficit is due to factors beyond his control, and how much of it do you think is due to problems of his own making? Where do you think the forecast would stand right now if Republicans had managed to find a nominee with political skill along the lines of Clinton or Reagan?

Fionn O'Donovan

I'd like to hear Nate's view on this question: how might we reliably distinguish predictions that are accurate because their makers used sound scientific methodology from those that are simply lucky guesses?

I find this particularly interesting because philosophers of science since Imre Lakatos have argued that we can tell a theory is scientific by whether it predicts future events accurately -- early physicists, for example, were able to predict the occurrence of certain solar events. But it still seems possible, of course, that a person with no knowledge at all of a subject could make an accurate prediction, or even a series of accurate predictions, simply by chance. How do we know where to draw the line?
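(One standard way to put a number on Fionn's worry -- a sketch of my own, not something from the book -- is a binomial calculation: how often would a forecaster with zero knowledge, guessing 50/50 on each call, compile a given track record by pure chance?)

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of at least k correct calls out of n by pure chance,
    assuming each call is an independent 50/50 guess."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A forecaster who goes 8-for-10 on coin-flip-like races could still be guessing:
print(round(p_at_least(8, 10), 3))  # about 0.055
```

A roughly 5 percent chance of guessing that well is exactly why a short streak alone can't separate skill from luck; only a long record, or out-of-sample tests, drives that probability toward zero.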

Chris B.

The ability to make an accurate prediction given a random, representative sample is mathematically well known, and the required size turns out (as I remember from too many years ago of math) to be around 600 "samples" (people, microbes, Division III referees). This group should tell us within some reasonable accuracy what an outcome should be (Akin pulls out a win, the African subcontinent suffers longer with malaria, three of Golden Tate's fingers on a ball constitutes simultaneous possession).
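(The rule-of-thumb figure Chris cites falls out of the standard margin-of-error formula for a proportion from a simple random sample; a quick sketch of my own, for reference:)

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a
    simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n)

# ~600 respondents yields roughly a 4-point margin, matching the rule of thumb.
print(round(100 * margin_of_error(600), 1))  # about 4.0
```

Note the square root: quadrupling the sample only halves the margin, which is why pollsters settle for samples in the hundreds rather than the tens of thousands.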

In practice, this just never seems to work out correctly. If it did, I believe all pollsters would hover around the same data points, with fluctuations accounted for in their error bars. So why doesn't it?

Selection bias ("We sampled more than 5,000 married, white, stay-at-home mothers of two with husbands earning over $350,000")
Question bias ("You say you're a Democrat? Do you think it's more likely Obama will raise taxes, or pull the still-beating heart out of a human, Mola Ram style?")
Mathematical bias ("I count in base 10, but then I add (unconverted) in base 12").

Why do polls vary significantly? Why are exit polls so dicey? And given what we know about voters' demographics, their alignments in exit polls, and their makeup of the electorate, why is it so hard to extrapolate to better data?

P.S. The next time you predict my Cleveland Indians to do well, I wish you would bury that feeling way down deep and leave it unexpressed. You are to lucky talismans what the rabbit is to the rabbit's foot.


Morgan Warstler

On Sept. 4, 2010, Nate told voters (via the NYT) that the odds of a 60-seat gain by the GOP were only 1 in 4.

How badly does Obama lose, if Nate is exactly as wrong in the same direction as Nate was wrong in 2010?

And since Nate was that wrong once, if Nate is that wrong again in 2012, shouldn't we deduce that Nate should dramatically refactor his model to favor the right more than it currently does?

Dan Schroeder

What's your assessment of economics as a discipline, judged in terms of its ability to make politically useful predictions? For example, can economists predict with any reliability what the economic impact of a tax cut or a government spending program will be?

Grossamer

Is it possible that the Republicans' redistricting has become so effective that it is backfiring on moderate Republicans and has allowed the far right to become more powerful? If this is the case and Obama wins, it would seem that redistricting was a factor. Yes? No?

J. Cross

Way back in 2005, Steven Levitt said: "My contention is that the secret to Oakland’s success has little to do with the things described in Moneyball, such as the emphasis on finding the skills in baseball that are good at producing runs, but not properly valued by the market."

To support this statement he said: "The reason the A's win, year after year, is because they have better pitchers than anyone else. The 2004 season is typical: the A's were ninth out of fifteen teams in the American League in scoring runs, but had the second-lowest ERA."

Did Levitt forget about park adjustments? Who was right, Lewis or Levitt?

Liza Brings

Every election cycle has something unique that could be huge: in 2008, the Bradley effect; in 2012, the stock market up and employment down. How do you factor these in?

John

Anxiously awaiting tomorrow's release on Amazon/Kindle, so I haven't read it yet, but I'm wondering:

When predictions involve human "systems" and behavior (social, economic, political, etc.) that are by their very nature adaptive, how do you deal with the tricky "Heisenberg principle"-like effect, where the very act of predicting becomes a factor that adds information, alters the system, and influences individual and/or collective behavior?

Grenouille

Given that your political predictions rely on polling, how much faith do you have in pollsters asking good questions? Granted, the question "who would you vote for..." is straightforward enough, but questions like "do you feel better off than you were four years ago..." and "who do you trust..." can be very biased or vague when asked in a certain way. I guess what I am asking is: how far can we reasonably trust the accuracy of a poll when there may be flaws in the delivery of the survey that either purposely or accidentally sway the response?