False Positive Science: Why We Can't Predict the Future
This is a guest post from Roger Pielke, Jr., a professor of environmental studies at the University of Colorado at Boulder. Check out Pielke’s blogs for more on the perils of predicting and “false positive science.”
Sports provide a powerful laboratory for social science research. In fact, they can often be a better venue for research than an actual laboratory, because they offer a controlled setting in which people make frequent, real decisions, allowing for the collection of copious amounts of data. For instance, last summer, Daniel Hamermesh and colleagues used a database of more than 3.5 million pitches thrown in major league baseball games from 2004 to 2008 to identify biases in umpire, batter, and pitcher decision making. Similarly, Devin Pope and Maurice Schweitzer from the Wharton School used a dataset of 2.5 million putts by PGA golfers over five years to demonstrate loss aversion – golfers made more of the same-length putts when putting for par or worse than for birdie or better. Such studies tell us something about how we behave and make decisions in settings outside of sports as well.
A paper featured on the Freakonomics blog last week provided another lesson – a cautionary tale about the use of statistics in social science research to make predictions about the future. The paper, by Dan Johnson of Colorado College and Ayfer Ali, assembled an impressive dataset on Olympic medal performance by nations in the Winter and Summer Games since 1952. Using that data, the paper performed a number of statistical tests to explore relationships with variables such as population, GDP, and even the number of days of frost in a country (to test for the presence of wintry conditions).
The authors found a number of strong correlations between variables, which they called “intuitive,” such as the fact that rich countries win more medals and nations with snowy winters do better in the Winter Games. But the authors then committed a common social science error by concluding that the high correlations give “surprisingly accurate predictions beyond the historical sample.” In fact, the correlations performed quite poorly as predictors of medal outcomes, as I showed in an analysis on my blog. Simply taking the results from the previous Olympic Games as a predictor of the following Games provides better predictions than the multivariate regression equation that Johnson and Ali derived.
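To make that comparison concrete, here is a minimal sketch in Python of how one might score a model forecast against the naïve “last Games” baseline. The medal counts, the forecast numbers, and the function below are invented purely for illustration; they are not Johnson and Ali’s predictions or actual Olympic results.

```python
# Minimal sketch: scoring a forecast against a naive "previous Games" baseline.
# All numbers below are invented for illustration only; they are not the actual
# Johnson-Ali predictions or real medal counts.

def mean_absolute_error(predicted, actual):
    """Average absolute miss across countries."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical medal counts for a handful of countries.
previous_games = [36, 23, 14, 10, 7]   # naive baseline: last Games' totals
model_forecast = [41, 19, 18, 6, 12]   # hypothetical regression output
actual_result  = [37, 24, 13, 11, 8]   # invented "actual" outcomes

baseline_error = mean_absolute_error(previous_games, actual_result)
model_error = mean_absolute_error(model_forecast, actual_result)

# Skill relative to the naive baseline: positive means the model adds value,
# negative means you would have done better just using last Games' results.
skill = 1 - model_error / baseline_error
print(f"baseline MAE: {baseline_error:.1f}, model MAE: {model_error:.1f}, skill: {skill:+.2f}")
```

A forecast only earns its keep if that skill number comes out positive; with these illustrative figures it is negative, which is exactly the situation described above.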
What we have here is an illustration of what has more generally been called “false positive science” by Joseph Simmons and colleagues in a 2011 paper. They argue that “it is unacceptably easy to publish ‘statistically significant’ evidence consistent with any hypothesis.” When a statistically sophisticated model of Olympic medals yields predictions that perform worse than a naïve forecast based on the immediately previous Games, it should tell us that there is a lot going on that the statistical model developed by Johnson and Ali does not account for. Does such a poorly performing statistical model provide much insight beyond “intuition”? I’m not so sure.
More generally, while anyone can offer a prediction of the future, providing a prediction that improves upon a naïve expectation is far more difficult. Whether it is your mutual fund manager seeking to outperform an index fund, or a weather forecaster trying to beat climatology, we should judge forecasts by their ability to improve upon simple expectations. If we can’t beat simple expectations in the controlled environment of forecasting the outcomes of a sporting event, we should bring considerable skepticism to predictions about the far more complex settings of the economy and human behavior more generally.
Roger Pielke Jr. is a professor of environmental studies at the University of Colorado where he studies science, technology and decision making. Lately, he has been studying the governance of sport. His most recent book is The Climate Fix.