
Calling All Predictors to a New Forecasting Tournament

One of the hour-long Freakonomics Radio shows we’re currently producing is about prediction — the science behind it, the human need for it, the folly it often produces.
One person you’ll likely hear from in the program is Philip Tetlock, a psychologist at Penn and author of the deservedly well-regarded book Expert Political Judgment. It is a rigorous romp through the minefield of expert prediction, and essentially argues that the words “expert” and “prediction” should almost never occupy the same sentence.
So it is exciting to learn that Tetlock and a few colleagues are embarking on an ambitious new study of prediction — and even better, that they’re looking for volunteers. Specifically, they’re looking for people “who have a serious interest in and knowledge about world affairs, politics, and global economic matters and are interested in testing their own forecasting and reasoning skills.” Doesn’t that sound like you? You need to be a U.S. citizen, 18 or older, with a college degree. The project even pays a small honorarium.
Here’s more information from Tetlock and colleagues:

We are writing with an unusual but, we hope, intriguing request. We are in the process of recruiting knowledgeable people to participate in an unprecedented study of forecasting sponsored by the Intelligence Advanced Research Projects Activity (IARPA) and focused on a wide range of political, economic, and military trends around the globe. The goal of this unclassified project is to explore the effectiveness of techniques such as prediction markets, probability elicitation, training, incentives, and aggregation that the research literature suggests offer some hope of helping forecasters see further and more reliably into the future.
To join, go to http://www.goodjudgment.info.
There are several teams recruiting participants. Ours builds on Phil Tetlock’s work described in his book Expert Political Judgment; his co-P.I.s are Barb Mellers and Don Moore, and the advisory board includes Daniel Kahneman, Robert Jervis, Scott Armstrong, Michael Mauboussin, Carl Spetzler, and Justin Wolfers. The project is a multi-disciplinary effort to understand how people use the knowledge they have to develop expectations about the future, and what sorts of processes and strategies lead to success.
We need to recruit as many as 2,500 people who have a serious interest in and knowledge about world affairs, politics, and global economic matters, and who are interested in testing their own forecasting and reasoning skills. So please consider visiting the project website at http://www.goodjudgment.info, where you will find what you need to register, along with more information about the project.
We are committed to maintaining high standards for admission to this special program. And we would greatly welcome your participation if you are so inclined (please be advised that the minimum time commitment would be several hours, spent completing training exercises, grappling with forecasting problems, and updating your forecasts in response to new evidence throughout the year).
The primary motivation for participating should be Socratic: self-knowledge. But we can also offer a token honorarium of $150 to each participant for each completed year of the forecasting tournaments. And, although all participants will remain anonymous, the winners of the forecasting tournaments will know who they are and will be free to go public if they so wish.
Of course, we understand if you yourself do not have the time to engage in an exercise of this sort. But we would be very grateful if you are willing to pass this request on to colleagues and readers of your work who might be interested in participating in this program and who would be likely to qualify for admission.
We can promise the following: this will be an intellectually stimulating experience (indeed, if you find yourself bored, you should drop out); participants will have the opportunity to work with state-of-the-art techniques (training and incentive systems) designed to augment accuracy; and participants will receive feedback on, among other things, how well calibrated their subjective probability judgments are relative to others’ across various categories of problems.
In short, we think it will be fun. If we were not running it, we would volunteer ourselves.
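
If you’re wondering what feedback on “well calibrated” probability judgments might look like in practice, the standard yardstick in forecasting research (including Tetlock’s earlier work) is the Brier score. The letter doesn’t specify the tournament’s exact scoring rule, so treat the sketch below as illustrative only; the forecasts and outcomes in it are hypothetical.

```python
# A minimal sketch of Brier scoring, a common measure of probabilistic
# forecast accuracy. This is an illustration, not the project's own method.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.

    0.0 is a perfect score; always guessing 50% earns 0.25. Lower is better.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: probabilities assigned to five events that
# either occurred (1) or did not (0).
probs = [0.9, 0.7, 0.3, 0.8, 0.1]
happened = [1, 1, 0, 1, 0]

print(f"Brier score: {brier_score(probs, happened):.3f}")  # 0.048
```

A well-calibrated forecaster is one whose stated probabilities match observed frequencies: of the events she calls 70% likely, about 70% actually happen. The Brier score rewards exactly that, which is why tournaments like this one can give participants a hard number on how realistic their confidence is.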

