Am I Good Enough to Compete In a Prediction Tournament?

Last spring, we posted about Phil Tetlock’s massive prediction tournament, Good Judgment. You might remember Tetlock from our latest Freakonomics Radio podcast, “The Folly of Prediction.” (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.)

Tetlock is a psychologist at the University of Pennsylvania, best known for his book Expert Political Judgment, in which he tracked 80,000 predictions over the course of 20 years. It turns out that humans are not great at predicting the future, and experts do only a bit better than a random guessing strategy.

Good Judgment is Tetlock’s latest project, an ambitious plan to put 2,500 volunteers to the test in a forecasting tournament sponsored by the U.S. government. The Good Judgment research team includes Barb Mellers and Don Moore, with an advisory board of Daniel Kahneman, Robert Jervis, Scott Armstrong, Michael Mauboussin, Carl Spetzler and Justin Wolfers. The criterion for selection is a “serious interest in and knowledge about world affairs, politics, and global economic matters,” along with an interest in “testing their own forecasting and reasoning skills.”

Considering myself someone who fits this description, I signed up to represent Team Freakonomics in the tournament. I have an econ degree from the University of Chicago (and consider myself a decent tarot card reader), so why not? Plus, it pays $150 a year to answer some questions online, and at worst I could fall back on a “random guessing strategy,” which would give me pretty good odds in this game.

The entry process started innocently enough, with a survey designed to gauge one’s interest in forecasting. Then came a test of world knowledge, which was hard for a couple of reasons. First, Google searching isn’t allowed; the test is timed anyway, so you’d barely have a chance to search. Second, the test questions asked for a range of how true a given statement might be. For instance: a certain country’s GDP is $X in a given year; how true is that claim? This made the exercise incredibly difficult, as I have enough knowledge to give an extreme answer – true or false – but not enough to give a more subtle one.

I also knew about “anchoring” (which Richard Thaler speaks about in our “Mouse in the Salad” podcast). My mind was likely playing tricks on me, offering answers that seemed right but probably weren’t.

This whole process did a number on my self-esteem: if I didn’t know about the world here and now, how could I possibly predict the future? I ended up completing the survey in segments; luckily there were some LSAT-type logic questions at the end and a fun IQ-shapes game.

I know what I was thinking by the end: boy, this should definitely pay more than $150! And I’m not even in the tournament yet!

Thankfully, I got notice yesterday that I’ve been accepted into the tournament (phew!). So let the games begin! Now all that’s left is for me to decide on a strategy: random guessing or actually trying to predict the future. Given what Tetlock’s research shows, that’s a tough call.

COMMENTS: 15

  1. Papa says:

    My experience has been a bit different, but I am enjoying the experiment so far. The only question that has closed was one I answered almost randomly, and got wrong. With a couple of questions I’ve gone from being pretty certain in one direction to being pretty certain in the opposite direction, in an embarrassingly short period of time, as new information comes in. Quite easy to sit in one’s armchair and pontificate, quite humbling to be confronted with one’s inaccuracies.

  2. Linda says:

    I was taken aback by how little I (thought I) knew about actual current affairs – too much reliance on being able to look up facts! I am curious to see how this plays out, though.
    As an afterthought: anybody have any idea of the gender breakdown of the participants?

    • Don Moore says:

      Without intending it to be so, we found our sample to be male-skewed. So while we value everyone’s participation, a special shout-out of appreciation to all the women who are taking part.

  3. Jake says:

    I’m also participating in this event, and so far I have been quite impressed with the way they have it set up. I’m anxiously awaiting the results of the first round of questioning, so we can see who ends up being the best predictor. If the results turn out like I’d expect them to, I would love to see a similar system put in place for major media pundits to register their predictions for major events, so you could see who is the most trustworthy with their predictions. This would help give some accountability to the prediction market, as has been mentioned in several previous posts.

    • Don Moore says:

      I completely agree that it would be great if professional prognosticators, especially those to whom the media give a high public profile, would make their predictions clear enough that we could actually score their accuracy. One way to accomplish that would be for their media hosts to push for precisely such clarity.

      Phil Tetlock and Barb Mellers have written quite persuasively on the value of such clear scorable predictions from the intelligence community when it comes to national security. But for them, having the courage to be clear means putting their heads on the political chopping block.

  4. Terry Murray says:

    As the Project Manager for the Good Judgment Project, I want to let Joshua (and anyone else interested), know that we are still taking volunteers. Our waiting list is about to be cleared again, so there’s an excellent chance that those who sign up now will be admitted to the study soon. To sign up, see http://surveys.crowdcast.com/s3/ACERegistration.
