Our “Folly of Prediction” podcast made these basic points:
Fact: Human beings love to predict the future.
Fact: Human beings are not very good at predicting the future.
Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.
A couple of recent cases in point:
The National Oceanic and Atmospheric Administration predicted a particularly bad Atlantic hurricane season this year but, thankfully, was wrong, as noted by Dan Amira in New York magazine. It is hard to imagine that many people are unhappy about that.
Flip Pidot, co-founder and CEO of the American Civics Exchange, writes to let us know that the exchange has recently added cash prizes to their political prediction markets and is currently running two parallel government shutdown prediction markets, allowing for an interesting experiment:
At American Civics Exchange, we’ve just begun to implement cash prizes in our political prediction market (a sort of interim maneuver on our way to regulated exchange-traded futures).
For the government shutdown, we’re running two parallel markets – one in which traders buy and sell different shutdown end dates (with play money), yielding an implied odds curve and consensus prediction (below), and another in which traders simply log their best guess as to exact date and time of resolution (with no visibility into others’ guesses), with the closest prediction winning the real-money pot.
Members of the public are being encouraged to take on the Bank of England by betting on the U.K.’s future inflation and unemployment rates.
Free-market think tank the Adam Smith Institute on Wednesday launched two betting markets in an attempt to use the “wisdom of crowds” to beat the Bank of England’s official forecasters. Punters can place bets on what the rate of both U.K. inflation and unemployment will be on June 1, 2015.
Sam Bowman, the research director of the Adam Smith Institute, believes the new markets will “out-predict” official Bank of England predictions. “If these markets catch on, the government should consider outsourcing all of its forecasts to prediction markets instead of expert forecasters,” he said.
Let’s see if I can do it again.
All eyes are on Orb and Oxbow, the winners of the first two legs of the Triple Crown. Those two horses are likely to be heavy betting favorites in the Belmont. According to my model, they look okay, but not attractive at the odds they will go off at.
Instead, my numbers suggest a trio of long shots are the place to put your money: Palace Malice, Overanalyze, and Golden Soul. Each of those horses should pay about 15-1 if it were to pull off an upset victory.
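A 15-1 price has a simple arithmetic meaning that is worth spelling out: the bet breaks even only if the horse wins at least 1 time in 16. The sketch below (not the author's model, just the standard odds arithmetic) converts fractional odds to an implied win probability and computes the expected profit of a win bet given your own probability estimate; the stake and probability figures are illustrative assumptions.

```python
def implied_probability(odds_against):
    """Implied break-even win probability for fractional odds quoted as N-to-1.

    A horse paying 15-1 returns 15 units of profit per unit staked, so a
    bettor breaks even only if it wins 1 time in 16.
    """
    return 1.0 / (odds_against + 1.0)

def expected_profit(stake, odds_against, true_win_prob):
    """Expected profit on a win bet, given your own estimate of the win probability."""
    win = true_win_prob * stake * odds_against        # profit if the horse wins
    lose = (1.0 - true_win_prob) * stake              # stake lost otherwise
    return win - lose

# A 15-1 long shot must win 1/16 = 6.25% of the time to break even.
print(implied_probability(15))        # 0.0625
# If a model puts its true chance at 10%, a $10 win bet has positive expectation:
print(expected_profit(10, 15, 0.10))  # 15.0 - 9.0 = 6.0
```

This is why a long shot can be the "attractive" bet even though it will probably lose: the model only needs the true win probability to exceed the odds-implied one.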
I made a mess out of this year’s Kentucky Derby. The worst part is that a bunch of friends placed bets using my picks, collectively losing a large stack of money.
After the Kentucky Derby, I blogged about the misery, noting what a strange race the Derby was:
The race is 1.25 miles long and there were 19 horses in the race. Of the eight horses who were in the front of the pack after one-fourth of a mile, seven ended up finishing in back: 12th, 14th, 15th, 16th, 17th, 18th, 19th. Only one horse that trailed early also finished poorly, and that horse started terribly and was way behind the field from the beginning. In contrast, the horses who ended up doing well were in 16th, 15th, 17th, 12th, and 18th place early on in the race. Basically, there was a nearly perfect negative correlation between the order of the horses early in the race and the order of the horses at the end of the race!
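The "nearly perfect negative correlation" claim can be made precise with Spearman's rank correlation, which compares the ordering of horses early in the race with their finishing order. The snippet below is a plain-Python sketch; the 19-horse data here is illustrative (a perfectly inverted field, which the post says the Derby came close to), not the actual Derby chart.

```python
def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 19-horse field in which the early order is exactly reversed at
# the finish -- the extreme case the Derby approximated.
early = list(range(1, 20))       # positions after the first quarter mile
finish = list(range(19, 0, -1))  # finishing positions
print(round(spearman(early, finish), 6))  # -1.0
```

A value of -1 means the early order was exactly inverted at the wire; a big field finishing anywhere near that is, as the post says, a very strange race.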
If you’re the kind of person who cares about “The Folly of Prediction” and The Signal and the Noise, you may want to read Amy Zegart’s Foreign Policy piece about predictions. Making predictions within the intelligence community, for example, is a different game than betting on basketball:
In March Madness, everyone has access to the same information, at least theoretically. Expertise depends mostly on how geeky you choose to be, and how much time you spend watching ESPN and digging up past stats. In intelligence, however, information is tightly compartmented by classification restrictions, leaving analysts with different pieces of data and serious barriers to sharing it. Imagine scattering NCAA bracket information across 1,000 people, many of whom do not know each other, some of whom have no idea what a bracket is or the value of the information they possess. They’re all told if they share anything with the wrong person, they could be disciplined, fired, even prosecuted. But somehow they have to collectively pick the winner to succeed.
In other spheres, however, predictions just keep getting better. “Smart people are finding clever new ways of generating better data, identifying and unpacking biases, and sharing information unimaginable 20 or even 10 years ago,” writes Zegart.
Answer: pretty bad! From a 1999 Journal of Business paper by Chris Avery and Judy Chevalier …
In the heat of a Presidential campaign, it can be hard to pay attention to other news. But a small-seeming story out of Italy yesterday has, to my mind, the potential to shape the future as much as a Presidential election.
As reported by ABC, the BBC, the Wall Street Journal, the New York Times, and elsewhere, an Italian court has convicted seven earthquake experts of failing to appropriately sound the alarm bell for an earthquake that wound up killing more than 300 people in L’Aquila in 2009. The experts received long prison sentences and fines of more than $10 million. (Addendum: Roger Pielke Jr. discusses the “mischaracterizations” of the verdict.)
There is, of course, the chance that the verdict will be thrown out on appeal, discredited as an emotional response to a horrible tragedy.