Archives for predictions



Making It

In 2009, I published a post on this blog about Sarah Dooley, which ended with the words:

I predict she is going to make it big. I’m not sure how, but remember you heard it here first.

A couple of months later, I received a letter out of the blue from a legit movie producer who said he was writing to thank me for that post because it had shown him how talented Sarah is. He wrote that he had just signed a contract with her to develop a screenplay.

Fast forward to the present, and while I can’t find any movies made from Sarah’s scripts, I was happy to hear this NPR interview with her about an album she just released. The album is called “Stupid Things.” My favorite song, “Gym Looks Nice,” is a school-dance tone poem that is characteristically moving. My daughter tells me that at one point it made it into the top twenty downloads on iTunes. Sarah’s also released a music video of her song “Peonies.”



More Predictions That Didn’t Come True

Thank you, Politico (the Magazine), for taking a look back at various predictions for 2013 to see how they worked out.

In our “Folly of Prediction” podcast, we discussed how the incentives to predict are skewed. Big, bold predictions that turn out to be true are handsomely rewarded; but predictions that turn out to be false are usually forgotten. With the cost of being wrong so low, the incentives to predict are high.

In his Politico piece called “Crystal Balderdash,” Blake Hounshell doesn’t let us forget the bad predictions. A few examples:



More Predictions, From Bad to Worse

Our “Folly of Prediction” podcast made these basic points:

Fact: Human beings love to predict the future.

Fact: Human beings are not very good at predicting the future.

Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.

A couple of recent cases in point:

The National Oceanic and Atmospheric Administration predicted a particularly bad Atlantic hurricane season this year but, thankfully, was wrong, as noted by Dan Amira in New York magazine. It is hard to imagine that many people are unhappy about that.

Here, as noted by Ira Stoll in the New York Sun, are the picks by ESPN experts at the start of the 2013 baseball season. How bad were their picks?



Predicting the End of the Government Shutdown

Flip Pidot, co-founder and CEO of the American Civics Exchange, writes to let us know that the exchange has recently added cash prizes to its political prediction markets and is currently running two parallel government-shutdown prediction markets, allowing for an interesting experiment:

At American Civics Exchange, we’ve just begun to implement cash prizes in our political prediction market (a sort of interim maneuver on our way to regulated exchange-traded futures).

For the government shutdown, we’re running two parallel markets – one in which traders buy and sell different shutdown end dates (with play money), yielding an implied odds curve and consensus prediction, and another in which traders simply log their best guess as to exact date and time of resolution (with no visibility into others’ guesses), with the closest prediction winning the real-money pot.

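To make the mechanics of the play-money market concrete, here is a minimal sketch of how an implied odds curve and a consensus prediction can be read out of contract prices. The dates and prices below are hypothetical, invented for illustration; they are not the exchange’s actual data or method.

```python
from datetime import date, timedelta

# Hypothetical last-traded prices for contracts on different shutdown
# end dates (play money). Real data from the exchange would replace this.
prices = {
    date(2013, 10, 14): 8.0,
    date(2013, 10, 17): 22.0,
    date(2013, 10, 21): 15.0,
    date(2013, 10, 28): 6.0,
}

# Normalize prices so they sum to 1, giving an implied probability for
# each end date -- the "implied odds curve."
total = sum(prices.values())
implied = {d: p / total for d, p in prices.items()}

# Consensus prediction: the probability-weighted average end date.
origin = min(implied)
expected_days = sum(prob * (d - origin).days for d, prob in implied.items())
consensus = origin + timedelta(days=round(expected_days))

for d, prob in sorted(implied.items()):
    print(f"{d}: {prob:.1%}")
print("Consensus end date:", consensus)
```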



Try Your Hand at Economic Forecasting

Think you can do a better job at predicting the economic future than all those economists and pundits?  Here’s your chance to prove it:

Members of the public are being encouraged to take on the Bank of England by betting on the U.K.’s future inflation and unemployment rates.

Free-market think tank the Adam Smith Institute on Wednesday launched two betting markets in an attempt to use the “wisdom of crowds” to beat the Bank of England’s official forecasters. Punters can place bets on what the rate of both U.K. inflation and unemployment will be on June 1, 2015.

Sam Bowman, the research director of the Adam Smith Institute, believes the new markets will “out-predict” official Bank of England predictions.  “If these markets catch on, the government should consider outsourcing all of its forecasts to prediction markets instead of expert forecasters,” he said.
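As a rough illustration of how a “wisdom of crowds” forecast can be extracted from such a betting market, the sketch below turns fractional odds on inflation outcomes into implied probabilities after stripping out the bookmaker’s margin. The buckets and odds are invented for illustration; they are not the Institute’s actual markets.

```python
# Hypothetical fractional odds on where U.K. inflation will fall on
# June 1, 2015. These numbers are invented for illustration only.
odds = {
    "below 1.5%": 7.0,   # 7/1
    "1.5%-2.5%": 0.8,    # 4/5
    "2.5%-3.5%": 2.5,    # 5/2
    "above 3.5%": 9.0,   # 9/1
}

# Fractional odds of x-to-1 imply a raw probability of 1 / (x + 1).
raw = {bucket: 1.0 / (o + 1.0) for bucket, o in odds.items()}

# Raw probabilities sum to more than 1 because of the bookmaker's margin
# (the "overround"); rescaling yields the crowd-implied forecast.
overround = sum(raw.values())
implied = {bucket: p / overround for bucket, p in raw.items()}

for bucket, p in implied.items():
    print(f"{bucket}: {p:.1%}")
```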



Looking for a Long Shot: My Belmont Predictions

I whiffed on the Kentucky Derby and caught lightning in a bottle at the Preakness.

Let’s see if I can do it again.

All eyes are on Orb and Oxbow, the winners of the first two legs of the Triple Crown. Those two horses are likely to be heavy betting favorites in the Belmont. And according to my model, they look okay, but not attractive at the odds they will go off at.

Instead, my numbers suggest that a trio of long shots is the place to put your money: Palace Malice, Overanalyze, and Golden Soul. Each of those horses should pay about 15-1 if it were to pull off an upset victory.
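Since the case for these picks rests on the gap between payoff odds and true win probability, a quick worked example may help. A winning $1 bet at 15-1 returns $15 in profit plus the stake, so the bet breaks even if the horse wins more than 1/(15+1), or 6.25 percent, of the time. A minimal sketch (the 10 percent win probability in the example is a made-up number for illustration):

```python
def breakeven_probability(odds_against: float) -> float:
    """Win probability needed to break even at odds_against-to-1."""
    return 1.0 / (odds_against + 1.0)

def expected_value(stake: float, odds_against: float, win_prob: float) -> float:
    """Expected profit on a bet of `stake` at odds_against-to-1."""
    return win_prob * stake * odds_against - (1.0 - win_prob) * stake

# At 15-1, a horse needs to win more than 6.25% of the time to be worth backing.
print(f"Break-even win probability at 15-1: {breakeven_probability(15):.2%}")

# If a model put a long shot's chances at, say, 10% (hypothetical),
# the bet would have positive expected value:
print(f"EV of $1 at 15-1 with a 10% win chance: ${expected_value(1, 15, 0.10):.2f}")
```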



Redemption at the Preakness

I made a mess out of this year’s Kentucky Derby.  The worst part is that a bunch of friends placed bets using my picks, collectively losing a large stack of money.  

After the Kentucky Derby, I blogged about the misery, noting what a strange race the Derby was:

The race is 1.25 miles long and there were 19 horses in the race. Of the eight horses who were in the front of the pack after one-fourth of a mile, seven ended up finishing in back: 12th, 14th, 15th, 16th, 17th, 18th, 19th. Only one horse that trailed early also finished poorly, and that horse started terribly and was way behind the field from the beginning. In contrast, the horses who ended up doing well were in 16th, 15th, 17th, 12th, and 18th place early on in the race. Basically, there was a nearly perfect negative correlation between the order of the horses early in the race and the order of the horses at the end of the race!

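To put a number on that “nearly perfect negative correlation,” here is a minimal sketch that computes the correlation between early running position and finishing position. Only some of the positions appear in the excerpt above, so the pairs below are a hypothetical reconstruction patterned on the description, not the actual race chart.

```python
# Hypothetical (early_position, final_position) pairs loosely patterned
# on the Derby description above; the real race chart would replace these.
pairs = [
    (1, 12), (2, 14), (3, 15), (4, 16), (5, 17), (6, 18), (7, 19),
    (16, 1), (15, 2), (17, 3), (12, 4), (18, 5),
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

early, final = zip(*pairs)
# Positions are already ranks, so Pearson on them is a Spearman-style rank
# correlation; a value near -1 means the early leaders finished last.
print(f"Correlation: {pearson(early, final):.2f}")
```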



Are Predictions Getting Better?

If you’re the kind of person who cares about “The Folly of Prediction” and The Signal and the Noise, you may want to read Amy Zegart’s Foreign Policy piece about predictions. Making predictions within the intelligence community, for example, is a different game than betting on basketball:

In March Madness, everyone has access to the same information, at least theoretically. Expertise depends mostly on how geeky you choose to be, and how much time you spend watching ESPN and digging up past stats. In intelligence, however, information is tightly compartmented by classification restrictions, leaving analysts with different pieces of data and serious barriers to sharing it. Imagine scattering NCAA bracket information across 1,000 people, many of whom do not know each other, some of whom have no idea what a bracket is or the value of the information they possess. They’re all told if they share anything with the wrong person, they could be disciplined, fired, even prosecuted. But somehow they have to collectively pick the winner to succeed.

In other spheres, however, predictions just keep getting better. “Smart people are finding clever new ways of generating better data, identifying and unpacking biases, and sharing information unimaginable 20 or even 10 years ago,” writes Zegart.