Posts Tagged ‘predictions’

Making It

In 2009, I published a post on this blog about Sarah Dooley, which ended with the words:

I predict she is going to make it big. I’m not sure how, but remember you heard it here first.

A couple of months later, I received a letter out of the blue from a legit movie producer who said he was writing to thank me for that post because from it he learned how talented Sarah is.  He wrote that he had just signed a contract with her to develop a screenplay.

Fast forward to the present, and while I can’t find any movies made with Sarah’s scripts, I was happy to hear this NPR interview with her about an album she just released.  The album is called “Stupid Things.”  My favorite song, “Gym Looks Nice,” is a school dance tone poem that is characteristically moving.  My daughter tells me that at one point, it made it into the top twenty downloads on iTunes.  Sarah’s also released a music video of her song “Peonies.”



More Predictions That Didn't Come True

Thank you, Politico (the Magazine), for taking a look back at various predictions for 2013 to see how they worked out.

In our “Folly of Prediction” podcast, we discussed how the incentives to predict are skewed. Big, bold predictions that turn out to be true are handsomely rewarded; but predictions that turn out to be false are usually forgotten. With the cost of being wrong so low, the incentives to predict are high.

In his Politico piece called “Crystal Balderdash,” Blake Hounshell doesn’t let us forget the bad predictions. A few examples:



More Predictions, From Bad to Worse

Our “Folly of Prediction” podcast made these basic points:

Fact: Human beings love to predict the future.

Fact: Human beings are not very good at predicting the future.

Fact: Because the incentives to predict are quite imperfect — bad predictions are rarely punished — this situation is unlikely to change.

A couple of recent cases in point:

The National Oceanic and Atmospheric Administration predicted a particularly bad Atlantic hurricane season this year but, thankfully, was wrong, as noted by Dan Amira in New York magazine. It is hard to imagine that many people are unhappy about that.

Here, as noted by Ira Stoll in the New York Sun, are the picks by ESPN experts at the start of the 2013 baseball season. How bad were their picks?



Predicting the End of the Government Shutdown

Flip Pidot, co-founder and CEO of the American Civics Exchange, writes to let us know that the exchange has recently added cash prizes to its political prediction markets and is currently running two parallel government shutdown prediction markets, allowing for an interesting experiment:

At American Civics Exchange, we’ve just begun to implement cash prizes in our political prediction market (a sort of interim maneuver on our way to regulated exchange-traded futures).

For the government shutdown, we’re running two parallel markets – one in which traders buy and sell different shutdown end dates (with play money), yielding an implied odds curve and consensus prediction (below), and another in which traders simply log their best guess as to exact date and time of resolution (with no visibility into others’ guesses), with the closest prediction winning the real-money pot.
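To make that first market concrete, here is a minimal sketch, in Python, of how normalized contract prices become an implied odds curve and a consensus date. All of the prices and dates below are invented for illustration, not actual exchange data:

```python
# A toy sketch of the play-money market's mechanics -- not the exchange's
# actual code. Prices are hypothetical last-trade values for contracts on
# each possible shutdown end date.
from datetime import date, timedelta

prices = {
    date(2013, 10, 14): 8.0,
    date(2013, 10, 16): 21.0,
    date(2013, 10, 17): 34.0,
    date(2013, 10, 18): 22.0,
    date(2013, 10, 21): 15.0,
}

total = sum(prices.values())
implied_odds = {d: p / total for d, p in prices.items()}  # the odds curve

# One way to read off a consensus prediction: the probability-weighted date.
epoch = min(prices)
mean_days = sum(odds * (d - epoch).days for d, odds in implied_odds.items())
consensus = epoch + timedelta(days=round(mean_days))

print("consensus date:", consensus)
print("most likely date:", max(implied_odds, key=implied_odds.get))
```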



Try Your Hand at Economic Forecasting

Think you can do a better job at predicting the economic future than all those economists and pundits?  Here’s your chance to prove it:

Members of the public are being encouraged to take on the Bank of England by betting on the U.K.’s future inflation and unemployment rates.

Free-market think tank the Adam Smith Institute on Wednesday launched two betting markets in an attempt to use the “wisdom of crowds” to beat the Bank of England’s official forecasters. Punters can place bets on what the rate of both U.K. inflation and unemployment will be on June 1, 2015.

Sam Bowman, the research director of the Adam Smith Institute, believes the new markets will “out-predict” official Bank of England predictions.  “If these markets catch on, the government should consider outsourcing all of its forecasts to prediction markets instead of expert forecasters,” he said.
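As a toy illustration of the “wisdom of crowds” mechanism (not the Adam Smith Institute’s actual market), here is how a crowd forecast might be read off a pile of bets; every number below is invented:

```python
# Aggregate many punters' guesses and compare the crowd's median with an
# official point forecast. All figures are hypothetical.
from statistics import median

punter_guesses = [1.9, 2.1, 2.4, 2.6, 2.8, 3.0, 3.3, 3.5]  # CPI inflation, %
official_forecast = 2.0                                     # also hypothetical

crowd_forecast = median(punter_guesses)
print(f"crowd median: {crowd_forecast:.1f}%  vs  official: {official_forecast:.1f}%")
# Once the true June 2015 rate is known, whichever number is closer "wins".
```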



Looking for a Long Shot: My Belmont Predictions

I whiffed on the Kentucky Derby and caught lightning in a bottle at the Preakness.

Let’s see if I can do it again.

All eyes are on Orb and Oxbow, the winners of the first two legs of the Triple Crown. Those two horses are likely to be heavy betting favorites in the Belmont. And according to my model, they look okay, but not attractive at the odds they will go off at.

Instead, my numbers suggest a trio of long shots are the place to put your money: Palace Malice, Overanalyze, and Golden Soul. Each of those horses should pay about 15-1 if they were to pull off an upset victory.
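For readers who don’t speak racetrack: fractional odds of 15-1 mean a winning $1 bet returns $16. Here is a quick sketch of the arithmetic, including the naive implied win probability (which ignores the track’s takeout):

```python
# Back-of-the-envelope odds arithmetic for the long shots mentioned above.
def payout(stake, odds_against):
    # Fractional odds: "15-1" means odds_against = 15.
    return stake * (odds_against + 1)

def implied_win_probability(odds_against):
    # Naive conversion; real parimutuel odds also embed the track's takeout.
    return 1.0 / (odds_against + 1)

print(payout(2.00, 15))                      # a $2 win bet returns $32.00
print(f"{implied_win_probability(15):.1%}")  # about a 6.2% implied chance
```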



Redemption at the Preakness

I made a mess out of this year’s Kentucky Derby.  The worst part is that a bunch of friends placed bets using my picks, collectively losing a large stack of money.  

After the Kentucky Derby, I blogged about the misery, noting what a strange race the Derby was:

The race is 1.25 miles long and there were 19 horses in the race. Of the eight horses who were in the front of the pack after one-fourth of a mile, seven ended up finishing in back: 12th, 14th, 15th, 16th, 17th, 18th, 19th. Only one horse that trailed early also finished poorly, and that horse started terribly and was way behind the field from the beginning. In contrast, the horses who ended up doing well were in 16th, 15th, 17th, 12th, and 18th place early on in the race. Basically, there was a nearly perfect negative correlation between the order of the horses early in the race and the order of the horses at the end of the race!
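If you want to check a claim like that yourself, the natural tool is a rank correlation between early position and finishing position. Here is a minimal sketch with made-up positions, not the actual Derby chart:

```python
# Correlate each horse's position after a quarter mile with its final
# finishing position. The positions below are illustrative only.
early  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]   # position at the quarter-mile pole
finish = [9, 10, 8, 7, 6, 5, 4, 2, 3, 1]   # final finishing position

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Because positions are already ranks, Pearson on them equals Spearman's rho.
print(f"rank correlation: {pearson(early, finish):+.2f}")  # strongly negative
```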



Are Predictions Getting Better?

If you’re the kind of person who cares about “The Folly of Prediction” and The Signal and the Noise, you may want to read Amy Zegart’s Foreign Policy piece about predictions. Making predictions within the intelligence community, for example, is a different game than betting on basketball:

In March Madness, everyone has access to the same information, at least theoretically. Expertise depends mostly on how geeky you choose to be, and how much time you spend watching ESPN and digging up past stats. In intelligence, however, information is tightly compartmented by classification restrictions, leaving analysts with different pieces of data and serious barriers to sharing it. Imagine scattering NCAA bracket information across 1,000 people, many of whom do not know each other, some of whom have no idea what a bracket is or the value of the information they possess. They’re all told if they share anything with the wrong person, they could be disciplined, fired, even prosecuted. But somehow they have to collectively pick the winner to succeed.

In other spheres, however, predictions just keep getting better. “Smart people are finding clever new ways of generating better data, identifying and unpacking biases, and sharing information unimaginable 20 or even 10 years ago,” writes Zegart.




What's Wrong With Punishing Bad Predictions?

In the heat of a Presidential campaign, it can be hard to pay attention to other news. But a small-seeming story out of Italy yesterday has, to my mind, the potential to shape the future as much as a Presidential election.

As reported by ABC, the BBC, the Wall Street Journal, the New York Times, and elsewhere, an Italian court has convicted seven earthquake experts of failing to appropriately sound the alarm bell for an earthquake that wound up killing more than 300 people in L’Aquila in 2009. The experts received long prison sentences and fines of more than $10 million. (Addendum: Roger Pielke Jr. discusses the “mischaracterizations” of the verdict.)

There is of course the chance that the verdict will be thrown out upon appeal, discredited as an emotional response to a horrible tragedy. 



Lying to Ourselves (Ep. 97)

Our latest Freakonomics Radio on Marketplace podcast is called “Lying to Ourselves.” (You can download/subscribe at iTunes, get the RSS feed, or listen via the media player in the post.) 

The episode was inspired by a recent poll I saw on Yahoo! Finance.

Does anyone believe for a minute that this many people would actually leave the U.S. if taxes (whatever that means, exactly) were to rise to 40 percent or even 70 percent?



Bring Your Questions for FiveThirtyEight Blogger Nate Silver, Author of The Signal and the Noise

Nate Silver first gained prominence for his rigorous analysis of baseball statistics. He became even more prominent for his rigorous analysis of elections, primarily via his FiveThirtyEight blog. (He has also turned up on this blog a few times.)

Now Silver has written his first book, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t. I have only read chunks so far but can already recommend it. (I would like to think his research included listening to our radio hour “The Folly of Prediction,” but I have no idea.)

A section of Signal about weather prediction was recently excerpted in the Times Magazine. Relatedly, his chapter called “A Climate of Healthy Skepticism” has already been attacked by the climate scientist Michael Mann. Given the stakes, emotions, and general unpredictability that surround climate change, I am guessing Silver will collect a few more such darts. (Yeah, we’ve been there.)



Paying for "Transparently Useless Advice"

According to a new study, people do. Even when they know that the advice is useless.

Researchers Nattavudh Powdthavee and Yohanes E. Riyanto investigated why people pay for advice about the future, particularly since the future is generally unpredictable (see our  “Folly of Prediction” podcast on this topic). Their starting point:

Why do humans pay for advice about the future when most future events are predominantly random? What explains, e.g., the significant money spent in the finance industry on people who appear to be commenting about random walks, payments for services by witchdoctors, or some other false-expert setting?
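Their point about random walks is easy to demonstrate. In the sketch below (my illustration, not the authors’ code), an “expert” who predicts by extrapolating the last move of a random walk does no better than a coin flip:

```python
# If a series is a random walk, past observations carry no information
# about the next step, so advice based on the past can't beat a naive guess.
import random

random.seed(0)
steps = [random.choice([-1, 1]) for _ in range(10_000)]

# "Expert" strategy: predict that the next step will repeat the last one.
expert_hits = sum(steps[i] == steps[i - 1] for i in range(1, len(steps)))
print(f"expert hit rate: {expert_hits / (len(steps) - 1):.3f}")  # ~0.5, a coin flip
```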



Solving Problems in the Real World

I owe my favorite local bookstore, the Harvard Bookstore, for making another day for me. Wandering the tall, packed shelves on a warm and breezy evening, I ran across Schaum’s Outline of Principles of Economics. One subtitle on the cover: “964 fully solved problems.” The problems include, for example (from page 50): “True or false: As used in economics, the word demand is synonymous with need,” or “True or false: A surplus exists when the market price is above the equilibrium price.”

I didn’t find myself longing for either answer.

Instead, as the U.S. mortgage market has, as James Kunstler predicted on October 10, 2005, imploded “like a death star” and dragged “every tradable instrument known to man into the quantum vacuum of finance that it create[d],” as euros flee from Greece, and as bank loans dry up in Spain, I wished that the 964 fully solved problems included one or two of the real problems.



Déjà Vu All Over Again

The same folks who stunned the world in 1972 with a prediction that economic growth would soon cease because of resource constraints are back again, predicting resource constraints will lead to global depression in 2030.  Growth did not end by 1990, and it will not end in 2030.  As before, prices will change to make economizing on increasingly scarce resources good business policy; and, as before, technology will change to lead businesses and consumers to substitute away from relatively scarce resources. 

The interesting question is why this same nonsense continues to get so much attention.  Is it that people forget the absurdities of the past arguments? Or do we have a substantial, never-satisfied demand for schadenfreude? Regardless, this stuff is just as bad economics as it was when The Limits to Growth first appeared.



In Defense of Two-Handed Economists

My latest Bloomberg View column with Betsey Stevenson is now online:

Here’s something you don’t often hear an economist admit: We have very little idea where the economy will be next year.

Truth be told, our best guesses just aren’t very good. Government forecasts regularly go awry. Private-sector economists and cutting-edge macroeconomic models do even worse.

Our objective isn’t to beat up economists. Rather, we want to make the point that when we recognize our shortcomings, we’re forced to confront the enormous uncertainty that lies ahead.  And appropriate humility about the economy changes how we think about policy.



False Positive Science: Why We Can't Predict the Future

This is a guest post from Roger Pielke Jr., a professor of environmental studies at the University of Colorado at Boulder. Check out Pielke’s blogs for more on the perils of predicting and “false positive science.”

Sports provide a powerful laboratory for social science research. In fact, they can often be a better place for research than real laboratories because sports provide a controlled setting in which people make frequent, real decisions, allowing for the collection of copious amounts of data. For instance, last summer, Daniel Hamermesh and colleagues used a database of more than 3.5 million pitches thrown in major league baseball games from 2004 to 2008 to identify biases in umpire, batter, and pitcher decision making. Similarly, Devin Pope and Maurice Schweitzer from the Wharton School used a dataset of 2.5 million putts by PGA golfers over five years to demonstrate loss aversion: golfers made more of the same-length putts when putting for par or worse than for birdie or better.  Such studies tell us something about how we behave and make decisions in settings outside of sports as well.
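The logic of the golf study is easy to sketch: group putts by length and by whether they were for par or for birdie, then compare make rates. The toy data below are invented, not Pope and Schweitzer’s:

```python
# A sketch of the comparison behind the loss-aversion finding: for putts
# of the same length, compare make rates for par versus birdie attempts.
putts = [
    # (length_ft, situation, made)
    (8, "par", True), (8, "par", True), (8, "par", False),
    (8, "birdie", True), (8, "birdie", False), (8, "birdie", False),
    # ...the real dataset has millions of rows across many lengths
]

def make_rate(situation):
    made = [m for _, s, m in putts if s == situation]
    return sum(made) / len(made)

print(f"par: {make_rate('par'):.0%}  birdie: {make_rate('birdie'):.0%}")
# Loss aversion predicts the par make rate is higher at every putt length.
```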



Who Will Win the Most Medals in the 2012 Summer Olympics?

Dan Johnson, an economist at Colorado College, has been predicting Olympic medal counts for years with a model that uses metrics like population count, income per capita, and home-country advantage. In the past six Olympics, his model has a correlation of 93 percent between predictions and actual medal counts, and 85 percent for gold medals.
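Johnson’s actual model isn’t published here, but the general recipe (regress past medal counts on those metrics, then plug in the next Games) might look roughly like this sketch, with every number invented:

```python
# Not Dan Johnson's model -- just the general shape of the approach.
import numpy as np

# Rows are past country-Games observations; columns are log(population),
# log(GDP per capita), and a home-country dummy. All values are placeholders.
X = np.array([
    [19.5, 10.7, 0],
    [21.0,  8.4, 1],
    [18.8, 10.2, 0],
    [17.9, 10.5, 0],
    [18.2,  9.9, 0],
    [19.0, 10.1, 1],
])
y = np.array([110, 100, 73, 47, 28, 48])     # medal counts, also invented

X1 = np.column_stack([np.ones(len(X)), X])   # add an intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

london = np.array([1, 19.6, 10.8, 0])        # a country at the next Games
print(f"predicted medals: {london @ beta:.0f}")
```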

For the Games in London this summer, Johnson predicts that the U.S. will be the top medal winner, followed by China, Russia, then Britain — the same order they finished in the 2008 Beijing Olympics.



The Least Fun Way to Predict a Super Bowl Winner

From Elizabeth Stanton at Bloomberg:

The New England Patriots will win the Super Bowl by at least three points even though the New York Giants have the appeal of “a cocktail party stock,” according to a quantitative money management firm that’s correctly picked the team covering the point spread for eight consecutive years.

Analytic Investors LLC in Los Angeles has documented a tendency on the part of Super Bowl bettors to overestimate the chances of the team that rewarded them more during the regular season — the team with the higher alpha, in investment parlance. In 2008, that was the favored Patriots, who lost to the Giants 17-14. This year, it’s New York.

“Everyone thinks the Giants are rolling right now, a lot of people in my office even,” said Matthew Robinson, a portfolio analyst for global and Japanese equities at Analytic and the author of this year’s analysis. “They like the Giants, but they have faith in the model as well.”
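For the curious, “alpha” in this context can be proxied crudely: over a season, how often did a team reward its bettors by covering the point spread? Here is a sketch with invented games, not Analytic’s model:

```python
# A rough proxy for a team's "alpha" to bettors: its cover rate against
# the point spread. Margins and spreads below are invented.
games = [
    # (margin_of_victory, point_spread) -- spread is negative when favored
    (+7, -3.5), (+3, -6.5), (+10, -4.0), (-4, -2.5), (+1, +3.0),
]

covers = sum(margin + spread > 0 for margin, spread in games)
print(f"cover rate: {covers / len(games):.0%}")
# The team that "rewarded bettors more" is the one with the higher cover rate.
```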

On the other hand, do I label this “the least fun way” because I have a Giants bias and am blind to my blindness?

At least this is less ridiculous than the Super Bowl Indicator.



The Folly of Prediction, Cont'd.

Our “Folly of Prediction” podcast included an interview with Joe Prusacki, who directs the statistics division at the USDA’s National Agricultural Statistics Service. This means he helps make crop forecasts (read a primer here). As hard as the USDA works, the fact is that predicting the future of even something as basic as crop yield can be maddeningly difficult. The Wall Street Journal has the latest in an article headlined “Erroneous Forecasts Roil Corn Market”:

Government reports about the U.S. corn crop have become increasingly unreliable of late, contributing to wild swings in corn prices, a Wall Street Journal analysis shows.

Over the past two years, the Department of Agriculture’s monthly forecasts of how much farmers will harvest have been off the mark to a greater degree than any other two consecutive years in the last 15, according to a Journal analysis of government data. This year’s early-season forecasts also appear to have been way off. The next monthly report is due on Friday.



The Downside of Research: How Small Uncertainties Can Lead to Big Differences

Contrary to popular perception, most research yields very few conclusions with 100 percent certainty. That’s why you’ll often hear economists state their conclusions with “95 percent certainty.” It means they’re pretty sure, but there’s still a small margin for error. The science of climate change is no different, and, according to a Washington Post blog post, scientists are currently struggling with how to explain that uncertainty to the public. “What do you do when there’s a small but real chance that global warming could lead to a catastrophe?” asks Brad Plumer. “How do you talk about that in a way that’s useful to policymakers?”
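For readers who want the mechanics: “95 percent certainty” usually refers to a 95 percent confidence interval around an estimate. Here is a minimal sketch with toy numbers:

```python
# A 95% confidence interval for a mean, from hypothetical measurements.
from statistics import mean, stdev

sample = [2.1, 2.4, 1.9, 2.6, 2.2, 2.8, 2.0, 2.5]
n = len(sample)
se = stdev(sample) / n ** 0.5   # standard error of the mean
z = 1.96                        # large-sample multiplier; small samples want a t value
lo, hi = mean(sample) - z * se, mean(sample) + z * se

print(f"estimate {mean(sample):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
# The catch for climate policy: the leftover 5 percent includes the tails,
# and the tails are where the catastrophes live.
```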



The 21st Century: Another American Century? Don't Bet on It

In conjunction with our latest Freakonomics Radio podcast, “The Folly of Prediction,” I decided to reach out to a former professor of mine, Raymond Horton, whose modern political economy class is a student favorite at Columbia Business School. I wanted to know what Horton thought the worst prediction ever was, particularly regarding the intersection of politics and economics. He immediately pointed to a Foreign Affairs essay written by Mortimer Zuckerman in 1998, in which Zuckerman boldly lays out the case that, like the 20th century, the 21st will also be marked by American dominance.
We’re barely a decade into the new century, so you may think it’s too early to pass judgment on Zuckerman’s prediction. But given the way things have played out over the last several years, it does look to be on shaky ground. At least that’s the opinion of Ray Horton.
Once you’ve finished reading Horton’s essay, we’d love to hear what you think count as some of the worst predictions ever.



An Algorithm that Can Predict Weather a Year in Advance

In our latest podcast, “The Folly of Prediction,” we poke fun at the whole notion of forecasting. The basic gist: whether it’s Romanian witches or Wall Street quant wizards, we love to predict things, but we’re generally terrible at it. (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.)
But there is one emerging tool that’s greatly enhancing our ability to predict: algorithms. Toward the end of the podcast, Dubner talks to Tim Westergren, a co-founder of Pandora Radio, about how the company’s algorithm is able to predict what kind of music people want to hear by breaking songs down to their basic components. We’ve written a lot about algorithms and the potential they have to vastly change our lives through customization, and perhaps satisfy our demand for predictions with some robust results.
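Pandora hasn’t published its system, but the flavor of the idea is easy to sketch: score each song on a list of musical traits, then recommend the candidates whose trait vectors sit closest to songs a listener already likes. Everything below is invented:

```python
# A sketch of component-based music matching -- not Pandora's actual code.
from math import sqrt

songs = {
    # Hypothetical trait scores, e.g. [tempo, distortion, vocals, syncopation]
    "liked_song":  [0.9, 0.2, 0.7, 0.1],
    "candidate_a": [0.8, 0.3, 0.6, 0.2],
    "candidate_b": [0.1, 0.9, 0.2, 0.8],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

liked = songs["liked_song"]
for name in ("candidate_a", "candidate_b"):
    print(name, round(cosine(liked, songs[name]), 3))  # candidate_a scores higher
```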
One of the first things that comes to mind when people hear the word forecasting is the weather. Over the last few decades, we’ve gotten much better at predicting the weather. But what if, through algorithms, we could extend our range of accuracy and, say, predict the weather up to a year in advance? That’d be pretty cool, right? And probably worth a bit of money too.
That’s essentially what the folks at a small company called Weather Trends International are doing. The private firm, based in Bethlehem, PA, uses technology first developed in the early 1990s to project temperature, precipitation, and snowfall trends up to a year ahead, all around the world, with more than 80% accuracy.
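A claim like “more than 80% accuracy” only means something once you fix a scoring rule. One common choice, sketched here with invented temperatures: count a forecast as a hit if it lands within a few degrees of what actually happened.

```python
# Accuracy as the share of forecasts within a tolerance of the outcome.
forecast = [71, 68, 75, 80, 62, 59, 66]   # daily highs predicted a year out
actual   = [73, 70, 69, 78, 64, 57, 71]   # what the thermometer later said

TOLERANCE = 3  # degrees
hits = sum(abs(f - a) <= TOLERANCE for f, a in zip(forecast, actual))
print(f"accuracy: {hits / len(forecast):.0%}")
```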



Italian Seismologists Charged with Manslaughter for Not Predicting Earthquake

In our latest Freakonomics Radio podcast, “The Folly of Prediction,” we talk about the incentives behind making predictions, and how wrong predictions often go unpunished, which is why people make so many of them.
But recent news out of Italy seems to take the premise of punishing bad predictions a bit too far. From the New York Times:

Seven Italian seismologists and scientists went on trial on manslaughter charges on Tuesday, accused of not adequately warning residents of a central Italian region before an earthquake that killed 309 people in April 2009. Prosecutors say that the seven defendants, members of a national panel that assesses major risks, played down the risk of a major earthquake’s occurring even though there had been significant seismic activity near L’Aquila, the capital of the Abruzzo region, in the months before the quake.



Am I Good Enough to Compete In a Prediction Tournament?

Last spring, we posted on Phil Tetlock’s massive prediction tournament: Good Judgment. You might remember Tetlock from our latest Freakonomics Radio podcast, “The Folly of Prediction.” (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.)
Tetlock is a psychologist at the University of Pennsylvania, well-known for his book Expert Political Judgment, in which he tracked 80,000 predictions over the course of 20 years. Turns out that humans are not great at predicting the future, and experts do just a bit better than a random guessing strategy.
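Tournaments like Good Judgment typically score forecasters with the Brier score: the mean squared gap between the stated probability and what actually happened, where 0 is perfect and an always-say-50-50 strategy earns exactly 0.25. Here is a minimal sketch with invented forecasts:

```python
# Brier scores for an "expert" versus a coin-flip baseline.
def brier(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

outcomes     = [1, 0, 1, 1, 0, 0, 1, 0]   # what actually happened
expert       = [0.7, 0.4, 0.6, 0.8, 0.3, 0.4, 0.5, 0.2]
coin_flipper = [0.5] * 8

print(f"expert:    {brier(expert, outcomes):.3f}")        # lower (better) than 0.250
print(f"coin flip: {brier(coin_flipper, outcomes):.3f}")  # exactly 0.250
```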



Picking the NFL Playoffs: How the Experts Fumble the Snap

Our latest Freakonomics Radio podcast, “The Folly of Prediction,” is built around the premise that humans love to predict the future, but are generally terrible at it. (You can download/subscribe at iTunes, get the RSS feed, or read the transcript here.) But predictions about world politics and the economy are hard — there are so many moving parts.
In the podcast, you’ll hear from Freakonomics researcher Hayes Davenport, who ran the numbers for us on how accurate expert NFL picks have been for the last three years. He put together a guest post for us on football predictions.
As careers in journalism go, making preseason NFL predictions is about as safe as they come these days. The picks you make in August can’t be reviewed for four months, and by that time almost nobody remembers or cares what any individual picker predicted. So when Stephen asked me to look at the success rate of NFL experts in predicting division winners at the beginning of the season, I was excited to look back at the last few years of picks and help offer this industry one of its first brushes with accountability.



Freakonomics Poll: When It Comes to Predictions, Whom Do You Trust?

Our latest Freakonomics Radio podcast, “The Folly of Prediction,” is built around the premise that humans love to predict the future, but are generally terrible at it. (You can download/subscribe at iTunes, get the RSS feed, listen live via the media player above, or read the transcript here.)
There are a host of professions built around predicting some future outcome: from predicting the score of a sports match, to forecasting the weather for the weekend, to being able to tell what the stock market is going to do tomorrow. But is anyone actually good at it?



This Week in Corn Predictions: The USDA Got it Right (Almost)

We’ve been having some fun recently at the expense of people who like to predict things. In our hour-long Freakonomics Radio episode “The Folly of Prediction” — which will be available as a podcast in the fall — we showed that humans are lousy at predicting just about anything: the weather, the stock market, elections. In fact, even most experts are only marginally better than a coin flip at determining a future outcome. And yet there remains a huge demand for professional predictors and forecasters.
Earlier this week, Stephen Dubner and Kai Ryssdal chatted about this on the Freakonomics Radio segment on Marketplace. The question remains: “Should bad predictions be punished?”
As mentioned in the segment, the U.S. Department of Agriculture’s August crop yield report came out today. The result? Not bad actually. The corn yield forecast was revised downward by just 1.3% from its estimate last month. That’s a considerable improvement over last year’s big miss, when the August corn yield report had to be revised downward by almost 7%.
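The revision arithmetic is simple enough to fit in a few lines; the bushels-per-acre figures below are round placeholders chosen to reproduce a 1.3 percent downward revision, not the actual USDA numbers:

```python
# Percent revision between two successive yield estimates.
def revision_pct(previous, current):
    return (current - previous) / previous * 100

print(f"{revision_pct(158.7, 156.6):+.1f}%")  # about a -1.3% revision
```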



Should Bad Predictions Be Punished?

What do Wall Street forecasters and Romanian witches have in common? They usually get away, scot-free, with making bad predictions. Our world is awash in poor prediction — but for some reason, we can’t stop, even though accuracy rates often barely beat a coin toss.
But then there’s the U.S. Department of Agriculture’s crop forecasting. Predictions covering a big crop like corn (U.S. farmers have planted the second largest crop since WWII this year) usually fall within five percent of the actual yield. So how do they do it? Every year, the U.S.D.A. sends thousands of enumerators into cornfields across the country where they inspect the plants, the conditions, and even “animal loss.”
This week on Marketplace, Stephen J. Dubner and Kai Ryssdal talk about the supply and demand of predictions. You’ll hear from Joseph Prusacki, the head of the U.S.D.A.’s Statistics Division, who’s gearing up for his first major crop report of 2011 (the street is already “sweating” it); Phil Friedrichs, who collects cornfield data for the USDA; and our trusted economist and Freakonomics co-author Steven Levitt.



Surprise, Surprise: The Future Remains Hard to Predict

“There is a huge discrepancy between the data and the forecasts.”
In what realm do you think this “huge discrepancy” exists? The financial markets? Politics? Pharmaceutical research?
Given how bad humans are at predicting the future, this discrepancy could exist just about anywhere. But the above quote, from the University of Alabama-Huntsville climate scientist Roy Spencer, is talking about computer models that predict global warming: