A gentleman named J.D. Eggleston recently wrote to us with a rather interesting report, a nice piece of D.I.Y. Freakonomics concerning the accuracy of local T.V. weather forecasts. I thought it was interesting enough to post in its entirety here on the blog, and I hope you agree. Before we get to the report itself, here is a little background information from Eggleston himself:
I live with my wife and two kids, 15 and 12, in rural northwest Missouri. I earned a bachelor’s in electronics engineering technology from DeVry University in 1987. I’ve been an electronics engineer, software engineer, and for the past 13 years I’ve owned and operated a consumer electronics retail business.
I’ve always loved math and statistics and the information that can be learned from studying them. Growing up, I was told that people like me were called “weird.” Since reading Freakonomics, I now know they are called “economists.” It’s good to know I’m not alone.
The forecasting study began in April of 2007 when my fifth-grade daughter was given a school assignment to monitor the temperature and rainfall at our home for a week. Our family members are big T.V. watchers, and our house is loaded with the latest D.V.R. (digital video recorder) technology. So we decided to document not only the weather results at our home, but also to record the 10 p.m. newscasts for channels 4, 5, 9, and 41 and compare our home results to those reported by the Kansas City T.V. stations.
And while we were at it, we decided to also document each station’s weather predictions and compare them to the actual results to see if one station was better than the others. For a non-T.V. weather source, we also recorded the predictions of the federal government’s National Weather Service each evening.
And now for the report. The takeaway message? Do not plan your weekend activities based on the T.V. weather forecasts unless it is already Thursday — but waiting until Friday would be even better.
How Valid Are T.V. Weather Forecasts?
A Guest Post
By J.D. Eggleston
The authors of Freakonomics posed the question, “Do real estate agents have your best interests at heart?” Then they statistically showed they (the real estate agents) do not. So what about meteorologists? How accurate are their forecasts? Do they even care?
A seven-month study of weather forecasting at Kansas City television stations was conducted over 220 days, from April 22 to November 21, 2007. The seven-day forecasts for both high temperature and P.O.P. (probability of precipitation) from each station's 10 p.m. telecast and from the N.O.A.A. Web site were recorded. For stations that did not offer a P.O.P. as a percent likelihood, the best impression of percent likelihood that could be inferred from the meteorologists' words and graphics was used. The actual high temperature and rainfall reported at the K.C.I. airport weather station (the data that become the official weather record for Kansas City) were also recorded. Those results were then compared to the high-temperature and P.O.P. predictions to determine forecasting accuracy for each source at each of the seven days out.
The results were quite enlightening, as were some of the comments from the local meteorologists and their station managers. Here are a few of the quotes we received:
“We have no idea what’s going to happen [in the weather] beyond three days out.”
“There’s not an evaluation of accuracy in hiring meteorologists. Presentation takes precedence over accuracy.”
“All that viewers care about is the next day. Accuracy is not a big deal to viewers.”
All of the chief meteorologists were asked, “How close does your high-temperature prediction have to be to the actual temperature for you to feel like you did a good job?”
Without exception, all of the meteorologists answered, “within three degrees.”
The chart above shows the results of the stations' temperature-prediction accuracy for their full seven-day forecasts. For next-day predicting (one day out), all stations met their "within three degrees" goal. Two days out, all but one were within three degrees. But three days out and beyond, none of the forecasters met their three-degree benchmark, and in fact their accuracy worsened roughly linearly with each additional day.
The conclusion to be drawn here is not so much that one station is better than another, since all of them are similar in accuracy, and most people won't alter their plans over a couple of degrees of temperature. Rather, it is that none of the stations did a good job by their own definition of plus or minus three degrees beyond two days out.
Getting It Right the First Time
When we get our first predictions for, say, June 13th, it will be the seventh day of a seven-day forecast made on June 6th. The following day, it will be the sixth day out, then the fifth, then fourth, and so on until it is tomorrow’s forecast.
Have you ever noticed that the prediction for a particular day keeps changing from day to day, sometimes by quite a bit? The graph above shows how much the different stations change their minds about their own forecasts over a seven-day period.
On average, N.O.A.A. is the most consistent, but even its forecasts shift by more than six degrees of temperature and 23 percentage points of precipitation likelihood over a seven-day span.
The Kansas City television meteorologists change their minds by anywhere from 6.8 to nearly 9 degrees in temperature and 30 to 57 percentage points in precipitation, showing a distinct lack of confidence in their initial predictions.
The prize for the single most inconsistent forecast goes to Channel 5's Devon Lucie, who on Sunday, September 30 predicted a high temperature of 53 degrees for October 7, and seven days later changed it to 84 degrees, a difference of 31 degrees! It turned out to be 81 that day.
A close second was Channel 4's Mike Thompson's initial prediction of 83 for October 15, which he changed to 53 just two days later. It turned out to be 64 on the 15th.
Even more conclusively than the temperature accuracy graph, this prediction variance graph shows that 21st century meteorology is not developed enough to provide a week of accurate temperature forecasting.
Meteorologists take a blind stab at what the high temperature and rain possibilities might be seven days out, and then adjust their predictions on the fly as the week goes on. As mentioned earlier, one meteorologist told us: “We have no idea what’s going to happen beyond three days out.”
Will It Rain?
Precipitation will affect the average person’s plans more significantly than temperature. We rely on meteorologists to be accurate in their rainfall predictions so we can plan the events of our lives. Parades, gardening, ball games, outdoor work, car washing, construction work and farming are all affected — positively or negatively — by rain.
We could just assume it will not rain, but it would be nice to have a little heads-up. In measuring precipitation accuracy, the study assumed that if a forecaster predicted a 50 percent or higher chance of precipitation, they were saying it was more likely to rain than not. Less than 50 percent meant it was more likely to not rain.
That prediction was then compared to whether or not it actually did rain, where "rain" is defined as one-tenth of an inch or more of rainfall reported at K.C.I. Anything less than that is so slight that it would likely make no difference in people's lives.
The graph above shows that stations get their precipitation predictions correct about 85 percent of the time one day out and decline to about 73 percent seven days out.
On the surface, that does not seem too bad. But consider that a meteorologist who always predicted it would never rain would have been right 86.3 percent of the time. So to give viewers more certainty than simply assuming it will not rain, a meteorologist would have to beat 86.3 percent. Three of the forecasters reached about 87 percent at one day out, a hair over the threshold for success.
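The study's scoring rule and the no-rain baseline are easy to sketch. The snippet below is only an illustration of the method; the week of data is invented, and the 50-percent and 0.10-inch thresholds are the ones defined above.

```python
# Illustrative sketch of the study's precipitation scoring (made-up data).
# A forecast "calls rain" when its stated P.O.P. is 50 percent or higher;
# a day "rained" when at least 0.10 inch of rainfall was reported.

def accuracy(forecasts):
    """Fraction of days where the rain/no-rain call matched the outcome."""
    hits = sum((pop >= 50) == (inches >= 0.10) for pop, inches in forecasts)
    return hits / len(forecasts)

# A hypothetical week: (forecast P.O.P. in percent, actual rainfall in inches)
week = [(10, 0.0), (60, 0.25), (30, 0.0), (80, 0.15),
        (0, 0.0), (40, 0.5), (20, 0.0)]

print(accuracy(week))                                 # forecaster: 6 of 7 right
print(accuracy([(0, inches) for _, inches in week]))  # "never rain" baseline: 4 of 7
```

In this made-up week the forecaster beats the never-rain baseline; the study's point is that on real data, most forecasters did not.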
Other than that, no forecaster is ever better than just assuming it won’t rain. If you think that’s bad, sadly it gets worse:
The data for the precipitation accuracy graph were taken from all days of the study. On many of those summer days it was obvious there would be no rain, so those days were no challenge for the meteorologists. A better measure of a forecaster's skill is to exclude the days when there was clearly no chance of rain. After all, if you wanted to measure a golfer's putting skill, you would not have him attempt putts from six inches away from the cup. You would challenge him with putts from five to fifteen feet, putts that could readily be made or missed.
For that type of meteorologist test, we included only the days on which it either rained or the meteorologist predicted it would rain, thus eliminating the days when it clearly was not going to rain. The following graph shows the results.
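In code, the filter amounts to keeping only the days on which it rained or rain was called, and scoring just those. The numbers below are invented for illustration; the 50-percent and 0.10-inch thresholds are the ones defined earlier in the post.

```python
# Illustrative sketch of the "challenging days" filter (made-up data).
# Keep a day if it actually rained (>= 0.10 inch) or the forecaster called
# for rain (P.O.P. >= 50 percent); discard the clearly dry, uncalled days.

def challenging_days(forecasts):
    return [(pop, inches) for pop, inches in forecasts
            if inches >= 0.10 or pop >= 50]

def accuracy(forecasts):
    """Fraction of days where the rain/no-rain call matched the outcome."""
    hits = sum((pop >= 50) == (inches >= 0.10) for pop, inches in forecasts)
    return hits / len(forecasts)

# A hypothetical stretch of mostly dry days:
days = [(0, 0.0), (0, 0.0), (70, 0.0), (20, 0.3), (60, 0.4), (0, 0.0)]

hard = challenging_days(days)
print(accuracy(days))  # 4 of 6 right across all days
print(accuracy(hard))  # but only 1 of 3 on the days that mattered
```

The easy dry days inflate the all-days score, which is exactly why the filtered test is the tougher and fairer one.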
Because conditions for rain on these days were more likely and more challenging to predict, we lowered our benchmark for success on this test from 86.3 percent to 50 percent. Sadly, four of the five stations topped the 50 percent goal only on their next-day forecast.
For all days beyond the next day out, viewers would be better off flipping a coin to predict rainfall than trusting the stations on days where rain was possible. Oddly, N.O.A.A. — which had been one of the better forecasters in our other evaluations — was the worst in this one, especially when predicting three days out and beyond.
When N.O.A.A. meteorologist Noelle Runyan was questioned about this, she stated, “Our forecasts are more conservative than the television stations. We raise our P.O.P. predictions to over 50 percent only when we are sure of rain.” This statement and the data above are another illustration of how — with the data and tools given to them — today’s meteorologists cannot confidently predict the weather beyond three days out.
Have you ever wondered whether the forecast you get from the weekend meteorologist or vacation replacement is as good as the chief meteorologist's? Many people do, so beginning July 5 we compared the accuracy of each station's chief meteorologist to that of the weekend replacement.
Because this comparison did not begin until July 5, the numbers in the table below may not match the numbers published for the station-to-station comparisons elsewhere in this report. For each pairing below, the top name is a station's chief meteorologist, and the second is his or her backup. Here is how the individual meteorologists fared.
At Channel 4, Mike Thompson's weekend man is Joe Lauria. From the table above, we can see that Lauria is actually better than Thompson in temperature accuracy, by about 0.5 to 2.5 degrees across the seven-day range. Regarding precipitation, Thompson is slightly better than Lauria one or two days out, but Lauria is more accurate three to seven days out, and on the challenging days.
At Channel 5, Katie Horner's weekend replacement is Devon Lucie. As with Channel 4, it appears Channel 5's weekend forecasts are more accurate for both temperature and precipitation, but only slightly.
At Channel 9, Pete Grigsby is the weekend man for Bryan Busby. Here, Busby is better at precipitation and at one to three days out on temperature. Grigsby is better four to seven days out on temperature.
Channel 41's weekend weatherman is Jeremy Nelson, who backs up chief meteorologist Lezak. When it comes to temperature, Nelson is not as good as Lezak one or two days out, but is better at longer range. For precipitation, the two are pretty even.
The New and Improved Weather
Back in the 1990s, in an episode of the television show L.A. Law, a nerdy but effective meteorologist sued his former employer for firing him and hiring a comedian to do the weather. While none of Kansas City's meteorologists are uneducated stand-up comics, there does seem to be an unfortunate emphasis on style over substance.
When station managers were asked about this, one said, “There’s not an evaluation of accuracy in hiring meteorologists. Presentation takes precedence over accuracy.” And when discussing accuracy (or the lack thereof) of a seven-day forecast, another station manager stated, “All viewers care about is the next day. Accuracy is not a big deal to viewers.”
When weather events occur that really are news — flooding, tornadoes, ice storms — all of the Kansas City meteorologists do an excellent job of informing their viewers, as do most forecasters across the country. Likewise, the stations allow their meteorologists ample time to report these serious weather events, be it in their 5, 6, or 10 p.m. telecasts, or by interrupting regular programming when necessary.
One of the two major weaknesses in television meteorology today is the “non-event” days — the boring, run-of-the-mill days when no significant weather events are upcoming. It is unfortunate that 13 percent of each news telecast (actually about 20 percent if you discount the commercials) is dedicated to a weather forecast that is mostly time-consuming fluff.
The meat of such forecasts could easily be condensed to one minute or less, or maybe even a crawl at the bottom of the screen that runs for the full telecast. Reduction of the weather segment on days when there is no weather news would allow for more thorough reporting of world, national, and local news.
The other major weakness is that ratings drive television. Sadly, the data show that stations are so consumed with ratings that accuracy in weather predictions takes an irrelevant back seat to snappy patter and charm. When directly asked if accuracy mattered in forecasting, every station manager and meteorologist said it did. But when asked what steps they had taken to measure and ensure accuracy, they were without answers.
No meteorologist or television station kept records of what they predicted, nor compared their predictions to actual results over a long term. No meteorologist posts their accuracy statistics on their résumé. No station managers use accuracy statistics in the hiring or evaluation of their meteorologists.
Instead, the focus is on charm, charisma, and presentation. Their words say they care about accuracy, but their actions say they do not. Yet, they wish to continue providing inaccurate seven-day forecasts that are no more than a semi-educated shot in the dark because a) their competitors do and b) they can get away with it since they think the public does not know how inaccurate they are.
Until the public demands change in the form of lost ratings from this hollow practice of “placebo forecasting,” T.V. weather forecasts will continue to blow smoke up our … upper-level-lows.
Until this change comes to pass, we must take what we see on T.V. with a grain (or perhaps block) of salt. And if you really want to know what weather will occur in Kansas City tomorrow, find out what happened in Denver today.