Guest Blog: Who’s to Blame for Inaccurate Election Polls?

A few days ago, I blogged about how pre-election polls have historically overstated a minority candidate’s standing, but how that gap seems to be shrinking. In other words, according to the Pew Research Center article I cited, people used to lie to pollsters about their willingness to vote for a minority candidate, but now they do so less often.

This is an issue I’ve spoken about at some length with Gary Langer, the director of polling at ABC News. I’ve gotten to know Gary thanks to our occasional Freakonomics work with Good Morning America, 20/20, and World News Tonight. Gary is a force of nature. He not only runs ABC’s polling but has become the network’s top cop for keeping bad data off the air, vetting many of the surveys, studies, and polls that producers and reporters plan to use in their stories. I don’t know of any other news organization that has such a resource. I am sure he is occasionally a thorn in the side of a reporter who’s dying to cite some sensationalistic study from some biased organization … but as consumers of news, we are all the better for it.

Anyway, I wrote to Gary after my polling post to seek out his wisdom. Here’s his reply. The short answer: He doesn’t buy it. The long answer: I think you’ll see from this guest post why I think Gary Langer is perhaps one of the most valuable people in American journalism today.

I’m skeptical of the notion that survey respondents lie about their voting intentions — or about much of anything else. When a pollster produces a bad estimate in a pre-election survey, blaming the respondent is too easy an out. The reality is that pre-election polling relies on accurately modeling who is or isn’t going to vote. It’s plenty likely to be bad modeling — not lying respondents — that causes the estimate to be blown. To accept that lying caused a bad estimate, we need more than a postulate; we need consistent empirical data. Six elections from 15 to 25 years ago hardly suffice. In fact, nearly all the pre-election surveys in those contests carried too many undecideds for good polling (a function of polling techniques) and were completed too far from Election Day; several also were of undetermined quality on other methodological fronts. (It’d be especially interesting to see the undecideds in these polls broken down by race and other variables).

As Pew notes, moreover, Gallup’s pre-primary poll in the 1992 Moseley Braun race actually overstated her white opponent’s support — the opposite of the postulated effect. This fall’s pre-election polls in all five of the biracial governor and U.S. Senate contests were accurate. Rather than a decline in lying, mightn’t we instead be seeing increasing sophistication in likely voter modeling, as well as improved polling techniques more generally, from sampling to interviewer training? Yet the theory of lying voters lives on.

Part of its longevity may stem from its implication of cultural superiority — the assumption that what you or I or anyone else perceives to be the “right” or politically correct answer is the one that other people will feel compelled to give. There is evidence of social desirability effects in surveys, including effects related to the perceived race of the interviewer. But given the complexity of likely voter modeling in pre-election polls, we shouldn’t be too quick to assume that lying is the root cause of bad estimates — or even that the desirable or socially correct attitude in your or my eyes will be the same in someone else’s. You’d be surprised at what people are willing to reveal about themselves in surveys, with a level of internal consistency that lends credence to the data.

I hold that claims of lying, like reports of the public’s “confusion” or “contradiction” in various attitudes, invariably reflect a failure on the researcher’s part more than confused or prevaricating respondents. But why admit that you built a bad model, asked the wrong question, asked it badly, forgot the follow-up, or just can’t figure it out, when, heck, you can just blame the respondent instead?


pparkman

Hmm, I wonder if ABC is at a competitive disadvantage in the news race because their pollster is so rational. Since it seems that sensationalism draws the big numbers in TV news, reasoned and intelligent input into the content should reduce earnings.

Anyone care to take a poll?

-- In all seriousness, if every news source subjected the poll numbers that constantly inundate us to this kind of scrutiny, we'd be assaulted by far fewer polls.

Mango

I was thinking the same thing, pparkman. But maybe if all serious news outlets started insisting on sound and accurate figures and arguments, it would drive the masses into the arms of the Weekly World News and the National Enquirer.

Maybe we need the sensationalism to keep people paying attention to reality.

Stuart Arm

"To accept that lying caused a bad estimate, we need more than a postulate; we need consistent empirical data."

A very good point, and one that needs to be made. However,

"It's plenty likely to be bad modeling — not lying respondents — that causes the estimate to be blown."

is equally unsupported by the evidence. The conclusion for this article should be "we don't know what causes this problem."

Of course, Gary Langer is trying to improve polling models, and that's a great objective. But it's possible this has caused him to see flaws in models where the flaws may not exist.

egretman

I hate it when Dubner gets an expert to talk to us rather than just letting us speculate about things we know nothing about.

Kent

I imagine it might be feasible to combine prediction markets with simple statistical models to back out whether market pricing implies that future voters are misrepresenting their intentions to pollsters.

For example, based on history, model eventual vote count against current poll figures + fund-raising total. Use that model with current data to project vote totals and then compare with TradeSports. If this exercise is done well, then a disconnect could plausibly reflect that in the eyes of traders current poll data is misaligned from how people will eventually vote (or a racial bias in betting patterns). This method will become more accurate as the election approaches and control variables like fund-raising totals can be removed.
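The procedure described above could be sketched in a few lines. This is purely illustrative: the historical races, poll and fund-raising figures, and the market-implied number below are all made-up placeholders, and a real version would need many more races and control variables.

```python
import numpy as np

# Hypothetical historical training data: each row is a past race.
polls = np.array([48.0, 52.0, 45.0, 50.0, 55.0])           # final poll share (%)
funds = np.log(np.array([3.2, 5.1, 2.0, 4.4, 6.8]))        # log fund-raising ($M)
vote_share = np.array([46.5, 53.0, 44.0, 51.2, 56.1])      # actual vote share (%)

# Fit vote_share ~ a*poll + b*log(funds) + c by least squares.
X = np.column_stack([polls, funds, np.ones_like(polls)])
coef, *_ = np.linalg.lstsq(X, vote_share, rcond=None)

# Project the current race from its poll number and fund-raising total.
current = np.array([49.0, np.log(4.0), 1.0])
projected = current @ coef

# Compare against a vote share backed out of a prediction-market price
# (e.g. a TradeSports contract). A persistent gap could suggest that
# traders believe the polls misstate how people will actually vote
# (or that the betting itself is biased).
market_implied = 51.5  # hypothetical market-implied share (%)
gap = market_implied - projected
print(f"model: {projected:.1f}%, market: {market_implied:.1f}%, gap: {gap:+.1f}")
```

As the comment notes, the comparison should sharpen close to Election Day, when controls like fund-raising matter less and the poll figures carry most of the signal.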

Nat9981

It's important to distinguish between pre-election and post-election polls. For example, pre-election polls correctly predicted that Michigan voters would endorse a proposition outlawing affirmative action in state hiring and in student selection at state universities: the vote count was 58% for, 42% against.
We at the Michigan State Office for Survey Research did a post-election poll in which 58% of those who said they remembered their vote, six months earlier, reported having voted AGAINST the measure. The largest single response, though, was "I voted on the issue, but I don't remember how I voted."

don Lavespere

First-time reader! I found your web site from the Drudge Report, which I read, or at least glance through, every day. I like what I've seen and I'll try to catch you every day.
Thank you, Don L.
Lavawaxer@yahoo.com

James Tyler

It is like the answer the thoughtful husband gives when his wife asks the famous question: "Does this dress make me look fat?" Some people are just too embarrassed to admit their prejudices. Or, even worse, they may not even realize that they are prejudiced.
I remember an incident that woke me up. After a long-running disagreement with my Asian landlady, I found myself one day looking with disdain at a perfect stranger on a bus. It took me a few seconds to understand my hidden feelings. I am thankful to have come to the realization that I had a few issues to work out. I am willing to bet that we all have deep-rooted cobwebs developed over years of exposure to media images, things that we may be unaware of.

Tanya

Indeed, if the first dimension were represented by a visible curvature only, it would be conceivable. Thus for every not plane continuum we can substitute a plane continuum of more dimensions

Raquel

One possible way to check is to look at a country where voting is compulsory, so there is no problem forecasting who is going to vote. I suggest you look at Brazilian data; we do have some respected pollsters. Then you have to assume that Brazilians are similar to Americans with respect to lying in polls.
