
Guest Blog: Who’s to Blame for Inaccurate Election Polls?

A few days ago, I blogged about how pre-election polls have historically overstated a minority candidate’s standing, but how that gap seems to be shrinking. In other words, according to the Pew Research Center article I cited, people used to lie to pollsters about their willingness to vote for a minority candidate, but now they do so less often.

This is an issue I’ve spoken about at some length with Gary Langer, the director of polling at ABC News. I’ve gotten to know Gary thanks to our occasional Freakonomics work with Good Morning America, 20/20, and World News Tonight. Gary is a force of nature. He not only runs ABC’s polling but has become the network’s top cop for keeping bad data off the air, vetting many of the surveys, studies, and polls that producers and reporters plan to use in their stories. I don’t know of any other news organization that has such a resource. I am sure he is occasionally a thorn in the side of a reporter who’s dying to cite some sensationalistic study from some biased organization … but as consumers of news, we are all the better for it.

Anyway, I wrote to Gary after my polling post to seek out his wisdom. Here’s his reply. The short answer: He doesn’t buy it. The long answer: I think you’ll see from this guest post why Gary Langer is perhaps one of the most valuable people in American journalism today.

I’m skeptical of the notion that survey respondents lie about their voting intentions — or about much of anything else. When a pollster produces a bad estimate in a pre-election survey, blaming the respondent is too easy an out. The reality is that pre-election polling relies on accurately modeling who is or isn’t going to vote. It’s plenty likely to be bad modeling — not lying respondents — that causes the estimate to be blown. To accept that lying caused a bad estimate, we need more than a postulate; we need consistent empirical data. Six elections from 15 to 25 years ago hardly suffice. In fact, nearly all the pre-election surveys in those contests carried too many undecideds for good polling (a function of polling techniques) and were completed too far from Election Day; several also were of undetermined quality on other methodological fronts. (It’d be especially interesting to see the undecideds in these polls broken down by race and other variables).
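
Langer’s point about likely voter modeling is easy to illustrate with a bit of arithmetic. The sketch below is mine, not Gary’s, and every number in it is made up for illustration: it simply shows how a turnout screen that keeps too many of one candidate’s low-propensity supporters can overstate that candidate’s standing even when every respondent answers truthfully.

# A minimal sketch (hypothetical numbers, not any real poll): how a
# likely-voter model, rather than lying respondents, can overstate
# one candidate's support.

def poll_estimate(share_a, turnout_a, turnout_b, screen_a, screen_b):
    """Return (actual result, poll estimate) for candidate A.

    share_a   -- A's share of stated supporters in the full sample
    turnout_a -- fraction of A's supporters who actually vote
    turnout_b -- fraction of B's supporters who actually vote
    screen_a  -- fraction of A's supporters the pollster's screen
                 counts as "likely voters"
    screen_b  -- same for B's supporters
    """
    share_b = 1.0 - share_a

    # What happens on Election Day: weight each camp by real turnout.
    actual = share_a * turnout_a / (share_a * turnout_a + share_b * turnout_b)

    # What the poll reports: weight each camp by the screen instead.
    estimate = share_a * screen_a / (share_a * screen_a + share_b * screen_b)
    return actual, estimate


if __name__ == "__main__":
    # Everyone answers truthfully, but the screen keeps too many of A's
    # low-propensity supporters, so the poll overstates A by several points.
    actual, estimate = poll_estimate(
        share_a=0.52, turnout_a=0.55, turnout_b=0.65,
        screen_a=0.70, screen_b=0.65,
    )
    print(f"Actual result for A:  {actual:.1%}")  # about 47.8%
    print(f"Poll estimate for A:  {estimate:.1%}")  # about 53.8%

With these invented inputs the poll reads roughly six points high for candidate A with no dishonesty anywhere in the data, which is the kind of error Langer attributes to the model rather than to the respondents.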

As Pew notes, moreover, Gallup’s pre-primary poll in the 1992 Moseley Braun race actually overstated her white opponent’s support — the opposite of the postulated effect. This fall’s pre-election polls in all five of the biracial governor and U.S. Senate contests were accurate. Rather than a decline in lying, mightn’t we instead be seeing increasing sophistication in likely voter modeling, as well as improved polling techniques more generally, from sampling to interviewer training? Yet the theory of lying voters lives on.

Part of its longevity may stem from its implication of cultural superiority — the assumption that what you or I or anyone else perceives to be the “right” or politically correct answer is the one that other people will feel compelled to give. There is evidence of social desirability effects in surveys, including effects related to the perceived race of the interviewer. But given the complexity of likely voter modeling in pre-election polls, we shouldn’t be too quick to assume that lying is the root cause of bad estimates — or even that the desirable or socially correct attitude in your or my eyes will be the same in someone else’s. You’d be surprised at what people are willing to reveal about themselves in surveys, with a level of internal consistency that lends credence to the data.

I hold that claims of lying, like reports of the public’s “confusion” or “contradiction” in various attitudes, invariably say more about a failure on the researcher’s part than about confused or prevaricating respondents. But why admit that you built a bad model, asked the wrong question, asked it badly, forgot the follow-up, or just can’t figure it out, when, heck, you can just blame the respondent instead?

