It may have been a more volatile, if you will, Democratic race, with more change among the electorate—which makes it harder to nail down.
CJR: That raises the other possibility that the polls were right when they were taken, and that voters changed their minds.
GL: That’s the other option. But again, that would point to late deciders. And the exit poll doesn’t support that notion.
I think there are two things to keep in mind. One is the long record of remarkable accuracy of pre-election polls in predicting elections. Given that record going back many years, this makes what happened all the more surprising. Also, the best reason for doing pre-election polls is not to handicap the horse race, but to identify the issues that voters care about, the things that are most appealing to them and least appealing to them, to see how various voter groups are dividing. Who are the main players among voters, and why?
That’s much more important than the back and forth of who’s ahead. And if the result of the New Hampshire failure is to throttle back from the horse race, that’d be a pretty good result of some pretty bad polls.
CJR: How realistic do you think it is that something like that might happen based on one bad night?
GL: I must tell you, I’ve got a couple of correspondents in South Carolina and Michigan today, and both of them called me up to ask what the polls are showing there.
CJR: Right. The day after.
GL: Survey research has a long history, and a useful history. If we in news organizations don’t go out and do these polls, and attempt to find out accurately, and in a valid, reliable, and meaningful way, what people are thinking, it’s not as though no one else would. We would simply have pundits, campaigns, and interest groups doing it on their own, with perhaps dubious methods and means, trying to spin what they’d allegedly found out.
CJR: One thing you’ve mentioned on your blog is the so-called Bradley effect, when white voters overstate support for black candidates. You’ve written that while that’s a possibility, it’s also a crutch that pollsters can lean on when results go bad.
GL: I’m pretty skeptical of this notion. It goes back to a handful of bad polls many years ago. And there have been plenty of other biracial races since then. So if it’s not a consistent effect, I don’t think it’s really a provable effect at all. There are a lot of other places to look. And there’ll be a lot of looking done.
CJR: And do you think that sort of analysis could help prevent something like this from happening again?
GL: You never know. Pre-election polling can be and is very complicated. It is the hardest kind of polling. Opinion polling is simple. You have a known population—everyone with a landline. Take a random sample, ask them what they think, thank them and you’re done. With pre-election polling, you’re polling for an unknown—people who are actually going to vote. They won’t exist, this population, until Election Day. Estimating who they are is the hard part.
CJR: Do you anticipate a bigger effort, beyond internal reviews by individual pollsters, to sort this out?
GL: There have been calls by our professional organization, the American Association for Public Opinion Research, to pull together some sort of review panel, to take a thorough, competent look at what’s occurred here. That’s certainly something I’d support.