
The Polls: What the #$!% Happened?

ABC's pollster surveys the wreckage
January 9, 2008

Polls taken after the Iowa caucus and before the New Hampshire primary consistently showed Barack Obama beating Hillary Clinton—by as many as 13 percentage points. Today, not so much.

“It’s a really big deal. I’ve never seen anything like it, and I think anyone would be hard-pressed to find another collapse this big,” says Gary Langer, the director of ABC’s polling unit—which didn’t poll before New Hampshire. CJR asked him what went wrong, how pollsters might solve the riddle, and whether this incident might change the way journalists treat polls.

CJR: What could possibly explain this? You have nine different organizations with nine different screens.

GL: There are a variety of possibilities—in the quality of the sample, in overstatements of enthusiasm from Obama supporters…

CJR: So you mean more people identifying themselves as Obama voters than actually voted?

GL: Which is possible. But the blame for that, if that was the cause, doesn’t go to the respondents for being enthusiastic. It goes to the likely voter modeling, for failing to accurately select true likely voters.


There are other possibilities. These polls were done largely over the weekend. Saturday is a dreadful day for conducting survey research. Sunday is not great during the day—Sunday night is a good night for interviews. And two-day polls aren’t great in terms of methodology.

CJR: But all those things were true of the Republican polls too.

GL: That’s true. The Republican polling looks to have been pretty good. So I’m not sure what went on in the Democratic race. All of the estimates in the Democratic race—there were nine polls released on Monday and Tuesday—every one of them had Obama in the lead.

There was even a national poll by Gallup which, son of a gun, showed Clinton and Obama tied. That’s the best Obama’s ever done in a national poll. So there was something going on in Obama’s favor.

CJR: Unless that poll’s wrong too.

GL: [Chuckles] That’s possible. There looks to have been a sort of systemic failure. Now we’ve got to do some real careful evaluation to get this figured out.

CJR: So it really is a mystery.

GL: At the moment, it is. But look, there’s a lot of data, and there’s a lot of good analysis that we’ll be able to do around this.

CJR: So where should we be looking?

GL: Another blog I saw today suggested it was the late deciders. That’s a common explanation for faulty final pre-election polls, and it’s one I don’t buy, and it’s one I certainly don’t buy in this case. Because if we look at the exit poll results, indeed if we take out everyone who decided on Election Day, we get a result of Clinton plus four, which is, of course, exactly what her margin was. If we look at who did decide on Election Day, it’s Clinton plus three, which is within polling tolerances.
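
To make “within polling tolerances” concrete, here is a rough back-of-the-envelope check in Python. The subsample size below is hypothetical, not a figure from the exit poll, and the doubling rule for the gap between two candidates is a common approximation, not an exact formula.

```python
import math

def margin_of_error(share, n, z=1.96):
    """Approximate 95% margin of error for one candidate's share."""
    return z * math.sqrt(share * (1 - share) / n)

# Hypothetical: 600 exit-poll respondents who decided on Election Day.
n = 600
clinton, obama = 0.40, 0.37  # a Clinton-plus-three split; shares invented

moe_share = margin_of_error(clinton, n)
moe_gap = 2 * moe_share  # rule of thumb: error on the gap is roughly double
print(f"MoE on one share: +/-{moe_share:.1%}")
print(f"MoE on the gap:   +/-{moe_gap:.1%}")
# With n = 600 the gap's tolerance is roughly +/-8 points, so a
# three-point edge is comfortably within sampling noise.
```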

Polling can go bad for a variety of reasons, particularly pre-election polling, where you are trying to measure an unknown population. People haven’t voted yet. You have to estimate who is going to turn out. And when pre-election polls are wrong, the first place I’d suggest looking is the likely voter estimates, the way they tried to decide who was going to turn out.

It might have been overstated enthusiasm among Obama voters. After the win in Iowa, a lot of people were charged up to vote for him but maybe didn’t actually make it to the polls. Or maybe there was an unanticipated, particularly active get-out-the-vote drive by Clinton’s supporters. It’s hard to say. It’s going to take a careful review.
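
As a toy illustration of the likely voter screening Langer is describing (every field name, weight, and cutoff below is invented for the example, not any organization’s actual model):

```python
# A toy likely-voter screen: score respondents on self-reported
# engagement and vote history, then keep only the top scorers.

def likely_voter_score(r):
    score = 0
    score += {"definitely": 3, "probably": 2, "maybe": 1}.get(r["intent"], 0)
    score += 2 if r["voted_last_primary"] else 0
    score += 1 if r["knows_polling_place"] else 0
    return score

respondents = [
    {"candidate": "Obama",   "intent": "definitely",
     "voted_last_primary": False, "knows_polling_place": True},
    {"candidate": "Clinton", "intent": "probably",
     "voted_last_primary": True,  "knows_polling_place": True},
    # ... a real sample would have hundreds of records
]

# Keep respondents scoring 4 or more; everyone else is screened out.
likely = [r for r in respondents if likely_voter_score(r) >= 4]

# The risk Langer describes: first-time enthusiasts (high stated intent,
# no vote history) can pass the screen and then fail to show up,
# inflating their candidate's share of the "likely" electorate.
```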

CJR: Where would you begin?

GL: By looking at the estimates of turnout that each of these polls reached—what sort of electorate did they anticipate would vote in the primary, and how did that compare with the reality? Then look at the individual figures: what share of the Democratic turnout did they anticipate from older men, younger people, and all the various groups? Looking at the subgroups will move us a long way toward understanding what went wrong.
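
Sketched in Python with made-up numbers, this is the kind of subgroup accounting he means: compare the turnout mix a poll assumed with the mix the exit poll found, and see how much the topline moves.

```python
# Post-mortem sketch: reweight candidate support by two turnout mixes.
# All figures are invented for illustration.

anticipated = {"women": 0.54, "men": 0.46}    # poll's assumed turnout mix
actual      = {"women": 0.57, "men": 0.43}    # mix found in the exit poll
clinton_share = {"women": 0.46, "men": 0.29}  # Clinton support by group

def weighted_support(mix, support):
    return sum(mix[g] * support[g] for g in mix)

print(f"Under the model's mix: {weighted_support(anticipated, clinton_share):.1%}")
print(f"Under the actual mix:  {weighted_support(actual, clinton_share):.1%}")
# If a group the model undercounted broke heavily for one candidate,
# reweighting to the real turnout mix shifts the topline estimate.
```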

It may have been a more volatile, if you will, Democratic race, with more change among the electorate—which makes it harder to nail down.

CJR: That raises the other possibility that the polls were right when they were taken, and that voters changed their minds.

GL: That’s the other option. But again, that would point to late deciders. And the exit poll doesn’t support that notion.

I think there are two things to keep in mind. One is the long, remarkable record of accuracy of pre-election polls in predicting elections. That record, going back many years, makes this all the more surprising. Also, the best reason for doing pre-election polls is not to handicap the horse race, but to identify the issues that voters care about, the things that are most and least appealing to them, and to see how various voter groups are dividing. Who are the main players among voters, and why?

That’s much more important than the back and forth of who’s ahead. And if the result of the New Hampshire failure is to throttle back from the horse race, that’d be a pretty good result of some pretty bad polls.

CJR: How realistic is it that something like that might happen based on one bad night?

GL: I must tell you, I’ve got a couple of correspondents in South Carolina and Michigan, and both of them called me up today to ask what the polls are showing there.

CJR: Right. The day after…

GL: Survey research has a long history, and a useful history. If we in news organizations don’t go out and do these polls, and attempt to find out accurately, and in a valid, reliable, and meaningful way, what people are thinking, it’s not as though the polling would stop. We would simply have pundits, and campaigns, and interest groups doing it on their own, with perhaps dubious methods and means, and trying to spin what they’d allegedly found out.

CJR: One thing you’ve mentioned on your blog is the so-called Bradley effect, in which white voters overstate their support for black candidates to pollsters. You’ve written that while that’s a possibility, it’s also a crutch that pollsters can lean on when results go bad.

GL: I’m pretty skeptical of this notion. It goes back to a handful of bad polls many years ago. And there have been plenty of other biracial races since then. So if it’s not a consistent effect, I don’t think it’s really a provable effect at all. There are a lot of other places to look. And there’ll be a lot of looking done.

CJR: And do you think that sort of analysis could help prevent something like this again?

GL: You never know. Pre-election polling can be and is very complicated. It is the hardest kind of polling. Opinion polling is simple. You have a known population—everyone with a landline. Take a random sample, ask them what they think, thank them and you’re done. With pre-election polling, you’re polling for an unknown—people who are actually going to vote. They won’t exist, this population, until Election Day. Estimating who they are is the hard part.

CJR: Do you anticipate a bigger effort, beyond internal reviews by individual pollsters, to sort this out?

GL: There have been calls by our professional organization, the American Association for Public Opinion Research, to pull together some sort of review panel, to take a thorough, competent look at what’s occurred here. That’s certainly something I’d support.

Clint Hendler is the managing editor of Mother Jones, and a former deputy editor of CJR.