The American Association for Public Opinion Research has released its long-awaited report on polling in the 2008 New Hampshire Democratic Primary, available here as a 123-page PDF document.
For those needing a refresher on the events of that long-ago week: most pre-voting polls showed Obama, fresh off his Iowa caucus victory, with a commanding lead. Instead, when the dust settled, Clinton had eked out a victory. The polling industry—and the campaign punditocracy—woke up the next morning to a big plate of crow, and the stage was set for Clinton and Obama to slug it out across the country for months on end.
“It’s a really big deal. I’ve never seen anything like it and I think anyone would be hard pressed to find another collapse this big,” ABC polling director Gary Langer told CJR at the time.
Today, Mark Blumenthal at Pollster.com already has his first post up in a promised series on the report, in which he summarizes the report’s conclusion: “it was likely a lot of small things, all introducing errors in the same direction.” In other words, “a perfect storm.”
But as Blumenthal notes, the committee responsible for the report also exposed an “outrageous lack of disclosure and foot-dragging” on the part of pollsters, a lack of transparency that hindered the AAPOR’s inquiry. From the report:
The symbiotic relationship between campaign coverage and polling is a given in contemporary campaigns; it is hard to imagine one without the other. But polling is also a scientific data collection technique, and it is impossible to evaluate the performance of the pollsters without information about their methodology. That is why the AAPOR “Code of Professional Ethics and Practices” include a set of elements that those who conduct polls and surveys should disclose so that other professionals can evaluate the quality of the research that they conduct and the results that they disseminate. The committee’s experience suggests that some firms engaged in election polling pay only lip service to these disclosure standards, while others are operating at such a thin margin that they do not have the resources to devote to answering questions about their methods.
Earlier in the report’s introduction:
The fact that many pollsters did not provide us with detailed methodological information about their work on a timely basis is one reason we will never know for certain exactly what caused the problems in the primary polling that we studied.
The committee asked for data from twenty-three polling operations; only four honored the full request. The sponsors of the pollsters that didn’t—and that thereby hampered the investigation—include many news organizations: Ebony, Jet, The Los Angeles Times, Politico, CNN, McClatchy, MSNBC, FOX News, WBZ, WHDH, Reuters and C-SPAN.
Ideally, when a news organization is party to a massive failure, it will—out of a debt to its audience and in the interest of its long-term credibility—work hard at determining what went wrong, to help ensure that the same errors are less likely to happen again. There’s no reason that shouldn’t be the case here.

Clint Hendler is the managing editor of Mother Jones, and a former deputy editor of CJR.