
How do you know what a poll number is worth?

After survey scandals, transparency’s the buzzword
July 21, 2010

In late June, Markos Moulitsas Zúniga, the publisher of DailyKos, posted a study on his site that he said pointed to fraudulent data from the site’s former contract pollster, Research 2000.

Moulitsas had stopped commissioning polling from the firm earlier in the month, after Research 2000 came out at the low end of comparative pollster ratings put together by FiveThirtyEight proprietor Nate Silver. But that was a decision based on a perception of poor quality—not one based on an accusation of outright fraud.

Moulitsas and Research 2000 seem to be heading toward their day in court, where the truth of Kos’s charges could be sorted out. (Kos has sued the firm for breach of contract, misrepresentation, and fraud; Research 2000 has, in turn, lawyered up and sent the site a letter demanding that it stop claiming the firm’s polls were fraudulent.)

But the episode has restarted a debate about what steps media organizations and the polling industry can take to minimize the commissioning and spread of faulty or fraudulent data. It comes about nine months after another high-profile scandal involving an outfit called Strategic Vision, which, after facing allegations that it had faked data, essentially dropped out of the polling world and was censured by the American Association for Public Opinion Research (AAPOR) for failing to disclose “basic facts” about its polling methods.

There is, of course, a difference between faulty and fraudulent. The simplest way for a news organization to guard against outright fraudulent data—specifically, made-up numbers that reflect no attempt to reach survey respondents by phone—is to perform an extremely simple physical check: visiting the call center while a poll is being conducted, or asking to listen in remotely on some of the calls as they are made.

“In most instances the issue isn’t outright fraud, though,” says Frank Newport, editor in chief of Gallup and the current president of AAPOR.


Indeed, allegations like those leveled against Research 2000 and Strategic Vision (and, for the record, both organizations have denied that they did anything wrong) are very rare. Far more common are pollsters who, whether through sloppiness, frugality, or a desire to have results turn out one way or another, engage in an array of methodological corner-cutting: faulty questionnaire design, poor analysis of results, inadequate procedures to ensure random samples, and so on.

While there’s no exact step-by-step methodology to ensure a perfect poll across different populations and topics, there are certain best practices that news organizations commissioning polls would be wise to ensure their contractors are following.

“I don’t presume to set standards for others,” says Gary Langer, ABC’s polling director and a longtime advocate for high media standards in poll reporting. “But I do presume it is essential to have them.”

“There are many places going off the rails and there is a shared responsibility between the firms providing the data and the organizations purchasing the data,” says Langer. “You’re putting your reputation on the line. You’re putting an important element of public discourse out there.”

Langer recommends that any news organization interested in sponsoring survey research begin by researching reputable survey methods. (He dismisses the notion that this work requires mathematical genius—“you don’t need to be a statistician to do polls any more than you need to be a grammarian to do poetry.”) Langer says that should be followed by detailed discussions and contracts with the polling firm about methodology, where expectations are made plain.

“At the end of the day, you can get snookered. But the job is to make it as difficult, maybe impossible, to do that. That starts with due diligence,” says Langer.

Relying and reporting on third-party polls—published by public relations firms, nonprofits, trade associations, and so forth—represents another kind of challenge.

“Journalists think that if they source something, they tend to be off the hook,” says Newport. Merely citing someone else’s numbers—without also taking simple steps like evaluating the polling procedures or inquiring into the reputation of the pollster—is as irresponsible, Newport says, as a science or health journalist trumpeting a press release promising some new wonder cure without checking with disinterested experts.

Jon Cohen, the Washington Post’s polling director, does that sort of screening for polling and survey data that the paper’s journalists wish to cite.

The Post maintains a short blacklist of pollsters—which Cohen would not disclose—who, by Cohen’s judgment, operate with serious methodological deficiencies, or have committed “serial misanalysis,” on a scale that gets their data banned from publication. Other blue-chip organizations’ polls are trusted enough to be allowed into print without additional vetting. For pollsters in between, Cohen maintains a checklist on the paper’s intranet that helps reporters determine poll quality, and he is often called upon to decide whether data meets the paper’s standards.

“All the methodological concerns aside, at the end of the day, like any business, there are honest and dishonest brokers, and it takes a bit of time to figure it out,” says Cohen.

The Washington Post’s unit—in some ways modeled after Langer’s ABC shop, where Cohen used to work—also designs and commissions the paper’s own polling. There, since the paper controls every decision and step, it imposes even more exacting standards than those it applies to third-party research.

AAPOR, one of the polling industry’s most prominent trade organizations, has wrestled since its founding with whether to set or impose standards—and has largely passed on the opportunity—according to Peter Miller, a Northwestern professor who recently served as AAPOR’s president. Instead, the organization has opted to encourage pollsters to release information about how they conducted their polls—information that not only gives poll consumers a basis for evaluating the work, but that theoretically could be used to replicate the results.

Miller stepped down in May, but is still helming AAPOR’s “transparency initiative,” the organization’s newest effort to encourage such disclosure.

The planned initiative is a step beyond AAPOR’s long-standing code of ethics, which calls on polling organizations to release fifteen points of information about any public poll upon its publication, or within a month of any request. These include the survey questionnaire, information on sampling, the study’s funder, and screening procedures.

But there are very rarely any consequences for failing to live up to the code’s requirements.

“By our rules, we respond only when a complaint is made,” says Newport, the association’s current president. “We don’t have a committee that sits and judges all the polling out there.”

While Miller says that many complaints about incomplete compliance with the code’s disclosure requirements are settled behind the scenes, the organization has publicly censured only three pollsters for inadequate disclosure in the last twenty years or so: Frank Luntz in the 1990s and, in 2009, an Iraqi mortality researcher and Strategic Vision.

“It’s not a lot,” concedes Miller.

The transparency initiative seeks to reverse that dynamic. Rather than waiting passively for complaints about pollsters’ non-compliance with AAPOR’s transparency requirements and issuing rare censures, AAPOR will allow pollsters who proactively disclose key aspects of their methodology to claim that they are operating in accordance with the initiative—offering something of a seal of approval for disclosure, if not for methodology or results. Miller hopes that an infrastructure that makes disclosure easier will encourage more pollsters, especially those who haven’t shared information about their methodologies regularly in the past, to do so.

“If you do sponsor a poll, one of the things you could do is look at our participating organizations,” says Miller. “If they are one of the group who haven’t signed up with the transparency initiative, you might encourage them to do so, or you might think about what they are putting out in another way.”

The transparency initiative is a long way from being ready. At one point Miller had hoped to have it running in time for the 2010 elections—now the target date is next May’s AAPOR annual meeting.

Miller envisions a Web form that participants can use to upload their information to a central public repository. (AAPOR has reached out to the University of Connecticut’s Roper Center for Public Opinion Research, which stores polling data from a variety of pollsters, but no agreement has been reached.)

But before any pollster can start inputting and disclosing, the AAPOR members serving on the volunteer committees organizing the initiative must sort out a score of outstanding issues, including fundamentals like exactly what information participating pollsters will be required to disclose, and how quickly they will have to do so.

Miller says the existing AAPOR code’s disclosure requirements provide a “template” for what the initiative could require, but that it may “go beyond that.” And in the wake of the Strategic Vision episode, Miller has decided that it will not be adequate to take pollsters at their word that their disclosure information is accurate. He plans to set up some system to verify, perhaps through spot checks, what pollsters are self-reporting.

The goal of the transparency initiative is to bring more information to the polling market, not only so people can more easily judge the quality of polls already conducted, but also so that organizations commissioning new ones can more easily compare what they might buy.

“Some people act like polling is just a commodity. You buy it for as cheap as you can get it. We can change that attitude,” says Miller. “I very much hope that sponsors of surveys will use this resource to judge where they’re putting their money. If the money continues to go to operators that aren’t transparent, that will make this only a feel good thing. If the money goes to organizations that have been transparent, then this will work. The people being transparent hope this will bring them more business, and I sure as heck hope they’re right.”

Clint Hendler is the managing editor of Mother Jones, and a former deputy editor of CJR.