In late June, Markos Moulitsas Zúniga, the publisher of Daily Kos, published a study on his site that he said pointed to fraudulent data from the site’s former contract pollster, Research 2000.

Moulitsas had stopped commissioning polling from the firm earlier in the month, after Research 2000 came out at the low end of comparative pollster ratings put together by FiveThirtyEight proprietor Nate Silver. But that was a decision based on a perception of poor quality—not one based on an accusation of outright fraud.

Moulitsas and Research 2000 seem to be heading toward their day in court, where the truth of Kos’s charges could be sorted out. (Kos has sued the firm for breach of contract, misrepresentation, and fraud, and Research 2000 has, in turn, lawyered up and sent the site a letter demanding that it stop claiming its polls were fraudulent.)

The episode came about nine months after another high-profile scandal, involving an outfit called Strategic Vision, which, after facing allegations that it had faked data, essentially dropped out of the polling world and was censured by the American Association for Public Opinion Research (AAPOR) for failing to disclose “basic facts” about its polling methods. Together, the two cases have restarted a debate about what steps media organizations and the polling industry can take to minimize the commissioning and spread of faulty or fraudulent data.

There is, of course, a difference between faulty and fraudulent. The simplest way for a news organization to guard against outright fraudulent data—that is, made-up numbers reflecting no attempt to reach survey respondents by phone—is a basic physical check: visiting the call center while a poll is being conducted, or asking to listen in remotely on some of the calls as they are made.

“In most instances the issue isn’t outright fraud, though,” says Frank Newport, editor in chief of Gallup and the current president of AAPOR.

Indeed, allegations like those leveled against Research 2000 and Strategic Vision (both organizations, for the record, have denied any wrongdoing) are very rare. Far more common are pollsters who, whether through sloppiness, frugality, or a desire to have results turn out a particular way, engage in an array of methodological corner-cutting—faulty questionnaire design, poor analysis of results, inadequate procedures to ensure random samples, and so on.

While there’s no exact step-by-step methodology to ensure a perfect poll across different populations and topics, there are certain best practices that news organizations commissioning polls would be wise to ensure their contractors are following.

“I don’t presume to set standards for others,” says Gary Langer, ABC’s polling director and a longtime advocate for high media standards in poll reporting. “But I do presume it is essential to have them.”

“There are many places going off the rails and there is a shared responsibility between the firms providing the data, and the organizations purchasing the data,” says Langer. “You’re putting your reputation on the line. You’re putting an important element of public discourse out there.”

Langer recommends that any news organization interested in sponsoring survey research begin by researching reputable survey methods. (He dismisses the notion that this work requires mathematical genius—“you don’t need to be a statistician to do polls any more than you need to be a grammarian to do poetry.”) That, Langer says, should be followed by detailed discussions and contracts with the polling firm about methodology, in which expectations are made plain.

“At the end of the day, you can get snookered. But the job is to make it as difficult, maybe impossible, to do that. That starts with due diligence,” says Langer.

Relying on and reporting third-party polls—those published by public relations firms, nonprofits, trade associations, and so forth—presents another kind of challenge.

“Journalists think that if they source something, they tend to be off the hook,” says Newport. Merely citing someone else’s numbers—without also taking simple steps like evaluating the polling procedures or inquiring into the reputation of the pollster—is as irresponsible, Newport says, as a science or health journalist trumpeting a press release promising some new wonder cure without checking with disinterested experts.

Jon Cohen, the Washington Post’s polling director, does that sort of screening for polling and survey data that the paper’s journalists wish to cite.

The Post maintains a short blacklist of pollsters—which Cohen would not disclose—who, in Cohen’s judgment, operate with methodological deficiencies, or have committed “serial misanalysis,” serious enough that their data is banned from publication. Other blue-chip organizations’ polls are trusted enough to be allowed into print without additional vetting. For pollsters in between, Cohen maintains a checklist on the paper’s intranet that helps reporters determine poll quality, and he is often called upon to decide whether data meets the paper’s standards.

Clint Hendler is the managing editor of Mother Jones, and a former deputy editor of CJR.