There were several reasons, all intertwined. First of all, cable is something that we might not characterize as part of the mainstream media, whatever the heck that is. There are so many problems with terminology these days. But we wanted to start with news organizations that still adhered to the basic norms and routines of mainstream journalism, such as objectivity, however that might be practiced, the organizations you would expect to play referee rather than simply take sides on the issue. Cable is a logical next step; it's something that we need to do in the research.

You coded the articles as having debunked or not debunked the claim, and then noted whether they had quoted both sides of the story as well. But did you have a clear working definition of what “debunking” the story meant?

I think as we parsed it more finely the second time around we were able to recognize more subtlety. But just as a first cut, we went for something very simple, which is: does the reporter, in his or her own words, say the claim was false, misinformation, misleading… we had a whole long list of terms. In a way that’s very blunt, but I suppose it’s the most objective way we could go about deciding whether the claim had been debunked or not.

But we also included some other measures. We were interested, again, in whether they relied on non-partisan fact-checking organizations like PolitiFact and Factcheck. We did code for articles that made reference to those kinds of organizations to bolster the point that the claim was not true.

Then, in a more subtle way, a reporter can write a story in which all of the information leads to one conclusion, but the reporter never comes out and says that the claim is false. You can report that Sarah Palin calls something death panels, and then report other people who say that’s not true. You never have to say it in your own words, but you can structure the story so that it contains no information supporting the claim. When we went back through the second time, we also coded for that.

When you were going through these death panel stories, what kinds of stories were you finding? Were they policy reports, or were they stories about people saying incendiary things? It strikes me that sound bites were what was generating the headlines.

We were actually surprised to find just a small handful of stories that looked at the policy discussion around so-called death panels. The death panels claim may be false in and of itself, but it comes from a larger, very important question: How are we going to provide end-of-life counseling in a way that gives doctors an incentive to do a good job of it, knowing they’re going to be reimbursed for it? That’s the heart of the policy issue there, and that’s what morphed into this claim about government bureaucrats deciding who will get care and who will not. We found fewer than five stories, if I remember correctly, in this whole sample of hundreds of stories that talked in real depth about end-of-life counseling and its complexities and challenges. When the death panels claim came up, it quite often came up in a political context, as part of a political debate, part of the day-to-day coverage of politics more than of policy.

Did you notice any patterns where particular news outlets did the debunking frequently and strongly?

The standout there, and it’s not surprising, is The St. Petersburg Times, because they’re the home of PolitiFact. By far they were the standout newspaper in terms of consistently running long stories that were really PolitiFact stories.

Do you think that outlets like PolitiFact and Factcheck are actually breaking through, though? It seems that if the misinformation is being believed and becoming prevalent, then the outlets playing referee are having trouble reaching a wider audience.

If you look at just our data, you will be surprised and dismayed to find that it was really a small fraction of stories that said, “According to PolitiFact,” or, “According to Factcheck…” the claim is false. It was really just a handful of stories, which surprised us. But it’s impossible to say how much those organizations are shaping the background information environment that reporters are working in. How much are reporters going to Factcheck, for example, and reading what it has to say without necessarily building that into their stories in an explicit way? That is very hard to know without interviewing a lot of journalists.

Joel Meares is a former CJR assistant editor.