He-Said She-Said and Death Panels

A Q&A with the Manship School’s Dr. Regina Lawrence
May 27, 2011

Almost two years ago, former governor of Alaska Sarah Palin sent out her infamous “death panels” post on Facebook. The media couldn’t resist the incendiary update. Soon, the two-word phrase—a demonstrable misrepresentation of the end-of-life counseling options proposed in the president’s health care plan—turned up the heat at Town Hall meetings and lodged itself in the discourse like a tough string of beef between two closely packed molars.

How did this colorful lie wedge its way so far in? Dr. Regina Lawrence of Louisiana State University’s Manship School of Mass Communication asks that very question in a new study co-authored by Matt Schafer, which you can read about in Schafer’s article, “Two Years Later: The Media Response to Death Panels and Why It’s Still Important.” For the report—to be presented at this weekend’s International Communication Association conference in Boston—Lawrence and Schafer analyzed over 700 newspaper stories from the country’s top fifty newspapers, as well as a number of network news reports. The pair sorted the stories that debunked the claim from those that did not, and then noted those that dutifully included quotes from all sides, probing just how effective the media was at getting to—and sometimes getting around—the facts. Here, Lawrence discusses her report with CJR assistant editor Joel Meares.

Seven hundred stories. On a Sarah Palin quote. How long did it take you and Matt to complete the study?

It took longer than we anticipated because the first time we went about coding all of these news stories, we went in with the assumption that the stories would either debunk the claim or they would present the claim in a he-said/she-said manner. Because we went in with that assumption, our coding scheme didn’t have a way of handling stories that did both. That led to difficult judgment calls we weren’t comfortable with. After doing all of that, we finally realized, “Wow, this is actually a really interesting finding, but we need to go back and redo the entire sample,” which took months.

What hypothesis were you trying to test with the original coding scheme?

It was not so much hypothesis-testing as exploring a research question. We didn’t go in—to put it in scientific terms—with “directional expectations.” We just wanted to see how journalists treated this claim. Knowing that it had been debunked early and fast by PolitiFact and Factcheck.org, we were very curious how journalists made use of that information and how they treated the claims.

Why did you decide to look at the death panels story as opposed to other stories involving misinformation that had proliferated?

Part of it is simply timeliness. This whole project came out of a class I was teaching that my co-author Matt Schafer was in. I had made some sort of offhand comment, like, “Oh, I bet even death panels are getting covered in some certain kind of way…” I was just talking off the top of my head, but Matt went out and gathered a small sample of articles and then came to my office and said, “Look at this, I think I have something interesting here.” This would have been the fall right after the summer of the raucous Town Hall meetings, so it was timeliness that drew us to it.

But on a deeper level, it’s a little bit easier than doing something like climate change, which is a very big and complicated issue—you can’t tie it down to a single claim very easily. Here, you have a case of misinformation that was crystallized into a single catchphrase.

Do you see parallels between the way that the death panels story got out there and misinformation about climate change?

Absolutely. You’ve got pretty organized message machines on both sides of the issue, but I think a very organized message machine on the more right-leaning side of the media and political aisle. And you’ve got a public that is increasingly polarized. That’s not just an offhand observation, that’s something that a lot of research is showing: people are dividing much more sharply into camps that could be defined as more liberal or more conservative on these kinds of issues. It’s almost like these issues become shortcuts for people to have their predispositions activated. Say “climate change” and people start to sort themselves into one side of the room or the other. Based on the available data, I think the same is true with death panels. If you say that word in a lot of rooms full of people, I think you would see a fair amount of sorting going on.

Why did you decide not to look at cable news? That would seem to be where a lot of death panel coverage was happening.

There were several reasons, all intertwined. First of all, cable is something that we might not characterize as part of the mainstream media, whatever the heck that is. There are so many problems with terminology these days. But we wanted to start with news organizations that still skewed towards basic norms and routines of mainstream journalism, such as objectivity, however that might be practiced. We wanted to just start with the news organizations that you would expect to play referee rather than just take sides on the issue. Cable is a logical next step—it’s something that we need to do in the research.

You coded the articles as having debunked or not debunked the claim, and then noted whether they had quoted both sides of the story as well. But did you have a clear working definition of what “debunking” the story meant?

I think as we parsed it more finely the second time around we were able to recognize more subtlety. But just as a first cut we went for something very simple, which is: does the reporter, in his or her own words, say the claim was false, misinformation, misleading… we had a whole long list of the terms. In a way, that’s very blunt, and I suppose the most objective way that we could go about deciding whether the claim has been debunked or not.

But we also included some other measures. We were interested again in whether they relied on non-partisan fact-checking organizations like PolitiFact and Factcheck. We did code for the articles that made reference to those kinds of organizations to sort of bolster the point that this was not a true thing.

Then, in a more subtle way, a reporter can write a story in such a way that all of the information in the story leads to one conclusion, but the reporter never comes out and says that this claim is false. You can report that Sarah Palin calls something death panels, and then you report other people who say that’s not true. You never have to say it in your own words, but you can structure it so that there is no information in the story supporting the claim. When we went back around the second time we also did code for that.

When you were going through these death panel stories, what kind of stories were you finding? Were they policy reports or were they stories about people saying incendiary things? It strikes me that sound bites were the things generating the headlines.

We were actually surprised to find just a small handful of stories that actually looked at the policy discussion around so-called death panels. The death panels claim may be false in and of itself, but it comes from a larger, very important question: How are we going to provide end-of-life counseling in a way that gives doctors an incentive to do a good job of it, knowing they’re going to get reimbursed? That’s kind of the heart of the policy issue there. That’s what morphed into this claim about government bureaucrats deciding who will get care and who will not. We found literally fewer than five stories, if I remember correctly, in this whole sample of hundreds of stories, that actually talked in real depth about end-of-life counseling and the complexities and challenges of that. When the death panels claim came up, it quite often came up in a political context, as part of a political debate, part of day-to-day coverage of politics more than of policy.

Did you notice any patterns where particular news outlets did the debunking frequently and strongly?

The standout there, and it’s not surprising, is The St. Petersburg Times, because they’re the home of PolitiFact. By far they were the standout newspaper in terms of consistently and repeatedly running long stories that were really PolitiFact stories.

Do you think that outlets like PolitiFact and Factcheck are actually breaking through, though? It seems that if the misinformation is being believed, and becoming prevalent, then those outlets playing referee are having trouble getting to a wider audience.

If you look at just our data, you will be surprised and dismayed to find that it was really a small fraction of stories that said, “According to PolitiFact,” or, “According to Factcheck…” the claim is false. It was really just a handful of stories, which surprised us. But it’s impossible to say how much they are shaping the background information environment that reporters are working in. How much are reporters going to Factcheck, for example, reading what they have to say, and not necessarily building that into their story in an explicit way? That is very hard to know without just interviewing a lot of journalists.

Are they breaking through to the larger public? Again, that’s hard to know, because we can’t just rely on the mainstream news coverage that we looked at in this study to tell us whether or not they’re breaking through. I am sure PolitiFact and Factcheck have their own metrics for figuring out how much traffic they’re getting, but I’m not familiar with those. The larger problem is that it’s increasingly hard to talk about “breaking through” and persuading the public as a whole because of this greater polarization, politically and in terms of media choice. People now have the means to tune in to the news that’s most congenial to their views.

In the report you make a distinction between two journalistic tools, what you term “procedural objectivity” and “substantive objectivity.” What are these and how are they different?

Those are kind of overly academic terms; they’re not terms that I think journalists would use in a news meeting. We were casting around for a terminology, because everybody—journalists, scholars—throws around the term objectivity, and it’s a very complicated thing.

Procedural objectivity is following the formulas that promise that, at the end of the day, you will have a balanced, or at least not a biased, story. The he-said she-said formula. Substantive objectivity is a much harder and riskier thing to do. If all of the evidence points to something being false, then, no bones about it, that’s how you treat it. Don’t go through the motions of saying, “Well, but of course people believe something else on the other side.” That’s sticking your neck out, I think, for mainstream reporters.

Interestingly, in this study on death panels, there were a fair number of stories where journalists just said it in their own words—“this is false, this is misleading.” So they’re perfectly comfortable doing it sometimes and under some circumstances. And I don’t even know how thoughtful that was because in many cases it wasn’t bolstered by evidence. There was no “Here’s three reasons why it’s false.” They were just saying it.

I was struggling to see the value of procedural objectivity at all by the time I got to the end of Matt’s story on the report. It’s something that we’re taught from the very beginning of our careers or our studies. What is the value of it?

I think if I were one of my colleagues, a dyed-in-the-wool professional journalist teaching basic journalism classes, I would give a very different answer… It is, as you know, a very tried-and-true way of playing it down the middle. I think the real value of that kind of reporting is that it’s easy, it’s formulaic—which in these times, when reporters are pressed to do more and more with less and less, formulas are valuable—and it’s safe. It’s harder to be accused of being biased, whether by your editor, by readers, or by critics on the other side of the aisle, if you’ve covered both sides and what they have to say.

So it’s a lot more valuable to the reporter than to the reader?

I like the way you put that.

You mentioned this earlier and it comes up often in the report: journalists who debunk a claim like death panels and then quote both sides anyway can confuse readers. Is that something you found statistically or was it just an observation?

Based on these data we can’t say whether it actually confuses the reader. In fact, the next step we would like to take with this research is to very carefully construct news stories using these elements of debunking, not debunking, he-said she-said, no he-said she-said. Then we would take that into an experimental lab setting where we could have people read different kinds of formats and really find out whether they are more confused at the end of the day in that situation. So I can’t really say it is confusing; it’s more of a hunch at this point.

The report seems to suggest that a straightforward debunking without any he-said she-said is a less confusing way to address misinformation. But in some cases you’re going to have to quote both sides, even if you are debunking a story, because the quote itself is the story.

With the birther issue, for example, the story was that Donald Trump, a prominent businessman, had come out and said the president was not born in this country, not that there was any chance this was true. To do the story you have to include the falsehood. Or do you just not do the story and thereby prevent the idea from being reinforced?

That’s exactly the dilemma. We could even say that’s the trap. Anybody who wants to make news just needs to come up with some sort of claim like this, and they can be relatively certain that journalists will use exactly that logic and help them publicize their claim. No matter what else they do with it, the claim will be out there in the discourse. The real question in my mind then becomes whether news organizations decide that when something is definitively shown to be false, they’re just not going to cover it anymore. That would be a real redefinition of what news is. Because, as you said, if Donald Trump says it, then Donald Trump saying it is the story. That’s a certain way of doing journalism, to say that the news is what important and notable people say. There’s the rub.

We just had a symposium at the Manship School a couple of months ago, and Amy Walter of ABC was there and Dan Balz of The Washington Post was there. I asked them exactly this kind of question: What do you think of just not covering something because it’s not true? Amy’s response was, but that is the story. If somebody says it, it’s the story, and I have to cover it. And Dan Balz’s response was that you’re just assuming the media have way more power and authority than they really have in this day and age. End of discussion.

Playing devil’s advocate, is there a danger in pushing people to be stronger arbiters? If tomorrow everyone in the media reads this report and says we’re going to go the “substantive objectivity” route, would that enhance the polarization you spoke about before? It seems the place you see more of this kind of objectivity and people fact-checking claims is on ideological websites, where it is their task to swat down the lies—or truths—of the other side.

I think you’re on to something, and that is that we feel we don’t have a model for what this would look like. Because as soon as you suggest to most American journalists that they should engage in substantive objectivity, they immediately think of the scenario you just described. They think of ideologically motivated reporting or politically biased reporting. It’s almost as though we don’t have a model for imagining what this other kind of journalism would look like.

That’s fascinating, because it’s exactly in my view what PolitiFact, in particular, is practicing: trying to get to the factual truth as best they can and subjecting Barack Obama and Sarah Palin to the same scrutiny. We feel we’re at a loss for a model, and yet there are models. But somehow they haven’t caught on or don’t seem compelling enough in this economic climate. I’m not sure what the obstacle is.

The story I read on the report ends on a pretty dire note, citing research from Brendan Nyhan that suggests there is little to show that journalism can correct misinformation. Is that what you really think? That this is a hopeless situation?

Nyhan experimentally exposed people to corrections and found that they didn’t really make much difference. Also, because of this problem of predispositions, with people coming to a news story with their worldview already set, it’s going to be hard for mainstream journalism to shake that up and offset it. That’s probably setting the bar too high, to imagine you’re going to be able to persuade all the people who believe in death panels that they’re not true.

It also overestimates the size of that particular audience. That’s the audience that gets a lot of attention because that’s how cable news is figuring out how to be economically successful. I think what mainstream journalism has to do is figure out a clearer sense of their audience and a clearer sense of what they can bring to the table that cable news and ideological news can’t.

I wonder if there isn’t a bigger role for just hard-headed fact-checking. It appears to me that a lot of people would appreciate it. People on both sides of the aisle are going to try to swat it down and fight it at every turn. But those aren’t the only Americans out there.

Joel Meares is a former CJR assistant editor.