What should we make of the latest tally showing that Republicans fare worse with factcheckers than Democrats do? Last week the Center for Media and Public Affairs, a nonpartisan research group based at George Mason University, reported that, so far during Obama’s second term, GOP statements were three times as likely as claims from Democrats to earn “False” and “Pants on Fire!” verdicts from PolitiFact’s Truth-O-Meter—and only half as likely to be rated “True.” The lead of a brief write-up by Alex Seitz-Wald at Salon.com seemed to take the results at face value:
Many politicians stretch the truth or obfuscate to some degree or another — but does one party do it more than the other? According to a new study from the Center for Media and Public Affairs at George Mason University the answer is an unequivocal yes.
Or, maybe, not so unequivocal: As conservative media watchdog NewsBusters was quick to point out (and as Seitz-Wald acknowledges), the results can also be read as evidence of selection bias at PolitiFact. The press release from the CMPA hints at this interpretation; it notes that the GOP fared worse even in May, despite “controversies over Obama administration statements regarding Benghazi, the IRS, and the Associated Press.” A quote from the group’s president, Robert Lichter, sounds the note again: “While Republicans see a credibility gap in the Obama administration, PolitiFact rates Republicans as the less credible party.”
PolitiFact itself, meanwhile, did its best to stay out of the fray. A brief letter from founder Bill Adair noted simply that the factchecking outlet rates individual statements and doesn’t claim to gauge which party lies more. “We are journalists, not social scientists,” Adair wrote. “We select statements to fact-check based on our news judgment—whether a statement is timely, provocative, whether it’s been repeated and whether readers would wonder if it is true.”
This story has a familiar ring by now. In 2009, political scientist John Sides tallied a few dozen Truth-O-Meter verdicts on claims about healthcare reform, and found that Republican statements earned the two worst ratings almost three times as often as Democratic statements did. He noted the potential for selection bias but concluded, “the data accord with what casual observation would suggest: opponents of health care reform have been more dishonest than supporters.” In 2011 another political scientist, Eric Ostermeier, found the same three-to-one ratio after counting up more than 500 PolitiFact rulings over 13 months. He drew the opposite conclusion: “it appears the sport of choice is game hunting—and the game is elephants.”
Whatever the reason, a similar pattern seems to hold at The Washington Post’s Fact Checker blog, where, by his own count, Glenn Kessler hands out more Pinocchios, on average, to Republican statements. The differences tend to be slight—e.g., a 2.5-Pinocchio average for the GOP versus 2.1 for Democrats in the first half of 2012—and Kessler attributes them to electoral dynamics rather than to any difference between the parties. But an analysis of more than 300 Fact Checker rulings through the end of 2011, by Chris Mooney, found a telling detail: Republicans received nearly three times as many four-Pinocchio rulings. Even controlling for the number of statements checked, they earned the site’s worst rating at twice the rate of Democrats.
These tallies cover different periods and weren’t compiled according to a single methodology. Still, the broad pattern is striking: Republican statements evaluated by factcheckers are consistently two to three times as likely to earn their harshest ratings.
So—for the proverbial engaged citizen (or journalist, or political scientist) who’s looking for clues about the nature of our political discourse, is there any meaning in that pattern? Obviously, the issue of selection bias can’t be ignored, since factcheckers don’t pick statements at random. Does that mean, as Sides wrote last week (seeming to depart from his earlier view), that the data simply don’t “say all that much about the truthfulness of political parties”? Or even, as Jonathan Bernstein added in the Post, that while we should be grateful for the research factcheckers assemble, we should throw out their conclusions altogether?