Or to put the question another way: Does it say nothing that retiring Rep. Michele Bachmann—the Minnesota Republican who is famous for claiming, for instance, that the HPV vaccine causes mental retardation—compiled almost unbelievably bad records with the factcheckers during her years in the House? Bachmann set a new bar for four-Pinocchio statements in Kessler’s column, and as a presidential contender in 2012 averaged 3.08 Pinocchios across 13 checked statements, the worst of all the candidates. Meanwhile her first 13 statements checked by PolitiFact earned “False” or “Pants on Fire” verdicts; in all, a remarkable 60 percent of her 59 Truth-O-Meter rulings fall into those two categories. No doubt Bachmann has made many true statements in office, but she kept the factcheckers well-supplied with irresistible falsehoods (as other journalists have pointed out).

Cases like Bachmann’s show why general acknowledgments of “selection bias” are so unsatisfying. Her extraordinarily bad ratings, compiled over so many statements, offer a window onto the particular ways in which factcheckers deviate from random selection as they choose claims to check every day. (It’s important to remember that this is part of a work routine. In my own experience watching and working with factcheckers, as field research for my dissertation, finding claims worth investigating day after day took real digging.)

There are a few obvious ways the factcheckers behave differently from a computer algorithm plucking claims at random from political discourse. First, they ignore statements that are self-evidently true. Second, they try to stay away from things that aren’t “checkable,” like statements of opinion. (Critics often accuse them of failing this test.) And finally, the factcheckers are susceptible to a constellation of biases tied up in journalistic “news sense.” They want to be relevant, to win awards, to draw large audiences. They pick statements that seem important, or interesting, or outlandish. They have a bias toward things that stand out.

In practice, then, while factchecking is non-random, it’s non-random in ways that do tend to support certain inferences—cautious, qualified inferences—about the state of public discourse. Factcheckers don’t reliably index the truth of all political speech. Some kinds of dishonesty won’t show up in their ratings at all. A Republican (or a Democrat, for that matter) could argue that, for instance, the president’s rhetoric misrepresents his policies in a way that’s far more significant than anything a Michele Bachmann might say.

But the factcheckers’ actual, working biases act as a reliable filter for a certain kind of crazy: a flagrant disregard for established facts. Collectively and over time, their ratings seem to offer a mechanism for identifying patterns of political deception at the extremes. If a cluster of prominent Republicans consistently draws the worst ratings, we can start to ask questions and draw conclusions about political discourse on the right. And if the counter-argument is that the factcheckers consistently ignore or downplay outrageous claims from Democrats, that case needs to be made on the merits.

It’s clear why the factcheckers don’t make pronouncements about which party is more deceptive. To do so would invite charges of bias, and run the risk of coloring their judgment of individual claims. But that doesn’t mean we should dismiss their data outright, or that we can’t draw reasonable conclusions from it over time.


Lucas Graves is an assistant professor in the School of Journalism and Mass Communication at the University of Wisconsin. Follow him on Twitter at @gravesmatter.