Illustration by Jackie Ferrentino

The Fact-Check Industry

Has our investment in debunking worked?

December 4, 2019

Personally I think the idea that fake news on Facebook, of which it’s a very small amount of the content, influenced the election in any way, I think is a pretty crazy idea. 

—Mark Zuckerberg, November 10, 2016

 

If the presidential election of 2016 revealed one thing more surprising to the mainstream press than the popularity of Donald Trump, it was just how profoundly dysfunctional their publishing environment was. Suddenly, it became clear that social media news feeds were populated with all manner of propaganda, falsehoods, and political advertising—there was a crisis of misinformation in a privately owned public sphere. After not too long, governments and technology platforms responded by embracing the power of fact-checking. 

As many parts of the journalism world shrank, fact-checking grew. According to the 2019 Duke University Reporters’ Lab census, 44 fact-checking organizations existed five years ago; there are 195 now. Angie Drobnic Holan, the editor of PolitiFact, has two explanations. “Firstly, the internet made it practical from a time point of view,” she says. “And, secondly, the increasing sophistication of political messaging made it necessary.” The checking these organizations do is not the internal prepublication review found at well-resourced magazines, but an independent business of verification, debunking, and correction of untruths that have already been published—and spread widely on the Web. The new age of fact-checking may therefore be interpreted as journalism adapting to the needs of its digital environment. But it might also be seen as a dismantling of journalism’s traditional role and a reconstruction of its workflow, flexing to suit the priorities and the ideologies of the tech companies now paying checkers’ salaries. 

At the beginning of the fact-checking boom, a handful of independent newsrooms pioneered the practice as an internet-focused discipline. Among the first were Snopes.com, in 1994, with a focus on urban myths and oddities; Spinsanity (now defunct); and then, in 2003, the more seriously political FactCheck.org. These ventures took advantage of the internet to publish instant rebuttals and corrections; FactCheck.org in particular sought to go beyond the “he said, she said” of stodgy campaign reporting. Both FactCheck.org and PolitiFact—which started inside the newsroom of the St. Petersburg Times (now the Tampa Bay Times), in 2007—were founded by journalists motivated to ditch stenographic political reporting and instead evaluate politicians with a neutral cool.
Holan remembers reading the first “birther” emails suggesting that Barack Obama was not born in the United States. “With hindsight, I do now wonder if that was a foreign influence campaign,” she says. “I remember thinking, ‘Who has the time to do this?’ Well, now we know exactly the type of operations that produce these things.” It wasn’t until later, when Facebook added a “share” button, that social media virality became possible and the number of unfounded claims took off. The “share” button first appeared in 2006 and went mobile in 2012. By the start of the 2016 presidential race, coordinated propaganda campaigns, legitimate political advertising, and even for-profit troll farms were working their way through an increasingly opaque, algorithmically driven social media ecosystem. 

Since then, newsrooms like BuzzFeed have successfully turned investigating misinformation on social platforms into a potent media beat. New techniques and technologies employed by enterprising investigative journalists everywhere from the New York Times to Bellingcat, an independent site, have challenged conflicting reports on subjects as diverse as the downing of Malaysia Airlines Flight 17 and the killing of Jamal Khashoggi, the Saudi journalist. People like Glenn Kessler, at the Washington Post, and Linda Qiu, at the Times, function as dedicated checkers on stories their colleagues are simultaneously reporting.
Outside newsrooms, money is pouring in to set up new types of organizations to combat misinformation. There is now a sector of fact-checking philanthropy, fueled by Google, Facebook, and nonprofit foundations. As a result, the Duke count noted, last year forty-one out of forty-seven fact-checking organizations were part of, or affiliated with, a media company; this year, the figure is thirty-nine out of sixty. In other words, the number of fact-checking organizations is growing, but their association with traditional journalism outlets is weakening.

 

After Donald Trump won the presidency and stories were reported—many of them by Craig Silverman, of BuzzFeed—about fake news having potentially influenced the election, a group of fact-checking organizations knocked on Facebook’s door. Facebook executives agreed to work with what is now known as the International Fact-Checking Network (IFCN), a nonprofit housed at the Poynter Institute, to recruit and manage an army of fact checkers who would flag misinformation in Facebook posts.

Checkers would either review stories fed through a Facebook dashboard—which appear at a rate of more than two thousand a day, with many duplicates dressed in different URLs—or they would choose their own method. There are now more than fifty-five member groups involved. Facebook doesn’t disclose how much it pays them, but Snopes, which stopped working with the company in February, has reported earning $100,000 from Facebook in 2017 and $406,000 in 2018. FactCheck.org has said that it received $188,881 from Facebook in 2018 and $242,400 this year.
In the project’s initial phase, fact-checking partners were frustrated by a lack of transparency in their relationship with Facebook. Professor Mike Ananny, of the University of Southern California, produced a Tow Center report highlighting one of the early concerns: “An almost unassailable, opaque, and proprietarily guarded collective ability to create and circulate news, and to signal to audiences what can be believed—this kind of power cannot live within any single set of self-selected, self-regulating organizations that escapes robust public scrutiny.” In addition, an emphasis on nonpartisanship opened the door to partners such as Check Your Fact, which is funded by the Daily Caller, a right-wing site that is itself adrift from truth. The Weekly Standard was welcomed in, too; last September, when it “fact checked” a story by ThinkProgress about Roe v. Wade, Judd Legum, the founder and former editor in chief of ThinkProgress, described Facebook’s fact-checking initiative as a “farce.” And this September, Facebook announced that it will not fact check material posted by politicians, even when it violates the company’s rules.

Baybars Örsek, director of the IFCN, credits Facebook for paying fact-checking organizations more consistently than anyone else does. But have the efforts been successful? “It’s certainly not too early to tell if this work is having an impact,” he says. “But we would like to be able to prove it is working. We still lack the data and the detailed picture to really prove the impact of the Facebook project.”

The chance might not come soon. “Fact-checking is appealing to a company like Facebook because it addresses an immediate problem on their platform,” Ananny tells me. “But it also fits with a worldview which is common in technology companies that says: you have two contesting claims and you investigate them and then the one you deem correct wins.” That shrinks other aspects of journalism that go beyond “right” and “wrong”—things like empathy and context.

 

There are almost as many different types of fact checker as there are kinds of misinformation. For Professor Lucas Graves, of the University of Wisconsin and the author of Deciding What’s True (2016), that’s a reason for skepticism. “There is at least a potential gap between fact-checking organizations and what they say their mission is, such as combating political rhetoric, and the reality that they are fighting all kinds of misinformation,” he tells me. That concern is complicated by their sources of funding. “It is undeniable that governments’ and technology platforms’ interest in this has been pushing the field to focus on misinformation,” Graves says. “It raises a lot of questions around both mission and methods.”

The development of automated fact-checking, for instance, made possible by natural language processing, presents a test case for how well algorithms can vet information—a task some journalists have been uneasy about handing over. Full Fact, based in the United Kingdom, has entered that arena. “We have seen three distinct waves of fact-checking,” Tom Phillips, the editor of Full Fact, explains. “The first was about claim review and just making the process of fact-checking transparent. The next phase was for fact-checking seeking accountability and actively pursuing corrections and retractions. The third wave is to make that work at ‘internet scale,’ which means really being a force multiplier for human expertise.”
Bill Adair, founder of PolitiFact and a Duke University professor of journalism and public policy, is similarly bullish about the need to “close the time and space gap” through automation. In doing so, he argues, checking will be more effective at reaching an audience; he is also working on ways of representing the results visually. “It is a whole area of uncharted territory in terms of what users respond to,” he says. “Do we use facts? Or do we use symbols? We need to do much more work on this.” 

Still, automated checking relies on the availability of verifiable sources—something that is complicated and human; checkers at those well-resourced magazines work long hours to parse and interpret information. And then there’s the matter of speed: in an ideal Silicon Valley process, data would be ubiquitously available and checkers would be equipped with tools to interrogate, sort, and rebroadcast in response to the breaking-news cycle. The reality is far messier.

There are people trying to clear the brush. After Silverman reported on the influence of fake news in 2016, he was joined by a reporter named Jane Lytvynenko. “I really spent my first few weeks in the job just compiling a massive database of partisan sites, which meant not only visiting every one of their websites, but every one of their Facebook pages and each of their social media accounts,” Lytvynenko recalls. “I fell quite far down the rabbit hole, and at the end of the hole there is often really quite unpleasant things.” 

Aspects of her work, Lytvynenko says, can often be boring and slow, but she’s good at it. Along with Silverman, she is starting to track potential disinformation campaigns ahead of the 2020 election. Sometimes she tweets out requests for help finding misinformation about major news events, and she’s been pleased to find that awareness of fact-checking seems to be paying dividends with the public. “When people send me pictures after a storm or hurricane now, they might say ‘I reverse-image-searched this,’ which is certainly new behavior,” she tells me. “So the debunking process really can work as media literacy, too.”

The coming challenge for Lytvynenko—and everyone—is the migration of fake news from public platforms like Facebook and Twitter to messenger groups on phone apps like Facebook Messenger, WhatsApp, and Signal. The new frontier involves targeted, ephemeral, closed sources of misinformation, and too few people understand how that works or where to look. Newsrooms can hardly afford to hire more employees; Lytvynenko’s own newsroom has been slashed by layoffs. “We have great training networks doing good work,” she says. “But if you asked if every news organization had one or two people who could take on that role, then I honestly don’t think you would have the numbers.”

Emily Bell is a frequent CJR contributor and the director of Columbia’s Tow Center for Digital Journalism. Previously, she oversaw digital publishing at The Guardian.