Tow Center

Checking in with the Facebook fact-checking partnership

April 4, 2018
 

Facebook and five US news and fact-checking organizations—ABC News, the Associated Press, FactCheck.org, PolitiFact, and Snopes—created a partnership to combat misinformation shortly after the 2016 US presidential election. When it was launched, it was variously seen as a public relations stunt, a new type of collaboration, or an unavoidable coupling of organizations through circumstances beyond their exclusive control.

Over a year later, how has the partnership fared? In a new report for the Tow Center for Digital Journalism, I look at how this partnership works and what it can tell us about how news organizations and technology companies collaborate. My focus here is neither “fake news” nor fact-checking, but the “partnership press” that emerges when technology companies and news organizations team up.

The partnership centers on managing a flow of potentially false stories. Through an online dashboard accessible only to partners, Facebook curates a list of these stories. The news organizations then independently choose which stories to debunk and add their fact-checks. Facebook uses those fact-checks to change how it algorithmically surfaces stories for its users.

What I uncovered is a partnership in which news organizations negotiate their professional missions with Facebook’s, and struggle to understand the coalition’s impact. Both Facebook and the news organizations want to improve the quality of online media. But while the fact-checkers largely define their motivations in terms of public service and journalistic ethics, Facebook is looking to adhere to its official community standards. There is an ongoing struggle within the partnership to define “fake news” in a way that doesn’t leave most of the classification power with Facebook, and a general unease among partners about how opaque and unaccountable much of the arrangement is—both within the partnership and to outsiders.

From August 2017 through January 2018, I conducted a mix of on-the-record, background, and off-the-record interviews with six senior people, representing four of the six partners. For the purposes of the report, I have granted these sources anonymity. A report draft was shared with all interviewees, who were invited to comment on its findings before publication. No one responded with feedback.

READ THE FULL REPORT HERE.

Origin stories

The partnership has no single origin. Some people I spoke with saw the 2016 election as its impetus; others highlighted the International Fact Checking Network’s open letter to Facebook, which urged the platform to start a transparent conversation about accuracy in the news ecosystem. Some cited Facebook’s admission that it may have played a role in spreading election misinformation, and its employees’ desire to find solutions and make public amends. Still others saw it as a quick public relations win.

Regardless, many people from the partner news organizations saw themselves as public servants, trying to “empower the people who are being talked over,” “reduce deception in politics,” and create “example-driven” systems that reduce the spread and impact of “fake news.” Many joined the initiative because they wanted to understand the “provenance” of fake news, “empirically examine the worst of the worst,” and prevent future failures. There were those who doubted Facebook’s agenda, saying that the company was using fact-checkers for “cheap and effective PR [public relations]” and enrolling them in the company’s experiment-driven culture. “It’s been tinkering with the democratic process for years,” one person told me. “It’s ugly.”

Priorities

The fact-checking dashboard largely reflects Facebook’s priorities, not fact-checkers’, many said. It “provides links to stories that have been flagged by users, or maybe algorithms, I don’t know,” one fact-checker noted. In the dashboard interface “[we can sort] those stories by popularity. We’ve asked [Facebook] a hundred ways to Sunday what popularity means. We don’t know the mechanism they use to determine popularity.”

Another suspected—but could not prove—that stories with high advertising revenue potential would never be listed as popular because “sometimes fake news can make money,” and Facebook likely wouldn’t want fact-checkers debunking lucrative stories. True or not, partners didn’t view the dashboard as a neutral tool, but as a place of clashing priorities, mistrust, and suspicion.

They also wanted more from it. One said, “We should be doing work on memes. The partnership doesn’t address memes, just stories. We’ve had these conversations with Facebook, it’s something they say they want to do but haven’t done it.” Another wanted to fight false advertising, to debunk the paid, partisan content circulating on Facebook. Several questioned whether Facebook was hiding stories from them. “We don’t see mainstream media appearing [in the dashboard]—is it being filtered out? Is it not happening?” Another said, “We aren’t seeing major conspiracy theories or conservative media—no InfoWars on the list. That’s a surprise.”

Access and publicness

Most were critical of the lack of transparency both within the partnership and to outsiders. Several fact-checkers stressed that they wanted to talk openly with me, but could not because Facebook had banned them from speaking publicly—an irony, they noted, given Facebook’s interest in transparent, verifiable, and publicly trusted accounts. Many interviewees took personal and organizational risks just by speaking with me off the record. Others noted that even if they were willing to break non-disclosure agreements and speak publicly, they still lacked detailed knowledge about the partnership and couldn’t offer deep insights if they wanted to.

Still others discussed public accountability beyond the partnership itself, and decided to speak with me because the partnership needed outside scrutiny. As fact-checkers, they saw their public responsibilities differently from how Facebook did and worried that the secrecy surrounding the partnership was harming its legitimacy and the trust that their profession relies on.

Leverages and types of power

Facebook initially offered partners no compensation to join the network. Later, in June 2017, the platform began offering funding. Despite public reports that “in exchange for weeding through user flags and pumping out fact-checks, Facebook’s partner organizations each receive about $100,000 annually,” several partners denied accepting money. They said that taking payment would compromise their independence and suggest that they were in Facebook’s service. Others gladly took it. One person told me, “Our model is that if we do the work, you need to buy it. Facebook is using it, and benefitting from it, so we should be compensated.”

Paid or not, many saw Facebook as controlling the phenomenon that they care about most: helping people debunk misinformation and access high-quality news. Facebook supplied the raw goods (false stories), was the gatekeeper of popularity, and controlled which fact-checks reached which audiences. It was an invitation they couldn’t refuse.

Partners located their own leverage, though, in their adherence to the International Fact Checking Network’s principles and in the reputations of their long-standing brands, which they felt insulated them from the charges of self-interest and bias often leveled at Facebook. Fact-checkers also cited their power to self-organize. Although they rarely coordinated with other organizations and knew little about how the others worked with the dashboard, at one low moment in the partnership several partners banded together to discuss its future, and to tell Facebook they felt taken for granted, used as public relations cover, and ignored. None would tell me, though, whether this confrontation changed anything.

Categories and standards

Two types of categorization work drive the partnership. The first involves how misinformation is identified, distributed among partners, and fed back into Facebook’s proprietary systems. Despite claiming that fact-checkers define “fake news,” Facebook controls if, when, and how stories appear on the fact-checkers’ dashboard. Partners derided the dashboard’s inscrutable categories: “Facebook doesn’t say where the [dashboard] list comes from or how it’s made.” One person noted that “people flag stories at some rate, but I don’t know what the dashboard’s threshold is.”

In the second type of categorization work, fact-checkers have more power: They independently choose which stories to work on, how to debunk them, and how to label them as false. Partners largely did not coordinate their work, saying, “I think our dashboard is the same as other organizations’ but I’m not sure,” and “We tried to do a Slack channel [amongst partners], but it didn’t really work as people got too busy.” This local control over standards both suited partners—“we’re not really interested in what competitors are doing”—and aligned with Facebook’s public statements that multiple fact-checks are needed before a story is classified as disputed. One partner said they “like multiple fact-checkers with multiple methods” because diversity of methods made the partnership’s collective judgment stronger and more defensible.

Managing scale

One reason Facebook created the dashboard was to move some of the work of mass-scale judgment to news and fact-checking organizations. To do this, the platform needed to restrict the flow of content it sends to fact-checkers, reduce their workload, and avoid overwhelming their relatively small organizational capacity. The need and responsibility to, as one partner put it, “deal with scale” is inescapably Facebook’s. One person added, “Facebook knows how to deal with scale, we don’t.”

However, many partners described unease with this arrangement, expressing confusion about how the dashboard represents the large number of potentially false stories. One said there were “2,200–2,300 stories in the queue right now,” but that “about 75 percent of them seem to be duplicates” (other partners agreed with these numbers). It was hard to know the true scale of the misinformation problem because, they said, although the dashboard had many stories, their URLs often looked similar and it was unclear which stories were circulating uniquely.

One person summarized the confusion over the scale of misinformation, saying, “I can’t entirely answer the question [about scale]. I’d need to identify how many stories could be done versus how many we’re doing. I don’t know what the entire universe is, we just do what we’re able to do. My sense is we could do more if we had more resources. . .I don’t know what the full scope is.”  

Timing

Almost everyone I talked to stressed a desire to get ahead of misinformation by debunking stories before they could travel far and wide. One fact-checker said that their work within their own organization was mostly prompted by readers’ questions, but “by the time we answered the question, it was usually days after the [misinformation] had gone viral. We would wait until something had critical mass before we wrote about it, which felt too late.” Many described the need to anticipate virality, with one person seeing it not only as an important way to minimize misinformation’s impact, but also as a way to reduce fact-checkers’ workload.

Partners made up folk theories about time, and seemed resigned to ignorance about the dashboard’s rhythms: “We don’t know how time works on it, but it seems pretty quick—an event will happen and then we see it. Probably a lag of 24 hours. We look at it and you can’t really tell; we’re told it’s updated every day.”

Fact-checkers commonly expressed skepticism about whether the dashboard’s timing aligned with the kind of urgency they thought fact-checking required. One cautioned against placing too much emphasis on speed and anticipating virality, saying, “It’s never ‘too late’ because fake news stays around for a long time. . .These stories have long, long, long tails. It doesn’t matter if it takes fact-checkers a few days to do their work [because] whenever there’s an opportunity for these stories to reappear, they will.”

Automation and ‘practice capture’

No partner worried about, or hoped for, automated fact-checking, but several described dynamics that suggest a kind of proto-automation. To populate the dashboard with potentially false stories, Facebook combines user feedback (people flag stories they see as false) and computational processing (algorithms sense misinformation). As fact-checkers interact with the dashboard—picking or ignoring stories, prioritizing content, judging story similarity, flocking to particular stories, appending rebuttals—they help refine Facebook’s algorithmic guesses about how to define and detect misinformation, guesses that are then used to further populate the dashboard. This feedback loop is critical to iterating and refining the dashboard’s “example-driven” design. Fact-checkers teach machines which stories are false, which are relevant, which are exemplars of misinformation, and which can safely be ignored. Their work is modeled and operationalized in a process I call “practice capture.”

It is unlikely that fact-checkers’ subtle judgments will be fully automated any time soon. More likely, as one partner put it, Facebook will gradually become better at confidently modeling fact-checking work, turning to fact-checkers for particularly difficult “edge cases” and for moments when expert bad actors invent new misinformation techniques. Fact-checkers will not disappear and algorithms will not displace them, but there is a very real potential that Facebook—through its dashboard feedback loop and ever-increasing amount of training data—will gradually narrow the space within which fact-checkers work.

Impact

It was often hard for partners to tell what effects, if any, the partnership was having. One said, “I just can’t answer that question [about impact] without data that only Facebook has.” Citing a leaked email in which Facebook claimed that a “news story that’s been labeled false by Facebook’s third-party fact-checking partners sees its future impressions on the platform drop by 80 percent,” a number of partners expressed skepticism, saying: “I don’t know how that number is calculated” and “we have no public proof of that” and “I can’t fact-check that claim, and that’s a problem.”  

From Facebook’s perspective, the partnership is successful if it reduces misinformation exposure, removes content that violates community standards, and contextualizes information that does get through. As partners work within these measures of success, they have little power to evaluate them independently and so implicitly accept Facebook’s metrics.

Some partners devised their own success metrics, treating increases in workload and funding as signs of progress. “We’re doing more now than we have been able to do, which is good. It seems to be effective enough.” Others said the partnership’s continued existence was itself evidence of success, citing the very fact that fact-checking organizations were partnering and cooperating with Facebook. Even setting aside Facebook’s role, several fact-checkers said they were proud that the partnership demonstrated fact-checking is a mature field; it has “raised the profile of fact-checking [and is] helping to reach new people and educate in new ways.”

A final aspect of impact focused on what partners think should happen to misinformation they identify. Echoing a marketplace model of speech in which the answer to bad speech is more speech, all partners agreed that misinformation should be removed from Facebook’s News Feed, but that it should continue to exist on web pages that the platform did not surface. Success was not seen as the eradication of false news outside of Facebook or as the broader media system’s health, but as an online marketplace of claims and counterclaims.

Change management

Finally, I consistently heard concern from partners about the management of the partnership itself. While several people repeatedly expressed appreciation for Facebook’s attention to their work, and for the resources the platform provided that allowed them to hire staff and grow their capacities, none of the members was satisfied with the partnership’s current state.

Several complained about Facebook’s early, repeatedly unfulfilled promises to hold regular summits, have detailed meetings with partners, and share metrics data and infrastructure specifics. One partner, who expressed little dissatisfaction beyond a desire to know more about the impact of fact-checking, essentially said that, except for information exchanged through the dashboard, the partnership was virtually nonexistent: “I don’t have much back and forth, I don’t really hear from them.” While more recent moves by the company may alleviate some of these concerns—a roundtable was held in February 2018—complaints about the company’s responsiveness were widespread.

Only one partner departed significantly from others and argued for a more fundamental, existential change. Questioning the partnership’s ability to regulate itself, they said: “Facebook needed a [regulation] team from the outside. . . It needs an outside government to monitor them. . .Break up the company, they should not be going into news.”

Going forward

This partnership is still very much in flux as it decides which parts of it will be known to itself and others, what categories and standards it will assume, which visions of success and impact will guide its design, how skills will be shared and challenged, and how it will govern itself. The partnership focuses on fact-checking, but its blend of public-driven journalism and proprietary technology is not unique.

The partnership’s secrecy is one of its key challenges. Several people generously shared their reflections, but would not go on the record for fear of breaking confidentiality agreements. Many fact-checkers refused to talk at all, even off the record, and instead pointed me to public relations statements. Still others gave largely placating interviews that echoed marketing materials, stonewalling when it came time to offer original reflections.

I appreciate the challenges of such positions and understand the hesitation to talk with a researcher, but the interview process offered a kind of meta-conclusion: the partnership has a great deal of power, but is accountable neither to its own partners nor to outsiders. All partners are technically private companies without official obligations to discuss internal practices. But how can both news organizations and technology companies reconcile this lack of accountability with their service to the public?

This research was generously funded by the Knight Foundation and Craig Newmark Philanthropies.

Update: In an email to CJR, a Facebook spokesperson writes that ABC News had worked as a US partner for only two weeks before being replaced by The Weekly Standard. The company says all current partners are paid for their work, though it does not disclose how much, and adds that in December it changed its policy of requiring multiple fact-checks before it considers a story “disputed.”

About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.
