The New Gatekeepers

“Newsworthiness,” Trump, and the Facebook Oversight Board

April 26, 2021

The Facebook Oversight Board, a company-appointed body tasked with making content moderation decisions, is modeled on the American courtroom. The board aspires to objective standards and delivers irreversible judgments; borrowing legal vocabulary, it speaks of “appeals” and “eligible cases” and promises to issue explanations styled in the manner of judicial opinions. In the case of Donald Trump’s suspension from Facebook, soon to be decided, the board will evaluate a key concept from American jurisprudence and journalism: “newsworthiness.” The term is due for an update.

Facebook first established a policy of “newsworthiness” when the 2016 presidential campaigns were underway, announcing that speech on its platform was protected from removal if it was “newsworthy, significant, or important to the public interest.” The context was critical: at the time, journalists across the country had just been struggling with whether and how to report on the vulgar “Access Hollywood” tape, and Facebook had been criticized for censoring a Pulitzer Prize-winning photo from the Vietnam War. The “newsworthiness” exception provided Facebook with cover to allow content that would otherwise violate its community standards. In practice, to be newsworthy meant that people wanted to see something, and if people wanted to see it, the content was newsworthy; the logic was circular. Virality proved its own justification, even as Facebook’s algorithms determined what people saw in the first place.

In the wake of the election, however, criticism of Facebook rose to the point that the company sought out an “objective” content moderator, if only to mitigate public outcry. Thus emerged the Oversight Board. Conceived in 2018 and activated in October 2020, the board is charged with ruling on contested cases. This is a tricky proposition on a platform that is global in scope, given that laws and norms differ around the world. In its earliest decisions, the board has demonstrated a commitment to evaluating not only the specifics of the case at hand, but also the broader philosophical and policy implications. As it makes its determination on the removal of Trump—whose account was suspended in the wake of the January 6 Capitol insurrection—it has the potential to transform the concept of newsworthiness on social media. In essence, the board faces a question that has bedeviled newspaper editors for decades: If the public voraciously devours his every word, is Trump’s every action thereby newsworthy? Or should Facebook exert editorial judgment?

Underlying the board’s decision-making is an uncomfortable truth: the “harm” that Facebook worries about—the same harm it now cites to justify Trump’s removal—was made possible by the company’s own circular concept of newsworthiness. Trump’s midnight tweets and Facebook posts, which went viral via social media’s affordances and algorithms, generated headlines in the mainstream press; the press coverage, reinforcing the material’s newsworthiness, then permitted Facebook to exempt Trump’s posts from moderation. The terminus of this loop, as we know, was the Capitol insurrection, which led to the deaths of five people. But unless and until Facebook reforms its application of the newsworthiness exemption, the board will be forced to judge cases with reference to criteria that indemnify dangerous activity in the first place. Donald Trump’s Facebook behavior is on trial, but its accomplice is at large: Facebook’s policy.

Complicating matters further is Facebook’s ability to make or break news stories. If Facebook’s algorithms hadn’t relentlessly promoted Trump’s messages, they might not have gone viral, or at least not with the same alacrity. Just look at the past few months: by blocking Trump from their platforms, Facebook, Twitter, and other social media giants appear to have significantly reduced the media coverage of him. If social media made Trump newsworthy, we now know, too, that it can make him less so.

To escape from Facebook’s circular logic of “newsworthiness,” we have two choices: we can either redefine the term, or we can rethink how it is deployed as a justification in content moderation. The first option is tricky. Redefining a widely used term is hard, and would likely encounter resistance. Free speech advocates are understandably reluctant to enable platforms (or Facebook’s Oversight Board, for that matter) to determine what is in “the public interest”—a phrase that often appears alongside “newsworthy.” Many are quick to argue that the public interest is highly subjective or, alternatively, that a large profit-seeking corporation may not be appropriately positioned to evaluate it. When the courts have been asked to define “legitimate public interest” in the context of libel cases, they have largely demurred, preferring to rely on editors’ judgments and individual context. Unfortunately, that leaves us where we started. The second option—rethinking newsworthiness in content moderation—is more intriguing.

One possibility is to reverse Facebook’s current position. Consider the following thought experiment: A Facebook user writes something to his audience of ten million. What he says is borderline harmful, though it’s not a clear-cut case. Still, the speech is likely to be newsworthy—in our viral age, when ten million people begin talking about something it will probably become “news”—so the potential damage is high. Furthermore, once this speech has reached ten million accounts, it will become increasingly difficult to remove the message should it prove dangerous. If another Facebook user, one with an audience of ten, writes the same sentence, the speech is no different, but the potential harm is limited. If the message were to incite violence, the scale would be much more restricted, and Facebook would have an opportunity to intervene before it could go viral. In the latter case, the lack of newsworthiness should make the content more permissible, but in the current enforcement structure, paradoxically, the ordinary person with the small audience is more likely to be moderated and removed.

We should instead follow the “Peter Parker principle”: with great power comes great responsibility. As a user’s audience grows, his speech becomes more potentially newsworthy, and the scrutiny applied should increase correspondingly. Employing this logic would have immediate consequences for the Facebook Oversight Board’s Trump ruling. If even a tiny fraction of Trump’s followers interpreted his speech as an incitement to violence, the resulting violence would be many times more damaging than anything incited by an average Facebook user. Rather than making a newsworthiness exception for powerful public figures like Trump—as Facebook currently does, and the board may very well do—Facebook should do the reverse.
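To make the scaling concrete, here is a minimal sketch of how reach-scaled scrutiny might work inside a moderation pipeline. It assumes a hypothetical setup in which an upstream classifier assigns each post a harm score between 0 and 1 and the removal threshold tightens as the potential audience grows; the function names, constants, and scoring scale are illustrative, not Facebook’s actual system.

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    author_followers: int  # size of the audience the post can reach
    harm_score: float      # 0.0 (benign) to 1.0 (clear incitement), from a hypothetical upstream classifier

BASE_THRESHOLD = 0.9       # harm score needed to remove a post from a tiny account
MIN_THRESHOLD = 0.5        # even the largest accounts keep some margin for borderline speech
REFERENCE_AUDIENCE = 10    # audience size at which the base threshold applies

def removal_threshold(followers: int) -> float:
    """Tighten the removal threshold as the potential audience grows.

    With ten followers a post must look clearly harmful (0.9) to come down;
    with ten million followers the bar drops toward 0.5, reflecting the
    "great power, great responsibility" scaling described above.
    """
    scale = math.log10(max(followers, REFERENCE_AUDIENCE) / REFERENCE_AUDIENCE)
    return max(MIN_THRESHOLD, BASE_THRESHOLD - 0.065 * scale)

def should_remove(post: Post) -> bool:
    return post.harm_score >= removal_threshold(post.author_followers)

# The same borderline message (harm score 0.7) stays up for an audience of ten
# but comes down when posted to an audience of ten million.
small = Post(author_followers=10, harm_score=0.7)
large = Post(author_followers=10_000_000, harm_score=0.7)
print(should_remove(small), should_remove(large))  # False True
```

Under these assumed numbers, a borderline post survives when it can reach ten people but is removed when it can reach ten million, the inverse of the enforcement paradox described above.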

To be clear, the board doesn’t necessarily have to uphold Trump’s ban. It could apply heightened scrutiny to Trump’s speech and nevertheless conclude that the speech clears that higher bar. Trump is not the only world leader who has chosen to use the affordances of tech platforms to touch the line of incitement, and, as scholars have pointed out, even some at Facebook worry that the newsworthiness exception might privilege American perspectives to the detriment of Facebook users elsewhere.

Nor was Facebook the only platform that chose to suspend Trump’s accounts following the insurrection at the Capitol. Twitter and others have policies related to newsworthiness, and to content “relevant to the public interest.” These organizations must also reassess their approach to moderation; the Facebook Oversight Board’s decision is likely to factor into their decision-making.

At a minimum, the newsworthiness exemption at Facebook and on other social media platforms ought to be reevaluated for the digital age. Internet virality means that elites no longer decide what is news. The idea of “newsworthiness” as we once conceived it is either anachronistic or largely synonymous with virality. In either case, we ought to reconsider how we apply it. Anything else means relying on social media companies to determine its usage for themselves. This year’s insurrection, among other things, has demonstrated the risks of that approach.

ICYMI: Spies, Lies, and Stonewalling: What It’s Like to Report on Facebook

Renee DiResta and Matt DeButts are contributors to CJR. DiResta is the technical research manager at Stanford Internet Observatory, where she studies and writes about narrative manipulation, information operations, and trust and safety issues online. Her writing has appeared in The Atlantic, Wired, and elsewhere. DeButts is a Knight-Hennessy Scholar, journalist, and Communication PhD student at Stanford University. His writing has appeared in the Los Angeles Times, Foreign Policy, and elsewhere.