
Facebook, Twitter, and what news is fit to share

October 16, 2020

On Wednesday, both Facebook and Twitter took steps to limit the distribution of a news story from a mainstream publication, on the grounds that it was based on hacked emails and of questionable accuracy. Twitter actually prevented users from posting a link to the story, and in some cases prevented them from clicking on existing links to it, instead showing a warning that the story violated the company’s terms of service. Facebook didn’t stop anyone from posting a link to the story, but reduced its reach by tweaking the News Feed algorithm so that fewer users would see it.

The story was a New York Post report alleging that Democratic presidential candidate Joe Biden’s son, Hunter, introduced his father to the head of a natural gas company in Ukraine. The source? Emails allegedly retrieved from Hunter Biden’s laptop by a computer repair shop and given to Trump attorney Rudy Giuliani. Twitter argued that the story breached its policy against the distribution of content obtained through hacking, and said documents included with the story also contained an individual’s identifying information, in violation of its privacy rules. Facebook, meanwhile, said its policy against “hack and leak” operations required it to reduce the story’s distribution while third-party partners fact-checked it.

Unsurprisingly, these moves triggered an avalanche of censorship accusations from conservatives. Sen. Josh Hawley went so far as to argue, in a letter to the Federal Election Commission, that suppressing the story was a benefit to Biden and therefore amounted to a campaign-finance violation, and said the Judiciary Committee would vote on whether to subpoena Twitter CEO Jack Dorsey to explain his actions. Others, including Sen. Ted Cruz, argued that Facebook and Twitter had violated the First Amendment. Rep. Doug Collins said the blocks were “a grave threat to our democracy.”

Such arguments ignore the fact that Facebook and Twitter are themselves protected by the First Amendment, and also by Section 230 of the Communications Decency Act, which allows them to make content-moderation decisions without penalty. Many of the arguments are also clearly being made in bad faith, variations on the “platforms censor conservatives” canard that has been rattling around Congress for years without a shred of evidence.

At the same time, however, it’s true that the decisions made by the two platforms are problematic. For instance, Twitter’s policy of not allowing users to post “content published without authorization” is extremely vague, and could theoretically block not just questionable stories from the New York Post but also valuable investigative stories based on leaked content, including the Pentagon Papers and virtually everything from WikiLeaks. (Late Thursday, the company said it had revised its policy, and would now apply labels to links that refer to hacked material rather than blocking users from posting them.)

The incident also highlights a broader problem with both platforms: a lack of detail about their policies, and about how and when they are implemented. Dorsey admitted that the company didn’t do a good job of explaining itself when it first blocked the Post story, but the follow-up wasn’t much more helpful; while it said the story violated multiple policies, it offered little detail about any of them. Facebook, meanwhile, has a habit of simply pointing to its algorithm as though that absolves the company of any need to explain itself, and routinely promises things that never come to pass.


“There will be battles for control of the narrative again and again over the coming weeks,” Evelyn Douek, a lecturer at Harvard Law School, told the New York Times. “The way the platforms handled it is not a good harbinger of what’s to come.”

This episode is not only infuriating for those who would like some clarity about decision-making at these platforms; it also makes it that much easier for bad-faith actors to argue that the companies are doing something unsavory or illegal, which in turn leads to show-trial-style hearings that amount to a lot of sound and fury, signifying very little. If we are to trust these giant tech corporations to make decisions about what kind of journalism can be shared on their networks, we’re going to need a lot more transparency and a lot less hand-waving.

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.