Nearly half a million people had already seen the video before Dan Ilic tried to upload it to Facebook. A self-professed “investigative humorist,” Ilic manages the Facebook page for Hungry Beast, an Australian comedy show. The video—titled “Is labiaplasty the new fad?”—had been circulating on YouTube since Hungry Beast released it in 2011, and Ilic wanted to promote it again after seeing mentions of labiaplasty in the news. But soon after clicking “Post,” he received a notice from the social site that the content had been removed and that he was banned from logging in for 24 hours. There was no additional explanation.
A few days later, Ilic posted an edited version of the video that began with its own notice: “This story has been edited to meet Facebook’s guidelines.” In the edited version, Mark Zuckerberg’s poorly photoshopped face obscures any potentially offensive material throughout the six-minute video.
Ilic’s experience highlights the changing nature of censorship. Until recently, Ilic’s choice to publish would have been an editorial decision, the kind news organizations make every day, and limited only by the law of the land. Today, it’s also limited by the laws of Facebook.
Social media platforms dominate today’s information ecosystem. More than 60 percent of Americans get their news on Facebook or Twitter, and that number is growing. News sites and social platforms have an increasingly symbiotic relationship—each looking to the other to boost traffic and business. As this relationship grows, social media’s content regulations will increasingly affect what publishers publish.
This marks a fundamental shift of power from government to private corporations, calling into question the means by which we protect, limit, or debate free speech. Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, says publishers should consider what this means for them. “The rules under which they’re publishing are no longer law,” says York. “They’re proprietary terms of service.”
York is also the co-founder of Online Censorship, an organization that offers users a place to report incidents of social media censorship, in the hopes of bringing more transparency to the process. The site, which launched in November 2015, is both a resource for social media users looking for recourse and a way to collect data and track censorship across multiple platforms.
Ilic’s video likely violated Facebook’s terms of service, which include a ban on genitalia—one of many filters Facebook uses to weed out inappropriate content. But it isn’t always easy to ascertain why content is censored. In March 2015, University of Waterloo student Rupi Kaur had a photo removed twice from Instagram. The photo showed Kaur in bed, fully dressed but with menstrual blood leaking through her pants and onto the sheet. In April, Crain’s New York was temporarily banned from Facebook for promoting a cover story about the legalization of pot in New York State. And in August, Facebook blocked links to a Center for Immigration Studies report on the large number of jobs being filled by immigrants, with a notice that the links included “content that other people on Facebook have reported as abusive.” In each instance, Facebook said the removal was an error and reinstated the content.
Some governments are taking advantage of social media’s newfound power, pressuring platforms to further national security interests. In the US, that’s meant enlisting social media companies to help fight ISIS and other forms of extremism. Earlier this month, a bevy of Silicon Valley firms—among them Google, Facebook, and Apple—met with national security officials at the White House to discuss ways to fight terrorism online. Items on the agenda included a discussion on how to make it harder for terrorists “to leverage the internet to recruit, radicalize, and mobilize followers to violence,” according to a memo published by The Guardian.
For the government, acting through social media can be a way to bypass due process. “If Congress passed a law trying to outlaw some of the content that the US government wants tech companies to delete and censor,” says Trevor Timm, director of the Freedom of the Press Foundation and CJR columnist, “it would be struck down as unconstitutional.”
York, of the EFF, says one of the goals of Online Censorship is to shed light on how content is moderated, as well as how social sites define terms like “hate speech” and “terrorism.” Social media’s editorial guidelines are largely a black box, inaccessible to the public since they belong to private companies. That lack of transparency means it’s unclear what factors go into the decision to take down a post. Among the data Online Censorship collects is the language of the removed post, whether the poster was an individual or an organization, and the reason given for the removal.
In one case, Facebook removed a cartoon that was critical of Israel, implying that the nation silences criticism by labeling it as anti-Semitic, and suspended the associated account for three days. It was unclear whether the post was removed because of the cartoon itself or the accompanying post, and whether it had triggered an algorithmic response or had been reported by users. Additionally, says York, it’s not always possible to know if content on Facebook is taken down due to a government request or because a user violated the social giant’s terms of service.
Sarah Myers West is a strategist at Online Censorship, and part of her job is to review incoming reports. An especially concerning incident, she says, came from Mariana Diaz, a writer and lawyer in Venezuela. In November, Diaz had posted to Facebook a short docudrama by a Venezuelan artist about the conditions of political prisoners in a Caracas prison known as “The Tomb.” But she noticed soon after uploading the video that it had disappeared. When she tried to reupload it, she got a message that “the action was disallowed,” but was given no further explanation.
West says the report stood out because content is usually removed once it’s been posted, but in this case the link to a third-party site was itself blocked.
News organizations are increasingly publishing directly to social media, using native platforms like Medium, Snapchat Discover, or Facebook’s Instant Articles. Cory Haik, who was the director of emerging news products at The Washington Post when the paper opted in to Instant Articles, says that’s the future of publishing. “If I can project five years from now, referral traffic won’t be a thing,” she says. “It will be native distribution of content.”
Haik, who is now at Mic, believes publishers still have leverage with the social giants, adding that she’s witnessed “a really great exchange between publishers and platforms in the last year.”
York, on the other hand, hopes newsrooms will find alternatives to native platforms. “It’s essentially ceding power to a corporation both in terms of privacy and speech,” she says.
Timm, of the Freedom of the Press Foundation, agrees that publishers should avoid relying too heavily on any one platform; otherwise, playing the role of watchdog becomes difficult. Newsrooms “need to be aggressively reporting on the practices of Facebook,” says Timm.
Unlike individuals who feel they have been unjustly censored and have no way to appeal, journalists and news sites have a public platform, one that can return the conversation about what should and should not be protected speech to the public sphere.
As God and the founding fathers intended.