Journalists depend on a free and unfettered internet to deliver news in their own communities and around the world. But, increasingly, they find themselves victimized by that same freedom. In some instances, the news they produce is drowned out by a flood of misinformation, disinformation, lies, and clickbait. In others, they are harassed and threatened by armies of vicious trolls, sometimes organized by governments, intent on hounding them into silence.
How on earth should journalists respond?
David Kaye, the UN special rapporteur for freedom of expression and a law professor at the University of California, Irvine, has some suggestions.
In Speech Police: The Global Struggle to Govern the Internet, out this week, Kaye argues that social media companies should be guided by international human rights law rather than “company values” or “community standards.”
“It’s time to put individual and democratic rights at the center of corporate content moderation and government regulation of the companies,” Kaye writes in the book’s introduction. “Around the world, the global standard is not ‘Congress shall make no law. . .’ but rather the Universal Declaration of Human Rights’s Article 19 protection, ‘Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.’”
Decisions about what posts on social media to leave up and which to take down—often called “content moderation”—should also be subject to review by an independent, non-governmental body made up of experts, Kaye suggests. Governments should regulate to ensure that social media platforms are transparent about their policies and decision-making and to ensure that there’s a legal process to decide the hardest questions. Content moderation, Kaye argues, is a “freedom of expression issue. But it’s also a clear press freedom question.”
This approach has a number of advantages. First of all, human rights are universal, global, and standardized. Everyone everywhere has certain human rights, and these are protected by law and practice. Nearly every country in the world has agreed to respect these rights through the ratification of treaties.
Companies can tap into a whole body of international law in making difficult decisions. For example, international human rights law makes clear the specific and limited circumstances in which speech can be legally restricted to protect the rights of others, national security, or public order.
Relying on human rights law—rather than a vague and subjective term like community standards—would allow companies to push back more effectively against repressive governments that want them to censor critical journalism. If companies become more transparent about their rules—and are open to oversight by some sort of independent body composed of experts drawn from different fields—then the public can participate in the debate about content removal (and journalists can report on it). Companies have resisted this approach for business reasons, Kaye argues. “We tend to see more speech as good, and that happens to align with the business model of the platforms, because more speech is more data and better training for algorithms.”
How might such a system work when the shoe is on the other foot—that is, when journalists are looking to remove content? I asked Kaye about two recent cases.
In a Twitter thread posted last week, Carlos Maza, a video journalist with Vox, complained that YouTuber Steven Crowder had repeatedly posted rants mocking Maza’s sexual orientation and ethnic identity. Each video rant incited a troll army, who swarmed Maza’s Instagram account with abuse. Maza has called on YouTube to remove Crowder from its platform, which the company declined to do. In a tweet, Kaye criticized YouTube for its lack of clarity and consistency in enforcing its own rules.
“If companies have rules that, instead of being based in terms of service, are rooted in human rights, they could better articulate to the public why harassment is a problem for the platform and why harassment is a problem for freedom of expression,” Kaye argues. In other words, the question YouTube should have sought to answer was not whether Crowder’s rants constituted harassment, but whether they were an attempt to deprive Maza of his right to free expression. An attempt to deprive Maza of his voice and audience could also impact the human rights of other online users, who have a right to access information under international law.
In a blog post published this afternoon, YouTube, which is owned by Google, announced it would remove thousands of videos expressing extremist views, including neo-Nazism and white supremacy.
I also asked Kaye about the case of Maria Ressa, the investigative journalist from the Philippines who is facing not just legal challenges but constant online harassment instigated by that country’s president, Rodrigo Duterte. “American social media technology platforms, once empowering, are now weaponized against journalists, activists, and citizens, spreading lies across borders,” Ressa noted in a speech last year.
Here, Kaye acknowledges, the issues are more complex. Facebook has become an essential source of information in so many places around the world, including the Philippines. The problem, one of its own creation, is that the companies have been so “opaque about their rule making and rule enforcement” that the public has no meaningful way of debating content moderation.
The risk, of course, is that well-intentioned efforts to improve content moderation could empower the companies to censor speech and give license to governments, particularly repressive governments, to compel them to do so. What I like about Kaye’s approach is that he sees a vital but limited role for governments—ensuring transparency on the part of the companies and providing access to the legal system to ensure the rights of users are protected.
“My book is not making the argument that any of this is easy, but we haven’t even started the conversation about what the rule of law looks like in this context,” Kaye notes. “Do we want companies making these decisions, or do we want to make these public decisions?”
Journalists should certainly be covering the debate, and media companies and press freedom organizations need to be more actively engaged. The future of journalism, and that of the internet as a shared global resource, depends on getting this right.