Tow Center

The new ‘billion-dollar problem’ for platforms and publishers

March 6, 2018

As social platforms like Facebook and Twitter get bigger, so do their problems. One of the most urgent challenges platforms now face is balancing the quality of the content they host with their commitment to providing a platform for everyone: from stemming the spread of misinformation, to addressing harassment and abuse, to taking down offensive or illegal content, social platforms are under public pressure to act.

This is a complicated calculation, as content moderation involves untangling numerous issues: information integrity, veracity, content quality, and legality. But platforms’ content moderation strategies—how they decide what stays up and what gets taken down—are enforced inconsistently, and are often opaque to the public whose conversations they shape. To top it off, users, platforms, and various governments all want something different, making it difficult to arrive at a consensus on what, exactly, moderation should look like.

Last week, a group of technologists, journalists, legal experts, and academics gathered at USC’s Annenberg School for Communication and Journalism to discuss the ethics of content moderation. Hosted by the Tow Center for Digital Journalism and the Annenberg Innovation Lab, “Controlling the Conversation: The Ethics of Social Platforms and Content Moderation” featured a series of discussions on the responsibilities of platforms and publishers in managing online conversations. Below are the takeaways.

One challenge for social media platforms is the sheer amount of content they must police. UCLA’s Sarah Roberts, who has spent eight years studying social media and user-generated content, pointed out that many of the problems we’re seeing now could have been avoided if social platforms had scaled more responsibly and transparently. Moderation of tricky content, she noted, should have been built into their products rather than treated as an afterthought. If moderation mostly happens in response to public outcry about particular pieces of content, platforms will always be playing a game of catch-up. Even as platforms have attempted to define community guidelines, their lack of transparency around what is taken down and why has only increased frustration. Users remain in the dark about the reasoning behind decisions to remove content or suspend accounts.

Curbing controversial content is often at odds with both the business incentives and the ideological roots of large platforms founded with free expression on the open Web in mind. However, the past year and a half has seen platforms publicly commit to combatting harassment, misinformation, and abuse, which one representative of a large tech firm told Roberts was a “billion-dollar problem.” Just last week, Twitter’s Jack Dorsey solicited recommendations from researchers on increasing “the collective health, openness, and civility of public conversation around the world,” vowing to hold Twitter “publicly accountable toward progress.” (This is not the first time Twitter has attempted to curb abusive content, and past efforts have seen mixed results.)

One way platforms are attempting to identify and remove problematic posts is through artificial intelligence. And while some form of automation is necessary at this scale, many attendees emphasized that AI should augment, rather than replace, the work of human moderators. Abhi Chaudhuri, a product manager for Google’s Conversation.ai, a research project that aims to “improve conversations online,” works with communities to understand their needs and to build tools that curb hate speech and toxic content while promoting “diversity of opinions” and empathy. Chaudhuri’s team helped moderators at The New York Times cluster similar comments in order to speed up the review process.
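For a rough sense of what clustering similar comments can look like, here is a minimal sketch in Python. It is not the Conversation.ai or Times system; it simply groups near-duplicate comments using TF-IDF vectors and cosine similarity, and the example comments, 0.5 threshold, and greedy grouping are illustrative assumptions.

    # Illustrative sketch only: not the Conversation.ai or Times implementation.
    # Groups near-duplicate comments so a moderator can review them together.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    comments = [
        "Great piece, thanks for reporting this.",
        "Thanks for reporting this, great piece.",
        "This article is completely biased.",
        "Totally biased article.",
    ]

    # Represent each comment as a TF-IDF vector, then compare all pairs.
    vectors = TfidfVectorizer().fit_transform(comments)
    similarity = cosine_similarity(vectors)

    # Greedy grouping: a comment joins the first earlier comment it closely
    # resembles; otherwise it starts a new cluster. The 0.5 threshold is an
    # arbitrary choice for illustration.
    labels = []
    for i in range(len(comments)):
        match = next((labels[j] for j in range(i) if similarity[i, j] >= 0.5), None)
        labels.append(match if match is not None else max(labels, default=-1) + 1)

    for label, comment in zip(labels, comments):
        print(label, comment)

In practice, a moderator would then review each cluster of near-duplicates once rather than reading every comment individually.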

Platforms, however, are just one part of the equation. Speakers at the conference proposed a variety of ways users, publishers, and governments can improve the quality of online conversations.

While Europe has led the charge on platform regulation (think of the “Right to Be Forgotten” and Germany’s recent crackdown on hate speech), America continues to operate with the faith that, as Tow Director Emily Bell said, “the free market will protect a plurality of media voices and elevate the good over the bad.” And as platforms continue to scale globally, they will need to come up with guidelines consistent with regulations that apply around the world, such as human rights laws.

As for publishers, the Coral Project’s Andrew Losowsky warned that it is “dangerous for media companies to rely on third parties for direct communication with [their] audience” and recommended building commenting systems that do not rely on tools controlled by Facebook (something the Coral Project provides). As publishers look to alternative strategies for community management, a more radical option may involve models of collective ownership. Caroline Sinders, who researches online harassment at the Wikimedia Foundation, explained how Wikimedia provides users with volunteer and paid opportunities to work on community projects, giving them equity within a larger system. New platforms for journalism like Civil, built on the Ethereum blockchain, offer journalists and audiences models for “open governance.”

Finally, moderation workers should have a seat at the table. Moderators are located around the world; some work at offices in Silicon Valley, and others in call center environments in places like India and the Philippines. Their work tends to be low-pay, low-status, and mentally taxing, as moderators see the very worst of the internet. As The Atlantic’s Anika Gupta said at the event, managers must design workflows that take all of this into account (so that the most difficult parts of the job are divided evenly) and provide appropriate mental health resources to prevent burnout.

The challenge for companies and governments, in the end, is to approach the issues raised by moderation thoughtfully and to make meaningful change without acting in haste.

George Civeris is a research fellow at the Tow Center. Follow him on Twitter @georgeciveris.