Tow Center

With a shrinking user base and executive exits, a watershed moment for Twitter

August 8, 2016

With the announcement last week that Vice President of Communication Natalie Kerris is leaving Twitter after only six months on the job, the company’s public relations troubles are in the news again. But does Twitter have a PR problem, or a problem with how the site itself is working?

While the reason for Kerris’s departure remains unclear, her exit comes after a season of falling stock and a shrinking user base, despite a concerted effort at a better communications strategy. Over the past year, the company has cycled through multiple executive changes and new policies. In June 2015, Twitter shareholder Chris Sacca wrote that “Twitter has failed to tell its own story.” A month later, then-communications head Gabriel Stricker was ousted. In February, Kerris came on board, and Twitter launched its Trust and Safety Council, an advisory group, now made up of about 60 nonprofit organizations, aimed at developing Twitter policies around online harassment.

Users have also been vocal about the abuse they must tolerate on the platform. Jonathan Weisman of The New York Times left Twitter because of the anti-Semitic language directed at him. Jessica Valenti called it quits just last weekend after threats were made against her daughter. The most high-profile harassment case involved actress Leslie Jones, who threatened to leave Twitter because of the abuse she received from a mob incited by alt-right Breitbart tech editor Milo Yiannopoulos.

Yiannopoulos was permanently banned from Twitter in July, following the Jones incident, but questions were raised about why it took Twitter CEO Jack Dorsey’s involvement to trigger the ban. On a recent episode of the podcast Rocket, Christina Warren said: “What bothers me about this is not the response…but that it took a famous person.”

Video game developer Brianna Wu, a frequent recipient of horrific harassment and one of the targets of Gamergate, adds that there are people at Twitter working on the issue of online abuse who “care very deeply”—and some recent changes have been made on the platform. “A few years ago, Twitter knew absolutely nothing about transgender people,” she says, citing one example of how the company has progressed. “I’ve seen Twitter talk to people in the transgender community, learn about policies, update them, and change the results of reporting.” But while progress is being made, she says, “there’s so much that Twitter just doesn’t do.”

Twitter is also beginning to stand out among social platforms for not doing enough to combat online harassment. Just last week, Instagram released a tool giving users greater control over their comment sections. Twitter, meanwhile, has been criticized for focusing more on new buttons and features than on making the platform feel like a safe space.


So has Twitter “failed to tell its own story”—and does it need to get better at communicating what is working—or has it failed to prioritize safety enough?

 

How content moderation works

Many complaints against Twitter have been made on the grounds of inconsistency; even if Twitter has rules in place against threats, these rules are not enforced across the board.

Twitter does not make public how many abuse reports it receives, how many people work on its moderation team, or how its moderation is structured. (At the time of publication, Twitter had not responded to requests for comment.)

Usually, human content moderation works in conjunction with automated moderation. Most platforms rely on users to flag content to trigger moderation—in other words, platforms don’t police content before someone flags it as offensive. Given the volume of content platforms must monitor, automation has to play a central role. But it is unlikely that any platform will ever rely on 100 percent automated moderation. Humans are simply better at interpreting context, and total automation poses the risk of mistakenly censoring content that is in the public interest.
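To make that division of labor concrete, here is a minimal, purely illustrative sketch of a flag-triggered pipeline, written in Python. The classifier, threshold, and term list are hypothetical stand-ins, not anything Twitter has disclosed.

```python
# Illustrative only: a flag-triggered moderation queue in which automation
# handles clear-cut cases and everything ambiguous falls to human reviewers.
from dataclasses import dataclass
from queue import Queue

@dataclass
class FlaggedPost:
    post_id: str
    text: str

# Hypothetical stand-in for an automated abuse classifier.
HYPOTHETICAL_BANNED_TERMS = {"example-slur-1", "example-threat-phrase"}

def automated_score(post: FlaggedPost) -> float:
    """Return a confidence in [0, 1] that the post violates the rules."""
    hit = any(term in post.text.lower() for term in HYPOTHETICAL_BANNED_TERMS)
    return 1.0 if hit else 0.3

human_review_queue = Queue()  # humans interpret the context automation misses

def handle_flag(post: FlaggedPost) -> str:
    """Nothing happens until a user flags the post: moderation is reactive."""
    if automated_score(post) > 0.95:
        return "removed automatically"
    human_review_queue.put(post)
    return "queued for human review"
```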


Hany Farid, a computer scientist at Dartmouth, is one of the original developers of PhotoDNA, a tool that allows companies to recognize and remove problematic images from their networks. It works by identifying the digital signature of images that are flagged by users and determined to be graphic. “Once identified,” Farid says, “this content can be quickly, accurately, and automatically detected, thus preventing any future upload. Twitter, Facebook, Google, Microsoft, etc. are all using this technology.” The tool was originally intended for removing child pornography and preventing its repeated dissemination, but Farid says it can be extended to “any content deemed in violation of the terms of service of a social media platform.”
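As a rough illustration of the workflow Farid describes (register a signature once content has been confirmed as violating, then block any upload that matches), here is a minimal sketch. PhotoDNA itself computes a robust perceptual hash that survives resizing and re-encoding; the exact-match SHA-256 below is only a placeholder to show the flow.

```python
# Sketch of signature-based re-upload blocking. The hash function here is a
# stand-in; PhotoDNA uses a robust perceptual hash rather than an exact match.
import hashlib

known_violation_signatures = set()  # signatures of content already reviewed and confirmed

def signature(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

def register_violation(image_bytes: bytes) -> None:
    """Called once a flagged image has been reviewed and deemed in violation."""
    known_violation_signatures.add(signature(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Block any future upload whose signature matches known material."""
    return signature(image_bytes) not in known_violation_signatures
```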

While PhotoDNA has recently been extended to video and audio (in collaboration with the Counter Extremism Project), live video (such as Facebook Live) still poses a major challenge: “In my opinion, we do not have a fast, accurate, and automatic technology for reviewing live video for inappropriate content,” Farid says. “This is an incredibly difficult problem and one that is going to require much more research.”

This kind of automation is only one way of moderating content. Twitter is testing a new method of moderation on Periscope, its live-video platform, that operates, in effect, like community policing. Here, the tried-and-true method of user flagging is taken one step further: when content is flagged, it is shown to other users, who are asked to vote on whether they find it offensive.
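A rough sketch of how that voting step might work follows; the jury size, threshold, and function names are illustrative assumptions, not Periscope’s actual parameters.

```python
# Illustrative community-vote moderation: a flagged comment is shown to a
# small random jury of other viewers, and action is taken only if enough
# of them agree it is offensive.
import random

def poll_jury(comment, viewers, ask_vote, jury_size=5, threshold=0.5):
    """ask_vote(viewer, comment) should return True if that viewer finds the
    comment offensive. Returns True if the jury's verdict is 'offensive'."""
    if not viewers:
        return False
    jury = random.sample(viewers, min(jury_size, len(viewers)))
    votes = [ask_vote(viewer, comment) for viewer in jury]
    return sum(votes) / len(votes) >= threshold
```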

The problem with flagging, write Kate Crawford and Tarleton Gillespie in an article on the topic, is that flags carry little to no information about context—and that context must be assessed by others. The side effect of asking for more human moderation, of course, is that more people then have to look at potentially offensive content. The news industry has become increasingly concerned about trauma among journalists who are required to look at such images in their work—we should have a similar concern for moderators looking at the worst of what finds its way online. A couple of pieces on the work of content moderation have highlighted this human toll. Adrian Chen’s 2014 piece in Wired suggests it is a larger industry than many of us realize, with possibly over 100,000 laborers.

 

Assessing Twitter’s content moderation

Some of the best information we have on Twitter’s responses to abuse reports comes from a study by J. Nathan Matias and others, based on data collected by a nonprofit group called Women, Action, & the Media (WAM). WAM held “authorized reporter” status with Twitter, giving the organization a special channel through which “to identify and report inappropriate content on behalf of others.”

WAM, in turn, set up a way for people to report abuse to it directly on its website (which Twitter users might do instead of, or in addition to, reporting to Twitter). While WAM’s sample is considerably smaller and more specific than the full set of abuse reports Twitter likely sees, and the group held a privileged status when recommending that posts be taken down, it had one advantage: Twitter reported back to WAM on how it responded to each of WAM’s reports. In this way, WAM was able to discern something about which types of reports Twitter responds to, as well as how it responds.

CJR posted a summary of some of WAM’s numbers last year. One of the most interesting findings is that most harassment was reported by bystanders, not by the targets themselves.

The report also says that Twitter did, in fact, take action on WAM’s requests for removal more often than not, but that Twitter’s response was tied to the type of harassment flagged: “In cases reported to involve threats of violence, Twitter took action to suspend or warn accounts in 16 cases, over three times more often than they declined (5). Twitter took action in more cases of hate speech (30) than they declined (20).” However, WAM also found that Twitter was unlikely to take down content it reported related to doxxing—the sharing of private information. And, importantly, WAM “did not find any relationship between follower counts, account age, and Twitter’s likelihood to take action.”

WAM also found that “ongoing harassment was a concern in 29% of reports”—a troubling statistic, and one that suggests online abuse can’t be solved by flagging alone. WAM’s recommendations include asking Twitter to clearly define harassment, expand the ability of users to filter out unwanted messages, and diversify the company’s leadership.  

 

Making the system better

Content moderation is mostly a reactive process, but there are measures that Twitter and other platforms can take to prevent certain types of harassment before they happen, as well as give users the tools to protect themselves.

There’s a distinction between harassment and abuse, says David Riordan, CTIO at the Brown Institute for Media Innovation. Companies like Twitter are very good at anticipating and identifying abuse of their own systems—and they should get better at doing the same for abuse of users. Just as companies assess weaknesses in their systems as potential sites of abuse, they should anticipate potential sites of harassment. Such a protocol should be “in their playbook,” Riordan adds.


On the other hand, companies have to be careful not to make the field too competitive. “The challenge is dealing with technical strategies for abuse and harassment so that it doesn’t trigger the arms race,” says Riordan. If Twitter and Facebook start competing over content moderation, the up-front cost of moderation could rise, which could discourage other platforms from implementing their own systems. This is not something we want, especially in the realm of threats and hateful speech.

In the meantime, there are some stopgap fixes. Users can now share block lists, and Twitter has developed tools that make it easier to block large swaths of people at once.
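Mechanically, a shared block list is just a file of account IDs that can be merged into one’s own list. A minimal sketch, assuming plain text files with one ID per line (the file names are made up):

```python
# Merge several shared block lists (one account ID per line) into one set.
def load_block_list(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def merge_block_lists(paths):
    merged = set()
    for path in paths:
        merged |= load_block_list(path)
    return merged

# e.g. blocked = merge_block_lists(["my_blocks.csv", "shared_blocklist.csv"])
```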

Ben Smith and his BuzzFeed team have advocated for extending the First Amendment to social platforms. They suggested something called “shadowbanning,” in which banned users cannot tell they have been banned. Such a measure would make banning safer for Twitter users and less satisfying for those who are banned.
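The mechanics of a shadowban are simple to sketch: the banned account’s posts stay visible to their author but are silently dropped from everyone else’s view. A toy version, with made-up field names:

```python
# Toy shadowban filter: banned authors still see their own posts; everyone
# else's timeline silently drops them.
def visible_posts(posts, viewer_id, shadowbanned_ids):
    return [p for p in posts
            if p["author_id"] == viewer_id or p["author_id"] not in shadowbanned_ids]
```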

 

Twitter is at a watershed moment. And to be sure, part of the problem it faces is public perception. Users have the impression that Twitter is a public marketplace, and that a certain amount of hateful language is part and parcel of freedom of expression. But since Twitter is a private company, not a common carrier like an internet or phone provider, it is not governed by the First Amendment.

Claire Wardle, research director of the Tow Center for Digital Journalism, says Twitter needs to move toward more automation but also more education of users. Twitter should make it clear to users that the technology it uses to handle harassment isn’t perfect—that it can’t recognize context, can’t deal with consent or dignity (as in the case of Philando Castile, whose death was streamed live on Facebook by his girlfriend), and can’t recognize public interest. “The challenges are the technology,” says Wardle, “but also the subtleties of these questions around free speech.”

More clarity and transparency on its policies would help, as would more detail about the abuse reports it receives. But the problem is systemic, and it can’t be fixed by better communications alone. As Matias writes in a clever essay using the Victorian response to tainted food as a model for how we should think about online harassment:

Today when industrial-scale food adulteration occurs, such as the 85,000 tonnes of olives treated with copper sulphate that Italian police seized in February, we expect a systemic response. Creating systems of public safety that also worked for industry was a complex endeavour that unfolded over generations, through the collective efforts of scientists, advocates, industry groups and governments.

Wardle says Twitter and others haven’t gone far enough in thinking about moderation mostly because “for a long time, platforms focused on being communication technologies and they didn’t want to be involved in editorial.” Admitting that they have some role in editorial would mean “huge resourcing” in order to develop these tools.

But it has now been a year since Twitter announced a strategic shift in 2015, and users, predominantly women, are still leaving the platform. Many people, especially journalists and those in the tech industry, have to be on Twitter for work, so for them there is no escaping the abuse. Twitter’s problem is not just with its “story”—it’s with its reality.

