Alt-right suspensions lay bare Twitter’s consistency problem


Twitter suspended a number of accounts associated with the alt-right, USA Today reported this morning. The move was bound to be divisive: While Twitter has banned and suspended users in the past (most prominently Milo Yiannopoulos, for incitement), USA Today points out the company has never suspended so many at once—at least seven in this case. Richard Spencer, one of the suspended users and a prominent figure in the alt-right, also had a verified account on Twitter. He claims, “I, and a number of other people who have just got banned, weren’t even trolling.”

If this is true, it would be a powerful political statement, indeed. As David Frum notes in The Atlantic, “These suspensions seem motivated entirely by viewpoint, not by behavior.” Frum goes on to argue that a kingpin strategy on Twitter’s part will only strengthen the alt-right’s audience. But we may never know Twitter’s reasoning for suspending the accounts. Twitter declined to comment on its moves, citing privacy and security reasons.

Spencer and Jared Taylor, founder of the far-right magazine American Renaissance, released a joint statement calling on journalists to show solidarity with them in favor of free expression.

But in the space of a few sentences, the message turned into a threat: “If they fail to do so, we must assume that they support the suppression of certain political views, and we will not grant them interviews or access to our events.” It is a canny rhetorical maneuver: It gives the alt-right a pretext to exclude any outlet that does not speak up on its behalf. Damned if you do, etc.

We cannot assess whether Twitter’s move to suspend the alt-righters was in fact politically motivated, but it does fit into a pattern of ongoing complaints against Twitter about the service’s lack of consistency in enforcing guidelines.


Twitter has been a battleground for free speech for months, under fire from users who feel harassed on the platform and unsupported by its slow, patchy response to their complaints. I’ve written previously about the difficulties and concerns around moderation on Twitter. At the beginning of this year, the company introduced a Trust and Safety Council, made up of nonprofits, to recommend how to police the site. Twitter has made incremental progress since—including changes announced yesterday intended to make the platform a more comfortable space, such as the ability to mute conversations so users do not continually receive notifications of hateful speech directed at them or others.

But the small changes to the Twitter interface belie a larger problem: The public is invested in Twitter as a public sphere. Twitter, like Facebook, is a private company and therefore not bound by the First Amendment, but it is commonly understood as giving voice, as Zuckerberg himself claimed of Facebook, to more people than ever before. Its users are vigilant about keeping Twitter both an open space and a safe space, but the service needs a transparent infrastructure to support that: a way to reliably report users, not just mute them, and grounds for faith in Twitter’s judgment about whom to suspend and when.

So in some sense, it does not matter whether Twitter had legitimate reasons for suspending all the accounts at once; if it is never transparent and specific about why it suspends or bans, we will not trust its decisions. Until the company enforces its guidelines consistently, Twitter will be under fire from the right, the left, and the Fourth Estate alike.


Nausicaa Renner is editor of the Tow Center for Digital Journalism's vertical at Columbia Journalism Review. She tweets at @nausjcaa.