Twitter now boasts 140 million active users, many of whom have used the social messaging service in the last two weeks to share instant results from the Olympics, glimpse behind-the-scenes moments with the athletes, and voice frustrations at mainstream media coverage, on-site organization, and the ticketing process. We’re apparently in the middle of the first “Twitter Olympics,” which will surely precede the first “Twitter election” this fall. According to Twitter’s company statistics, the tweet volume on election day 2008 represents only about six minutes of tweets today.
But the ever-increasing reach of the social messaging service has brought with it a spike in collisions with the law on both sides of the Atlantic, raising questions about who is responsible for policing Twitter. There are two candidates for the job: Twitter itself and law enforcement.
The problems that come from Twitter policing its users were highlighted just last week, when Twitter alerted NBC to a British journalist’s critical tweets, and advised the network on how to file a request to suspend the user. Guy Adams, a correspondent for UK newspaper The Independent, was suspended after he tweeted the corporate email address of Gary Zenkel, the president of NBC Olympics, and advised his followers to contact Zenkel with their views. Twitter later apologized for the conflict of interest.
Twitter is normally protective of its users’ freedom of speech. Yesterday, however, New York Police Department investigators forced the company to reveal the identity of an anonymous user who had posted tweets threatening to stage a shooting, similar to the one in Aurora, CO, at the opening night of Mike Tyson’s Broadway show. Twitter had denied the request three days earlier, stating that the threat was not “present, specific and immediate” enough to warrant disclosure.
In Britain, which has tougher privacy laws than the US, police have made multiple Twitter-related arrests recently under the Communications Act of 2003 (passed before Twitter existed). The act prohibits sending “by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character.”
Two weeks before the Broadway shooting threat, one Twitter user in the UK, Paul Chambers, had his conviction for sending a menacing message overturned, two years after he sent a frustrated tweet stating he would blow a UK airport “sky high” because it was closed on the day he was supposed to pick up his girlfriend. Chambers appealed his conviction under the 2003 Communications Act in what became known as the “Twitter joke case.” The overturned verdict was seen as a victory for freedom of speech.
In March, another UK man was imprisoned for 56 days after posting racist tweets about the soccer player Fabrice Muamba. Liam Stacey, who admitted racist intent, had his sentence upheld on appeal. The judge at the appeal hearing, Justice Wyn Williams, remarked that the case was unprecedented. “There are no applicable sentencing guidelines,” he said.
After yet another case involving the arrest of a 17-year-old boy on July 31 on suspicion of sending malicious tweets about the British Olympic diver Tom Daley, a spokesman from the Association of Chief Police Officers rejected the idea that new laws are needed to specifically address the problem of abusive tweets.
Stuart Hyde, who speaks on e-crime for the Association of Chief Police Officers, told BBC Radio 4’s Today Programme that police must take a “common sense” approach to pursuing Twitter-based complaints. When asked if more legislation was necessary, Hyde cited the Malicious Communications Act of 1988 and the Communications Act of 2003. “It works reasonably well most of the time,” he said.
Hyde called for Twitter to put safeguards in place against racist and malicious behavior.
“I think there is a case that if you are going to run it as a commercial organization, then you have got to allow people to use it safely and securely, and have the processes in place where people are acting in a strange way—and the word troll comes to mind—then you get them off as quickly as possible,” he said.
Rachel Bremer, European Communications Manager for Twitter, said that Twitter does not mediate content, including potentially offensive content, as a matter of policy.
“This means that users are allowed to post potentially inflammatory content, provided that they do not violate the Twitter terms of service and Twitter rules,” Bremer said.
Under the guidelines, Twitter bats responsibility back to the police, urging people feeling threatened by messages to contact their local law enforcement. “Websites do not have the ability to investigate and assess a threat, bring charges or prosecute individuals,” the policy states.
Bremer declined to comment on whether Twitter was considering changing its policies to shoulder more of the rule-making in light of recent UK court cases.