Currents

Louts Out

How to police message boards and comments
October 30, 2008

In January, someone who goes by the name “crosswave” logged onto the reader forums at nydailynews.com and posted a comment about sports columnist Filip Bondy. “Eff you Filip Bondy,” the post read. “You should be banished back to covering ghetto futbol in Newark.” In March, another sports columnist, Terence Moore of The Atlanta Journal-Constitution, was labeled “racist” by a handful of readers on ajc.com. “Mr. Moore can actually make the Klan look reasonably intelligent by comparison,” wrote one user, who identified himself as “Salad Tosser.”

Personal attacks and off-topic rants are nothing new to newspaper Web sites. Back in 2005, the Ventura County Star temporarily disabled comments on its site after the tone turned vicious; in 2006, The Washington Post suspended comments on one of its blogs because they had become obscene. But as newspapers try to boost traffic and revenue on their Web sites by granting readers more ways to weigh in, abusive comments have flourished. Editors have an arsenal of technological tools at their disposal, such as mandatory registration, word filters, “report abuse” buttons, and even the sly “Bozo filter,” which gives blacklisted users the false impression that their comments are being posted, when in fact nobody else can read them. But software can only do so much. “The minute you put a filter in place, your trolls find a way past it,” says Yvonne Beasley, the home-page editor of The Des Moines Register’s Web site.
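The “Bozo filter” is simple enough to sketch in a few lines of code. The snippet below is only an illustration of the general technique, not any newspaper’s actual system; the names BANNED_WORDS, shadow_banned_users, submit_comment, and visible_comments are hypothetical.

```python
# Illustrative sketch only -- not any newspaper's actual moderation code.
# Combines a basic word filter with a "Bozo filter" (shadowban): a
# blacklisted user sees their own comment as posted, but nobody else does.

BANNED_WORDS = {"spamword", "slur"}      # hypothetical blocklist
shadow_banned_users = {"crosswave"}      # hypothetical blacklist

comments = []  # each entry: {"author": ..., "text": ..., "shadowed": bool}

def submit_comment(author, text):
    """Store a comment, flagging it if the author is shadow-banned
    or the text trips the word filter."""
    tripped_filter = any(word in text.lower() for word in BANNED_WORDS)
    shadowed = author in shadow_banned_users or tripped_filter
    comments.append({"author": author, "text": text, "shadowed": shadowed})

def visible_comments(viewer):
    """Return the comments a given viewer can see. A shadowed comment
    is visible only to its own author -- that is what makes the ban 'sly'."""
    return [
        f'{c["author"]}: {c["text"]}'
        for c in comments
        if not c["shadowed"] or c["author"] == viewer
    ]

if __name__ == "__main__":
    submit_comment("crosswave", "Eff you Filip Bondy")
    submit_comment("reader42", "Good column today.")
    print(visible_comments("crosswave"))   # the banned user sees both comments
    print(visible_comments("reader42"))    # everyone else sees only the clean one
```

The point of the design, as the article notes, is deception: because the banned user’s own view looks normal, there is nothing obvious to work around, which is exactly what ordinary word filters lack.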

How to balance openness and interactivity with the desire for civil debate is more an ethical question than a legal one, since the federal Communications Decency Act grants Web sites immunity from defamation suits arising from user-generated content. But there is concern that derogatory, obscene, threatening, or libelous user comments could damage a newspaper’s brand or alienate readers, to say nothing of the anger that reporters and columnists feel when the comments attached to their work turn abusive.

Strategies for handling offensive online comments vary. Most sites have developed explicit policies, but the only way to enforce them effectively is to review all comments before they are posted. That’s what The New York Times does, but it is a labor-intensive approach that isn’t feasible for most newspapers. A more common strategy is to wait for a complaint before considering whether a comment should be removed. Other sites actively patrol for nasty stuff, engaging in a perpetual game of whack-a-mole.

One novel strategy that is gaining ground takes a page from the playbook of social networking sites like Facebook. The idea is to create a semi-autonomous community where users mostly police themselves. “Your real challenge is that people coming to that site don’t necessarily have anything in common,” says Rich Gordon, an associate professor at Northwestern’s Medill School of Journalism and the author of The Online Community Cookbook. A handful of newspapers, including The Washington Post and USA Today, have managed to build that common ground by embedding elements of social networking in their Web sites. At washingtonpost.com, readers can create profiles, send private messages to other readers, add others as friends, and track their posts over time. At usatoday.com, Web editors post a rotating list of the best comments each day on the home page, as a way of recognizing reader contributions.

Ultimately, though, the real question may be whether moderating reader comments is even necessary. Many readers are accustomed to the provocative nature of online discussion. The editors at seattletimes.com tested this idea when they were weighing whether to open the site to comments. “We found the community really doesn’t care,” says Robert Hernandez, senior producer for local news. “If they see a bad post, they skip right over it.”

Adam Rose is a former CJR intern and a freelance writer based in New York City.