Last month, after Hurricane Sandy struck, I published a story about climate science. Divisive issues swirling around global warming tend to provoke readers and, in this case, spawned 47 comments.
“One storm and all the global climate gloom and doomers come crawling out of the woodwork,” Josh Irons groaned. “There’s been no change in 15 years—proven.” In response, Abobo offered: “*yawn*.”
One inane riff, apparently, was irresistible: “Climate change schmimate change. A colossal hoax.” I winced at its power to arrest further discussion, but only until the ensuing responses grew into the longest and most substantive thread on the page, parsing the distinction between proof and evidence, inductive and deductive reasoning.
For all the fuss over comment sections being either plagued by cliques—cited in Gawker’s temporary decision earlier this year to disable comments—or soured by vulgarity, the discussion following my piece was balanced and civil, even harmonious.
The recipe at The Atlantic and across major online news platforms has been simple: moderate and rank posts, vet commenters, and design the forum with threading and sharing features that streamline the user experience. By tucking comment sections under the editorial tent, trashy discussion can be redeemed as substantive conversation.
“Readers are part of the conversation, and they’re part of the content of the site,” said Bob Cohn, digital editor at The Atlantic. Sometimes, he added, “the comment thread is at least as illuminating as the underlying piece.”
Thoughtful readers deserve a decorous, accessible outlet to voice opinion, to debate, and to further report stories from their vantage point, which can even spur fresh coverage.
But readers aren’t journalists. Still, according to new research, the distinction may be blurring.
A study published last month in the Journal of Computer-Mediated Communication shows that the way certain readers perceive news stories can be distorted by the comments below them. Especially when readers care deeply about the issue being covered, disagreeable commentary may stoke concern over media bias.
Eun-Ju Lee, the study’s author and a professor in the department of communication at Seoul National University, in South Korea, recruited 240 participants to read an online news article about corporal punishment in elementary schools. They answered a pre-test questionnaire that measured their emotional stake in the issue, their stance, and their communications with others who either agreed or disagreed. Then, at their leisure, they read the story, a neutral portrayal of Seoul’s education superintendent pushing to ban corporal punishment, balanced by equal arguments on both sides.
By random assignment, half the readers encountered a comment section largely in support of the ban, with posts like this one: “You keep talking about educational authority, but educational authority does not come from physical punishment.” The other half saw mostly opposition: “The superintendent has no idea what he’s doing. Gotta say, he’s lost touch with reality, and life is gonna be a living hell for teachers.”
As expected, a control group reading the article without comments found it impartial. So too did readers who, in the pre-test questionnaire, identified as being relatively unmoved by news concerning corporal punishment.
For those with a personal stake, however, comments clashing with their own opinions raised suspicion of media bias, a phenomenon known as the hostile media effect. The commentary colored their reading.
“User-generated commenting is a key characteristic of Web 2.0,” Lee said of the findings. “When coupled with the news article, it can bring changes to the readers’ interpretations of news and the reactions they exhibit.”
We know that when readers are impassioned by an issue, they have a nose for media influence—real or perceived. If their own views and those of commenters appear in discord, these readers become defensive, figuring that the coverage swayed the others and, therefore, must be biased.
But Lee also suspects that keeping news and comments in proximity, now standard practice, inclines readers to jumble the two, and to misremember authorship. “People might have difficulty distinguishing what they read from which source,” she said, noting another recent study in which participants exposed to dismal commentary judged the news outlet to be correspondingly poor.
A similar “assimilation bias” was uncovered in a pivotal 2008 study led by Joseph Walther, a professor at Michigan State who has written prolifically on cyber psychology. The findings showed that whether Facebook users were voted “hot or not” depended, in part, on their friends—how attractive they were and even what they posted. How participants judged the primary content (a Facebook user) was shaped by the context (Facebook friends and their posts).
“As Web sites become more interactive and participatory,” Walther and his co-authors concluded, “the question of textual authority becomes less clear.”
And some news sites are even going beyond integrating the prose of readers and writers.