Much of the internet has weighed in on Popular Science’s sudden decision to shut off its comments section earlier this week. The responses have been, for the most part, positive, as battle-weary writers leap to the defense of a publication looking to preserve the integrity of its work from biased trolls. (Except for Mathew Ingram, who devoted a PaidContent post and an extended Twitter beat-down to the new policy.)

But PopSci is justifying its new policy by arguing that silencing commenters preserves the integrity of science, as evidenced by a study published earlier this year in the Journal of Computer-Mediated Communication, part of a string of commenting-centric research. In the study, a team of researchers had a sample of just over 1,100 subjects read a blog post on nanotechnology, followed by a set of comments that were alternately laudatory or incendiary. The readers’ opinions of the merits of the technology, PopSci reports, were strongly affected by the negative comments, a result PopSci editor Suzanne LaBarre deems dangerous: “If you carry out those results to their logical end—commenters shape public opinion; public opinion shapes public policy; public policy shapes how and whether and what research gets funded—you start to see why we feel compelled to hit the ‘off’ switch.”

But the study doesn’t demonstrate that comments have anywhere near the scope of influence that LaBarre attributes to them. The problem with her extrapolation of the study’s logic was summed up by Nature editor Noah Gray:

In a blog post, Marie-Claire Shanahan, the chairwoman of science education at the University of Calgary, agreed, commenting on the “large gap” between how PopSci characterizes the study’s results and what those results actually say. Eighty-three percent of the influence on readers’ opinions of the piece came from factors that had nothing to do with the comments, explains Shanahan, and the civility of the comments had “NO SIGNIFICANT DIRECT EFFECT on readers’ perceptions of nanotechnology,” a fact conveniently left out of LaBarre’s analysis of the study.

LaBarre also failed to communicate the study’s main finding: that the comments held the most sway over readers who were already strongly for or against the uses of nanotechnology, not over undecided readers. “So among those who already held strong views, the uncivil comments tended to polarize them a bit further,” writes Shanahan.

There’s also research suggesting that lively commenting can positively add to the debate, says Shanahan, citing her own research surveying comments left on health stories in The Globe and Mail. After Shanahan and a grad student completed the study, they found—surprise—that the comments section did exactly what it was supposed to. “Extensive contributions were made by parents, patients and people with medical expertise. Questions were asked and clear thoughtful answers were often given,” she wrote. (PopSci doesn’t include Shanahan’s research in its policy defense, and as any competent science writer understands, two studies do not a consensus make.)

We’ve written often about the dangers of adding false balance to stories by lending uninformed sources equal weight when the research community has, more or less, reached consensus. Though the comments section doesn’t carry the same weight as what a writer includes in a piece, there’s reason to suspect that comments may exert even more influence on readers when the topic at hand is a more familiar one.

Alexis Sobel Fitts is an assistant editor at CJR. Follow her on Twitter at @fittsofalexis.