The Observatory

The value of skepticism

Why science reporters should question research
October 9, 2012

Skepticism has earned a bad name in recent years thanks to those who doubt the consensus that human industry is a significant driver of global climate change. But it’s important to remember that healthy skepticism is a key tenet of the scientific profession, and central to the quality control of research.

Two papers published in the PLOS family of journals in the last month offer reminders of why it is important that science journalists also maintain a healthy, rational sense of skepticism. Such wariness, they suggest, can protect against biases in scientific publishing and in the media.

For the first paper, published in PLOS Medicine, researchers looked at the scientific articles, press releases, and news items associated with 41 clinical trials—the so-called gold standard for evaluating new treatments. They found that instances of “spin” in the press releases and news items corresponded strongly to the presence of spin in the abstracts, or summations, of the scientific articles.

The paper’s authors defined spin as “specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment.” It can result from a variety of scientific errors, from “inadequate interpretation of results” to “inappropriate extrapolation,” but abstracts containing spin almost uniformly bequeathed it to the press releases and news items that followed, while spin-free abstracts begat mostly untainted press releases and news items.

As an editor’s note attached to the paper indicated, spin can hinder scientists from developing effective therapies, “and when reproduced in the mass media, it can give patients unrealistic expectations about new treatments.” So, it is important to understand where spin occurs and where it comes from. The authors of the PLOS paper said their work raised questions about the quality of the peer-review process for vetting studies, and highlighted the responsibility that journal reviewers and editors have “to ensure that the conclusions reported are an appropriate reflection of the trial findings and do not overinterpret or misinterpret the results.”

The authors’ remarks are by no means exculpatory for journalists, however. On the contrary, they underscore the perils of reporting that takes conclusions at face value, doesn’t dig deeper than the abstract, and doesn’t seek independent, external validation of research findings—reporting that isn’t appropriately skeptical.

A second paper, published in PLOS ONE, also attested to the need for more caution. The researchers behind that study identified ten of the most widely covered scientific articles about attention deficit hyperactivity disorder (ADHD) published during the 1990s, and all of the relevant follow-up studies until 2011. Then they looked at whether the findings reported in each “top 10” publication were consistent with the findings of the subsequent studies, and compared the amount of media attention the follow-up studies received to the amount the “top 10” received.

The results showed that seven of the “top 10” publications were initial studies of treatments and “the conclusions in six of them were either refuted or strongly attenuated” by the subsequent studies. “The seventh was not confirmed or refuted, but its main conclusion appears unlikely,” according to the paper’s authors, and “among the three ‘top 10’ that were not initial studies, two were confirmed subsequently and the third was attenuated.”

This isn’t unusual, the paper noted. Previous research has shown “that initial observations showing a positive effect are much more often published than those reporting no effect. As a consequence, initial observations are often refuted or attenuated by subsequent studies.” But those attenuating papers, which usually appear in less prestigious journals, don’t get nearly as much coverage as the earlier ones.

The “top 10” studies described in the PLOS ONE paper resulted in 223 news articles, while the 67 follow-up studies produced only 57 articles. “Indeed,” the authors wrote, “the subsequent scientific studies related to five ‘top 10’ publications received no media coverage at all.” Moreover, when a follow-up publication did draw media attention, the reporter usually failed to mention that its findings refuted those of an earlier publication. This is a problem, the authors noted, because:

Biomedical findings slowly mature from initial uncertain observations to facts validated by subsequent independent studies. Therefore, high quality media reporting of biomedical issues should consider a body of scientific studies over time, rather than merely initial publications.

That’s good advice. Following it would help reporters “reflect the evolution of scientific knowledge” described in the second PLOS paper, and avoid echoing the spin described in the first. As The Economist argued in “Journalistic deficit disorder,” an article about the PLOS ONE paper:

A sensible prescription is hard. The matter goes beyond simply not believing what you read in the newspapers. Rather, it is a question of remembering that if you do not read subsequent confirmation, then the original conclusion may have fallen by the wayside.

As both PLOS papers made clear, there are steps journal reviewers and editors can take to guard against spin and to indicate when and where new findings refute or attenuate earlier conclusions. Both papers also have limitations of their own, owing mostly to the fairly narrow scope of each. But there’s no doubt that “single study syndrome,” as The New York Times’s Andrew Revkin calls it, is one of the most vexing disorders in science journalism. While it’s hard to talk about a cure, skeptical coverage that gives greater context to the latest research can do a lot to alleviate the symptoms.

Curtis Brainard writes on science and environment reporting. Follow him on Twitter @cbrainard.