The extraordinary case of academic fraudster Diederik Stapel followed the typical narrative of a scientific scandal.
A professor of social psychology at Tilburg University, he became a star researcher in his native Netherlands and abroad after years of eye-catching experiments on human behavior, such as a 2011 study published in Science that found a rubbish-strewn environment brought out racist behaviors in people.
But in October 2011, after Stapel’s colleagues and graduate students told university authorities that they suspected he was making up results, an initial investigation revealed that he had committed substantial research fraud since at least 2004, in what currently stands at more than 50 of his papers. The university suspended Stapel and, in keeping with the standard scandal narrative, a final report on the affair, released last fall, wrote him off as a lone careerist, although it did note that some of his co-authors should have been more critical. Unfortunately, this tendency to treat misconduct as isolated behavior is all too common.
The historian of science Marcel LaFollette has noted that initially, when faced with scientific impropriety, scientists around the world tend to present a variation on this enduring storyline. As she explained in a 2000 article for Experimental Biology and Medicine, “They have characterized the offender as aberrant, argued that the episode is isolated, or attempted to explain it as caused by stress, bad judgment, or moral corruption (or all three).”
However, as Yudhijit Bhattacharjee noted in an unflinching 6,400-word examination of the Stapel affair in The New York Times Magazine in April, Stapel’s exceptional case needs to be considered against a background where “at the very least … the number of bad actors in science isn’t as insignificant as many would like to believe.”
Science reporters now have a significant role in investigating scientific misconduct. Until the 1970s, LaFollette noted, cases of scientific misconduct were resolved quietly within a laboratory or an institution. The first case to get significant public attention, she wrote, was the 1974 case of William T. Summerlin, who fraudulently claimed to have transplanted skin between mice. But when general and science reporters focused increasingly on cases of fraud, political attention followed and scientific misconduct became a public policy issue.
Today, science journalists can continue to perform this important function, but should go beyond the high-profile scandals to reveal an under-reported aspect of contemporary research—the low-level misconduct that corrodes the scientific enterprise.
A 2009 PLoS ONE study by Dr. Daniele Fanelli, a researcher at the University of Edinburgh who studies bias and misconduct in science, found that two percent of scientists, on average, admitted to at least one incident of serious misconduct, such as fabrication, falsification, or modification of data—all of which distort scientific knowledge. When asked about their colleagues’ behavior, 14 percent said they had observed serious scientific misconduct.
In this climate, there are several ways that science journalists can enhance their reporting of misconduct and, consequently, public understanding of it. The first step is to make this problematic aspect of the scientific culture explicit, as Carl Zimmer did in his detailed examination of how pressures to achieve and maintain success have fueled an increasing rate of scientific retractions, which ran last year in The New York Times.
Another way to do this is for reporters to cover retractions, the public admissions by journals that studies they printed should never have been published, most often because of deliberate deceit or honest mistakes. This is routine practice at Reuters Health, for example, according to Ivan Oransky, its executive editor. Moreover, he said that if a reporter covered a paper that was later retracted, they would update their earlier report. “When we do it on a five- or six-year-old study, we can look a little silly,” he said. “But we’ll take a little looking silly, if it corrects the record.”
Oransky is also co-founder, with Adam Marcus, of Retraction Watch, which since 2010 has chronicled the steady stream of retractions from scientific journals. Oransky hopes that the site not only documents these admissions of error or misconduct, but also has an agenda-setting role in a new science journalism ecosystem, serving as a source for stories upon which other journalists follow up.
At another level, journalists can report on the sociological aspects of science that contribute to misconduct. “A simple thing to remember is that scientists are human,” said Oransky. But he said the way some science stories are written drains this human element out of the process. Missing are descriptions of what it “takes to publish a paper, the pressure to cut corners, the competition.” Oransky noted that by “focusing on the outlier, such as Stapel, one forgets that there are forces that can warp or skew the work of even the best-intentioned of scientists.”
To help understand these social and cultural forces, reporters can read the work of prominent researchers who have highlighted problematic features of science. Essential readings include the provocative 2005 PLoS Medicine essay by the Stanford medical professor John Ioannidis, “Why Most Published Research Findings Are False,” which examined the consequences of scientific bias. Another crucial text is the 2011 editorial in Infection and Immunity by University of Washington medical professor Ferric Fang that called for methodological and cultural reforms for a scientific community that is showing “signs of dysfunction,” including a winner-takes-all culture, where researchers race to publish in the most prominent journals and compete for grant funding.
At the same time, journalists can offer a sense of proportion about the scale of misconduct. Fanelli noted that the rising rate of retractions is usually the only evidence presented to demonstrate increased levels of scientific misconduct. But he said: “There was no culture of retracting papers until recently. Too often, the high rate of retractions is taken to point to a problem. That is a mistake.” He said that while most retractions currently occur because a journal has been alerted to or discovered some form of misconduct, retractions can be viewed more positively, as an example of scientists voluntarily cleaning up the scientific record. Science, said Fanelli, “would also benefit from having more researchers retracting their mistakes spontaneously, and more journals ensuring efficient retractions.”
And others caution against generalizing too much from particular high-profile cases. “It is important to not judge the scientific discipline where these cases occur because of the misguided actions of a few individuals,” Jeff Spies, co-director of the Center for Open Science, which aims in part to make science more transparent, wrote in an email. “If the disciplines [are] to be judged, it should be by how the community responds. I would like to see a focus on the very positive side of these cases, and that is how the scientific community comes together to address the underlying issues rather than hiding or ignoring them.” (For a well-reasoned explanation of how psychology is correcting itself, see the New Yorker piece by Gary Marcus headlined “The Crisis in Social Psychology That Isn’t.”)
Particular scientific scandals make compelling stories. But reporters can paint a bigger picture of the scientific enterprise, revealing, rather than obscuring, the social environment in which scientists work. It is a culture in which, wrote Fang, to “be successful, today’s scientists must often be self-promoting entrepreneurs whose work is driven not only by curiosity but by personal ambition, political concerns, and quests for funding.” To fully comprehend the Stapel case, readers need to understand that culture.