For journalists, some scientific flops are just too good to pass up. Think of the Large Hadron Collider’s failure to launch last year. Efforts to get the inglorious atom smasher up and running are still drawing headlines, and if the heralded machine fails to locate the Higgs boson or some other important particle, it will surely be news. But what of science’s less prestigious, day-to-day setbacks?
In May, Nature carried an interesting article about recent calls in the journals Restoration Ecology and Conservation Biology for scientists to publish negative results from experiments. “Failed experiments are hard, if not impossible, to publish and hence do not contribute to any of the normal methods of earning academic brownie points,” Richard Hobbs, the editor-in-chief of the former, wrote in an editorial last January. “And yet, failure is often a necessary basis for subsequent success.”
For that reason, Hobbs announced, he would begin accepting submissions for a “new category of paper” in Restoration Ecology called “Set-backs and Surprises.” The first entries are now in the final stages of editing and will be published later this year, Hobbs said in a recent interview. He isn’t sure if his own experiment will work, and the journal’s editorial board is split over whether publishing negative results is a good idea, but it seems like a promising concept that could improve the scientific process. Moreover, it could improve science journalism. Reading about unsuccessful experiments could give reporters a better appreciation of the scientific method and help them identify and explore the frontiers of scientific knowledge.
Restoration ecology and conservation biology are good fields in which to test this theory because their objectives—improving the pH balance of a salt marsh or the fecundity of endangered owls, for example—are fairly clear cut compared to those of other scientific endeavors. Plus, “there is a vast difference between being successful and being effective, particularly in a mission-driven discipline such as conservation biology,” reasoned Andrew Knight, a senior lecturer at Stellenbosch University in South Africa, in a recent letter to the eponymous journal. “In reality, a significant number of the initiatives attempting to translate conservation science into activities that actually ensure the persistence of species, habitats, and the ecological processes that sustain them are partial, or complete, failures.”
Understanding where researchers are stumbling in the pursuit of such goals would be very useful to journalists. From the public-service perspective, it would help them produce stories highlighting problems that might benefit from a more focused application of resources and suggesting how to avoid similar mistakes in the future. “It might [also] demystify health and science a bit,” Knight said in a recent interview. In other words, by covering the puzzles with which the science community is struggling, journalists would have more opportunities to explain the work that scientists do—and how they do it—in addition to the results they produce.
Covering scientific misfires isn’t new, of course. “To some extent I would say reporters already do cover failures a lot,” Daniel Cressey, who wrote the Nature article about the calls for publishing negative results, said in a recent interview. “But they have to be high profile, spectacular or otherwise interesting failures.” Short-circuiting in the Large Hadron Collider is a prime example. When science is covered in this fashion, however, readers are often left with the impression that research has only one of two outcomes: unparalleled progress or dead-end money drain. Anyone who understands science knows that isn’t right.
But reporting failures won’t be easy, and journalists will need to take care in presenting such stories. In fact, a number of scientists are reluctant even to use the word failure. Joy Zedler, an ecologist who has surveyed the use of words like “success” and “failure” in ecological restoration studies, has argued for striking such terms from scientific vocabulary, believing that they misrepresent the scientific process. (In his editorial for Restoration Ecology, Hobbs also cautioned that success and failure are “relative terms.”) And Knight raises another concern: if journalists start reporting failures, it may strain the press’s relationship with the scientific community. There’s a chance that researchers might be less inclined to speak to reporters, especially if news articles that include failures are poorly received by news consumers, or even by journalists themselves.
Case in point: in June, Nicholas Wade wrote a post at The New York Times’s Tierney Lab blog about a recent development in schizophrenia research. The news—that schizophrenia is actually caused by many random genes rather than a specific handful—was a large departure from the previous school of thought. Wade’s reaction to the research, however, was antithetical to the optimistic press releases that were sent to media outlets: “It seems to me the reports represent more of a historic defeat, a Pearl Harbor of schizophrenia research,” he wrote.