I called Dr. Julie Parsonnet, an epidemiologist at Stanford University whom I had recently met at a conference in California, and asked for her take on Taubes’ article. She agreed that despite a “veneer of negativity,” his work mostly treats the distinction between epidemiological and clinical trial research fairly. Parsonnet emphasized, though, that the latter can often be as problematic and unreliable as the former. “All research has to be looked at in the context of everything else that’s known about the subject,” she told me. Taubes, to his credit, predicts in his piece that epidemiologists like Parsonnet “will argue that they are never relying on any single study,” and that “this in turn leads to the argument that the fault is with the press, not the epidemiology.”


This is an astute and incredibly important observation. Epidemiological studies may be inferior to clinical trials at producing conclusive answers to some medical quandaries, but the real problem is that most people, including many journalists who write about this stuff, do not know the key differences between the two types of research.


Matthew Nisbet, an assistant professor of communication at American University, who runs a blog about how journalists and others frame science, criticizes Taubes for not making enough of this angle and leaving the impression that “science can’t be trusted.” Monday, on his blog, Nisbet wrote that readers need epidemiology articles that are more like “a detective story hung around just how amazingly complex it is to figure out the linkages between diet, drug therapies, and human health.” Indeed, there are a few excellent examples of such work, including one about a potential cancer cluster by Chris Bowman at The Sacramento Bee.


But while I agree that Taubes’ article was couched in an unnecessarily negative tone, I also believe his approach was valid. As Nisbet himself asks, “Is it really ‘bad science’ or is it bad communication?” Regardless of whether a reporter seeks to discuss epidemiology generally, like Taubes and Von Bubnoff, or specifically, like Bowman, there is the original dilemma that most readers do not know the basic differences between observational studies and clinical trials. With this in mind, the generalized approach that Taubes took seems all the more useful.


“The fundamental problem is not necessarily reconcilable here,” Parsonnet told me, “because people have an innate desire to protect their health and the press has an innate desire to provide interesting information to sell newspapers.” As long as that is the case, there will continue to be three basic types of epidemiological journalism: that which typically finds its way into papers and magazines, heralding the latest research; that which, like Bowman’s piece in the Bee, relies on its own investigations; and that which, like Taubes’ and Von Bubnoff’s work, takes the wide-angle, explanatory approach.


With all three, however, the challenge is the same: journalists must explain that epidemiology is probabilistic, rather than absolute; that it is about chance, not certainty. With every story, reporters must precisely describe the likely consequence of any action - doubling or halving the risk of heart disease, for example. They must describe any internal factors that affect confidence in the study - the bigger the population and the longer the period of time examined, the better. And they must describe any external factors that affect confidence in the study - that is to say, the number and strength of supporting or competing hypotheses.
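To make the "doubling or halving" point concrete, here is a minimal sketch of the arithmetic a careful story should spell out: how a relative risk translates into absolute terms. The baseline figure and the risk ratio below are hypothetical, chosen only for illustration, not drawn from any study mentioned here.

```python
# Illustrative only: converting a reported relative risk into absolute terms.
# The baseline rate and relative risk are hypothetical placeholder values.

def absolute_change(baseline_rate: float, relative_risk: float) -> float:
    """Return the change in absolute risk implied by a relative risk."""
    return baseline_rate * relative_risk - baseline_rate

baseline = 0.001   # assume 1 in 1,000 people develop the disease
rr = 2.0           # a reported "doubling" of risk

print(f"Absolute risk rises from {baseline:.3%} to {baseline * rr:.3%}")
print(f"That is {absolute_change(baseline, rr):.3%} more, or roughly "
      f"1 extra case per {int(1 / absolute_change(baseline, rr)):,} people")
```

The point of the exercise is the one readers most often miss: a doubling of a tiny baseline risk can still be a very small absolute change, and a story that reports only the ratio leaves out the half of the calculation that tells readers whether to worry.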


Curtis Brainard is the editor of The Observatory, CJR's online critique of science and environment reporting. Follow him on Twitter @cbrainard.