Duck and cover

After Ricky Gervais and now the bikini and sensational headlines, may I please request a coverless subscription? Seriously, unless your pop culture cover experiment is over, please send the next issue without the hideous cover.

Helen Gallagher
Glenview, IL

Thanks, David H. Freedman. I enjoyed your article (“Survival of the Wrongest,” CJR, January/February). You write, “Look at the preponderance of evidence, and apply common sense liberally.” I would add that science/health writers should be research-literate, able to understand what is and is not a well-designed study.

The issue of blending personal experience with science and reporting on the combination, which you raised at the beginning of the article, is one I think is worthy of further conversation. Reporters, like bloggers, are sometimes trying to make sense of their own experience in light of “what science says.” Being very careful about examining and explaining one’s dual motives — to “tell what’s true for you” and to “report what’s true” — seems essential to writing fairly about health and science.

Jess Williams
Pittsburgh, PA

Measurement errors and confounders usually cannot be avoided in population-based research, but they can be minimized. The scientists and students I know apply a lot of effort toward minimizing errors and implementing controls to track confounders in their studies. Freedman seems to overlook those efforts and creates the impression that medical scientists conduct research however they want. Like any human endeavor, science is limited by the methods currently available. The key is to stay critical when interpreting research findings and to keep the limitations of the methodologies in mind. Journalists probably do not have time to go through this critical process, and discussion of the limitations of health studies may not interest the general public, either. Maybe increasing the public’s awareness of science and scientific methodologies can prevent readers from being misled.

Qing Peng
Ann Arbor, MI

Reading Freedman’s table-setting article and the four pieces that followed it was a positively Orwellian experience, beginning with his statement that fully two-thirds of published scientific research findings are wrong. So the intrepid CJR team encounters a man at the border of a country who warns, “All the people in my country are wrong two-thirds of the time.” Then it roams around that country with notebook and camera without ever again addressing that warning and its obvious implications.

The lively science journalist for HuffPo is profiled but we never learn how—or whether—she successfully navigates the minefield of predominantly inaccurate scientific research. Media coverage of tainted food is condemned despite an acknowledgement of the lack of reliable scientific findings linking pathogens and food sources—two-thirds of which would, apparently, be wrong anyway. A photographer’s project on hydrofracking is presented without a mention of the disputed research findings in that battlefield of science. The final step into the Twilight Zone came in Freedman’s bio box, in which he admits that “he has been guilty of all the failures of health journalism he describes in this article.” Really? REALLY? And was the article I just read a scene of his crimes?
On that front, he helpfully advises: “Of course, I quote studies throughout this article to support my own assertions, including studies on the wrongness of other studies. Should these studies be trusted? Good luck in sorting that out! My advice: Look at the preponderance of evidence, and apply common sense liberally.” And where would we, his readers, find the “evidence” by which we can sort out whether his “evidence” should be trusted? How will we know which one-third of it is correct? If common sense is so helpful, why bother with unreliable scientific research at all? But didn’t people once argue that common sense proved the world was flat because we don’t all fall off the planet? How is “common sense” different from the “conventional wisdom” we all know we should question because it is, demonstrably, so frequently wrong?

After reading all five articles, I was swept away by the frustration and futility of reading, much less writing, about scientific research at all. If that was the intended result, bravo! Mission accomplished.

Diana B. Henriques
Hoboken, NJ

The Editors