Confounders

Study subjects may lose weight on a certain diet, but was it because of the diet, or because of the support they got from doctors and others running the study? Or because they knew their habits and weight were being recorded? Or because they knew they could quit the diet when the study was over? So many factors affect every aspect of human health that it's nearly impossible to tease them apart and see clearly the effect of changing any one of them.

Publication bias

Research journals, like newsstand magazines, want exciting stories that will have an impact on readers. That means they prefer studies that deliver the most interesting and important findings, such as that a new treatment works, or that a certain type of diet helps most people lose weight. If multiple research teams test a treatment, and all but one find that it doesn't work, the journal might well be interested in publishing the one positive result, even though the most likely explanation for the oddball finding is that the researchers behind it made a mistake or perhaps fudged the data a bit. What's more, since scientists' careers depend on being published in prominent journals, and because the competition to be published is intense, scientists much prefer to come up with the exciting, important findings journals are looking for, even if those findings turn out to be wrong. Unfortunately, as Ioannidis and others have pointed out, the more exciting a finding, the more likely it is to be wrong. Typically, something is exciting precisely because it's unexpected, and it's unexpected typically because it's less likely to occur. Thus, exciting findings are often unlikely findings, and unlikely findings are often unlikely for the simple reason that they're wrong.
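The arithmetic behind that last point is worth making concrete. Here is a minimal sketch, in the spirit of Ioannidis's base-rate argument, of why fields that chase long-shot hypotheses end up with mostly false positive results. The specific numbers (the share of true hypotheses, the 0.05 false-positive rate, the 0.80 statistical power) and the function name are illustrative assumptions, not figures taken from Ioannidis's paper or any particular study.

```python
# Illustrative back-of-the-envelope arithmetic (assumed numbers, not data
# from any study): of the "positive" findings a field produces, what
# fraction reflect a real effect?

def fraction_of_positives_that_are_true(prior, alpha=0.05, power=0.80):
    """prior: share of tested hypotheses that are actually true.
    alpha: chance a false hypothesis still yields a positive result.
    power: chance a true hypothesis is detected."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# An unsurprising research area: half the tested hypotheses are true.
print(fraction_of_positives_that_are_true(0.50))  # ~0.94

# An "exciting" long-shot area: only 1 in 50 tested hypotheses is true.
print(fraction_of_positives_that_are_true(0.02))  # ~0.25
```

Under these assumed numbers, a field in which half the tested hypotheses are true sees about 94 percent of its positive results hold up, while a field chasing one-in-fifty long shots sees roughly three out of four of its positive results turn out to be false.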

Ioannidis and others have noted that the safeguards science supposedly provides against flawed findings, notably peer review and replication, fail to catch most problems with studies, from mismeasurement to outright fraud (which, confidential surveys have revealed, is far more common in research than most people would suppose).

None of this is to say that researchers aren’t operating as good scientists, or that journals don’t care about the truth. Rather, the point is that scientists are human beings who, like all of us, crave success, status, and funding, and who make mistakes; and that journals are businesses that need readers and impact to thrive.

It's one thing to be understanding of the challenges scientists and their journals face, and quite another to be ignorant of the problems those challenges cause, or to fail to acknowledge them. But too many health journalists tend to simply pass along what scientists hand them or, worse, what the scientists' PR departments hand them. Two separate 2012 studies of mass-media health articles, one published in PLoS Medicine and the other in The British Medical Journal, found that the content and quality of the articles roughly tracked the content and quality of the press releases describing the studies' findings.

Given that published medical findings are, by the field’s own reckoning, more often wrong than right, a serious problem with health journalism is immediately apparent: A reporter who accurately reports findings is probably transmitting wrong findings. And because the media tend to pick the most exciting findings from journals to pass on to the public, they are in essence picking the worst of the worst. Health journalism, then, is largely based on a principle of survival of the wrongest. (Of course, I quote studies throughout this article to support my own assertions, including studies on the wrongness of other studies. Should these studies be trusted? Good luck in sorting that out! My advice: Look at the preponderance of evidence, and apply common sense liberally.)

What is a science journalist’s responsibility to openly question findings from highly credentialed scientists and trusted journals? There can only be one answer: The responsibility is large, and it clearly has been neglected. It’s not nearly enough to include in news reports the few mild qualifications attached to any study (“the study wasn’t large,” “the effect was modest,” “some subjects withdrew from the study partway through it”). Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.

David H. Freedman is a contributing editor at The Atlantic, and a consulting editor at Johns Hopkins Medicine International and at the McGill University Desautels Faculty of Management.