Meanwhile, there’s a wide range of convincing-sounding yet wildly conflicting weight-loss claims made by prominent science journalists. People who might otherwise lose weight on the sort of sensible lifestyle-modification program recommended by most experts end up falling for the faddish, ineffective approaches touted in these articles, or are discouraged from trying at all. For example, innumerable articles (including Parker-Pope’s Times piece) have emphasized the notion that obesity is largely genetically determined. But study after study has shown that obesity tracks environment, not personal genome: people who emigrate from countries with traditionally low obesity rates, such as China, tend to converge on the obesity rates of their adopted countries. What’s more, global obesity rates are rising rapidly year by year, including in China, whereas the human genome barely changes over thousands of years. And studies clearly show that “obesity genes” are essentially neutralized by healthy behaviors such as exercise.
Telling people who are trying to muster the focus and motivation to stick to a healthy-eating-and-exercise program that their obesity is largely genetically determined is not encouraging; it suggests, sometimes explicitly, that the obese are doomed to remain so no matter what they do. A 2011 New England Journal of Medicine study (as reported in The New York Times) found that people tend to binge after learning they carry a supposed fat-promoting gene. Other studies have shown, in keeping with common sense, that one of the best predictors of whether someone starting a weight-loss program will stick with it is how strongly the person believes it will succeed. When journalists erode that confidence with misleading messages, the results are easy to predict.
When science journalism goes astray, the usual suspect is a failure to report accurately and thoroughly on research published in peer-reviewed journals. In other words, science journalists are supposed to stick to what well-credentialed scientists are actually saying in or about their published findings—the journalists merely need to find a way to express this information in terms that are understandable and interesting to readers and viewers.
But some of the most damagingly misleading articles don’t stem from the reporter’s failure to do this. Rather, science reporters, along with almost everyone else, tend to treat the findings of published scientific research as the closest thing we have to the truth. But as is widely acknowledged among scientists themselves, and especially within medical science, the findings of published studies are beset by a number of problems that tend to make them untrustworthy, or at least render them exaggerated or oversimplified.
It’s easy enough to verify that something is going wrong with medical studies by simply looking up published findings on virtually any question in the field and noting how the findings contradict one another, sometimes sharply. To cite a few examples out of thousands, studies have found that hormone-replacement therapy is safe and effective, and also that it is dangerous and ineffective; that virtually every vitamin supplement lowers the risk of various diseases, and also that they do nothing for these diseases; that low-carb, high-fat diets are the most effective way to lose weight, and that high-carb, low-fat diets are the most effective way to lose weight; that surgery relieves back pain in most patients, and that back surgery is essentially a sham treatment; that cardiac patients fare better when someone secretly prays for them, and that secret prayer has no effect on cardiac patients. (Yes, these latter studies were undertaken by respected researchers and published in respected journals.)
Biostatisticians have studied the question of just how frequently published studies come up with wrong answers. A highly regarded researcher in this subfield of medical wrongness is John Ioannidis, who heads the Stanford Prevention Research Center, among other appointments. Using several different techniques, Ioannidis has determined that the overall wrongness rate in medicine’s top journals is about two thirds, and that estimate has been well-accepted in the medical field.
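The arithmetic behind estimates like these is not mysterious. A minimal sketch of the simplest form of the positive-predictive-value argument Ioannidis popularized, with illustrative parameter values that are assumptions here, not his actual figures:

```python
def ppv(prior_odds, alpha=0.05, power=0.8):
    """Probability that a statistically significant ('positive') finding
    is actually true, given the prior odds that a tested relationship
    is real. This is the simple no-bias form of the argument; alpha is
    the false-positive rate, power the true-positive rate."""
    true_positives = power * prior_odds
    false_positives = alpha
    return true_positives / (true_positives + false_positives)

# Illustrative assumption: only 1 in 10 tested hypotheses is really true.
print(round(ppv(0.1), 2))  # prints 0.62: over a third of positives are false
```

The point of the sketch is that nothing has to go fraudulently wrong for a large share of published positive findings to be false; testing many long-shot hypotheses at conventional significance thresholds produces that outcome on its own, and adding bias or low statistical power pushes the false share higher still.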