The Observatory

Playing the study game

David Freedman responds to critics of his article about bad health reporting
January 9, 2013

Recently in the pages of the CJR, I took on science journalism’s lack of skepticism and misuse of published scientific studies, especially with regard to personal health. (“Survival of the Wrongest,” January/February 2013.) To highlight some of the problems, I looked at reporting on obesity and long-term weight loss. I argued that this reporting has been focused on catchy, simplistic—and utterly wrong—ideas, such as that long-term weight loss is nearly impossible, or that it can be easily achieved by cutting out one major food group or another.

The fact that reporters are able to support these misleading and conflicting claims with published studies is a clear sign that something has gone badly wrong with both science and science journalism. Specifically, I charge that journalists have consistently failed to point out (if they know it at all) that the findings of medical studies have been inarguably shown by scientists themselves to be highly unreliable. Even worse, journalists are taking advantage of the problem by enlisting studies to make their arguments seem like scientific truth, potentially wreaking enormous damage on their readers’ health.

Not surprisingly, some readers object to my argument, or parts of it. (As I noted in the article, science journalists, unlike scientists themselves, tend to react badly to the assertion that published scientific findings are untrustworthy.) Let me address some of the posted comments.

First off, I’m a little surprised and, frankly, annoyed, that CJR readers would really need me to dig up for them citations of some of the many studies that have come up with positive long-term weight-loss results. Aside from disliking having to do people’s homework for them, I hate playing the study game in general—there are studies to back up almost any assertion. After I provide the citations to those who demand them, they will invariably then tell me why these studies are flawed, or don’t really back me up, and they will refer me to “good” studies that back them up. No one learns anything from these dueling-studies debates. But fine, here are a few citations, from over the years:

“Long-Term Weight Loss Maintenance,” American Journal of Clinical Nutrition
“Evidence for Success of Behavior Modification in Weight Loss and Control,” Annals of Internal Medicine
“Effectiveness of Weight Management Interventions in Children,” Pediatrics
“Long-Term Weight Maintenance after an Intensive Weight-Loss Program,” Journal of the American College of Nutrition

In the recently halted “Look AHEAD” study, a group of thousands of older, significantly overweight people with diabetes—considered the toughest group with which to achieve long-term weight loss—lost and kept off an average of 5 percent of their body weight for ten years. Yes, it’s only 5 percent (again, in a very tough group). And it’s true that some studies with relatively high percentages of people who keep weight off long term use as little as 5 percent of body weight as a threshold, though some find quite a bit more. (Plus, remember, the studies typically report averages or minimums, which means there were many in the Look AHEAD and other studies who lost more, and sometimes a lot more.) But I wouldn’t be so quick to dismiss a 5 percent loss as nothing to celebrate. For one thing, it’s proof of concept, putting the lie to claims that we are genetically fixed to have a certain weight. For another (the Look AHEAD study being a confusing exception), most studies have found that even small amounts of weight loss can make a big difference in health.

The failure of most studies to find long-term weight-loss maintenance is largely due to the poor weight-loss programs that those studies expose subjects to. These studies tend to rely on simplistic approaches like prescribing calorie cuts and exercise without good guidance and support, or cutting carbs or fat, or handing over a pile of weight-loss literature, or—big high-tech twist—directing people to a website with weight-loss advice. Usually ignored is the one technique that’s been proven to work over and over again over the long term: a comprehensive behavioral support program.

Regarding Karl J. Kaiyala’s comment that I dared to quote Barbara Berkeley, a physician who—gasp—sells books and diet programs: Another great myth and prejudice of science reporting is that we want to heed only pure scientific studies, not the observations of experts who are actually out in the real world and therefore have all kinds of biases. I’ve got news for everyone: Ask any scientist—as apparently relatively few science writers do—about how free the academic science world is of bias, politics, and the corrupting drive for success.

These biases exist everywhere; they’re part and parcel of being a human being. We have to try to use our knowledge of the world to avoid being taken in by these biases. Turning your back on most insights that come from people who actually work in the real world putting ideas into practice, while embracing most of those that come from academia, is not a really smart way to do it. In the realm of weight loss (among others), I’d much prefer to listen to experts who are actually running successful programs than to academics who are publishing immaculate studies. Fortunately, I don’t have to choose, and I try to listen to both.

And in further reference to Kaiyala’s note: I’m fascinated to hear he’s known only three people in his entire life who have lost and kept off weight. If he says so. Maybe it’s geographic—I know more people on my block who have pulled this off than he’s known anywhere in his entire life, go figure. Sure, those who succeed are a small percentage of those who try—thanks to the inappropriate approaches most people take to trying to lose weight, approaches they frequently read about from our leading science writers in our leading publications, supported by the findings of published studies. But I don’t think the success stories are quite as rare as Kaiyala’s personal observations would suggest.

In any case, I wonder how Kaiyala’s three acquaintances defeated the supposedly enormous forces of genetics arrayed against them. And whatever the means, why couldn’t others do it, with the right support? How did the vast majority of the human race pull off this amazing, biology-defeating feat for thousands and thousands of years, up until the past few decades? Again, we need to apply common sense. This isn’t all about biology. It’s about the interaction between biology and the environment. We can’t much control biology (even though it’s what most scientists, and science writers, focus on), but we absolutely can control our environments. That’s what a behavioral program does.

How have so many scientists and science writers missed these obvious points? Let me suggest an answer: Because behavioral approaches are complex and messy—like human beings and the world around us—and don’t lend themselves to single-variable randomized, controlled trials. That means they don’t make for highly publishable papers, they don’t give editors the breakthrough, study-backed revelations they’re looking for, and they don’t give the public the silver-bullet solutions it craves. Science writers who can’t tell the difference between highly publishable claims and claims that hold up in the real world are doing society a disservice.

I join the commenters in urging people to read Tara Parker-Pope’s New York Times Magazine article about the near impossibility of sustained weight loss in its entirety. I didn’t criticize the article for being discouraging to dieters. I criticized the article for being wrong. And I noted that being wrong, in this case, creates needless discouragement to dieters that could in theory be measured in thousands and perhaps even millions of person-years of life lost.

Amy Alkon objects to my taking Gary Taubes’ very-low-carb-diet-promoting Times Magazine article to task, noting that Taubes has been a tireless campaigner against misleading studies. That’s true. Taubes (who is a friend) is in many ways a role model for the appropriately skeptical science writer. But I do think he’s off track with the very-low-carb stuff, mostly because people have so much trouble sticking with that sort of extreme diet. But people should read Taubes for themselves and decide. At a minimum, they’ll learn a lot about the problems with misleading studies.

Rebecca X objects to my omitting the institutions I work with, including The Atlantic and Johns Hopkins, when criticizing the insufficiently skeptical transmission of study findings. OK, The Atlantic and Johns Hopkins do it, too. Every publication and organization I work with does. I do it myself, all the time. Am I credible now?

David H. Freedman is a contributing editor at The Atlantic, and a consulting editor at Johns Hopkins Medicine International and at the McGill University Desautels Faculty of Management.