
In late 2011, in a nearly 6,000-word article in The New York Times Magazine, health writer Tara Parker-Pope laid out the scientific evidence that maintaining weight loss is a nearly impossible task—something that, in the words of one obesity scientist she quotes, only “rare individuals” can accomplish. Parker-Pope cites a number of studies that reveal the various biological mechanisms that align against people who’ve lost weight, ensuring that the weight comes back. These findings, she notes, produce a consistent and compelling picture by “adding to a growing body of evidence that challenges conventional thinking about obesity, weight loss, and willpower. For years, the advice to the overweight and obese has been that we simply need to eat less and exercise more. While there is truth to this guidance, it fails to take into account that the human body continues to fight against weight loss long after dieting has stopped. This translates into a sobering reality: once we become fat, most of us, despite our best efforts, will probably stay fat.”
But does this mean the obese should stop trying so hard to lose weight? Maybe. Parker-Pope makes sure to include the disclaimer that “nobody is saying” obese people should give up on weight loss, but after spending so much time explaining how the science “proves” it’s a wasted effort, her assurance sounds a little hollow.
The article is crammed with detailed scientific evidence and quotes from highly credentialed researchers. It’s also a compelling read, thanks to anecdotal accounts of the endless travails of would-be weight-losers, including Parker-Pope’s own frustrating failures to lose and keep off the extra 60 pounds or so she says she carries.
In short, it’s a well-reported, well-written, highly readable, and convincing piece of personal-health-science journalism that is careful to pin its claims to published research.
There’s really just one problem with Parker-Pope’s piece: Many, if not most, researchers and experts who work closely with the overweight and obese would pronounce its main thesis—that sustaining weight loss is nearly impossible—dead wrong, and misleading in a way that could seriously, if indirectly, damage the health of millions of people.
Many readers—including a number of physicians, nutritionists, and mental-health professionals—took to the blogs in the days after the article appeared to note its major omissions and flaws. These included the fact that the research Parker-Pope most prominently cites, featuring it in a long lead, was a tiny study that required its subjects to go on a near-starvation diet, a strategy that has long been known to produce intense food cravings and rebound weight gain; the fact that many programs and studies routinely record sustained weight-loss success rates in the 30-percent range; and Parker-Pope’s focus on willpower-driven, intense diet-and-exercise regimens as the main method of weight loss, when most experts have insisted for some time now that successful, long-term weight loss requires permanent, sustainable, satisfying lifestyle changes, bolstered by enlisting social support and reducing the temptations and triggers in our environments—the so-called “behavioral modification” approach typified by Weight Watchers, and backed by research studies again and again.
Echoing the sentiments of many experts, Barbara Berkeley, a physician who has long specialized in weight loss, blogged that the research Parker-Pope cites doesn’t match reality. “Scientific research needs to square with what we see in clinical practice,” she wrote. “If it doesn’t, we should question its validity.” David Katz, a prominent physician-researcher who runs the Yale Prevention Research Center and edits the journal Childhood Obesity, charged in his Huffington Post blog that Parker-Pope, by listing all the biological mechanisms that work against weight loss, was simply asking the wrong question. “Let’s beware the hidden peril of that genetic and biological understanding,” he wrote. “It can be hard to see what’s going on all around you while looking through the lens of a microscope.” In fact, most of us know people—friends, family members, colleagues—who have lost weight and kept it off for years by changing the way they eat and boosting their physical activity. They can’t all be freaks of biology, as Parker-Pope’s article implies.
The Times has run into similar trouble with other prominent articles purporting to cut through the supposed mystery of why the world keeps getting dangerously fatter. One such piece pointed the finger at sugar and high-fructose corn syrup, another at bacteria. But perhaps the most controversial of the Times’s solution-to-the-obesity-crisis articles was the magazine’s cover story in 2002, by science writer Gary Taubes, that made the case that high-fat diets are perfectly slimming—as long as one cuts out all carbohydrates. His article’s implicit claim that copious quantities of bacon are good for weight loss, while oatmeal, whole wheat, and fruit will inevitably fatten you up, had an enormous impact on the public’s efforts to lose weight, and to this day many people still turn to Atkins and other ultra-low-carb, eat-all-the-fat-you-want diets to try to shed excess pounds. Unfortunately, it’s an approach that leaves the vast majority of frontline obesity experts gritting their teeth, because while the strategy sometimes appears to hold up in studies, in the real world such dieters are rarely able to keep the weight off—to say nothing of the potential health risks of eating too much fat. And of course, the argument Taubes laid out stands in direct opposition to the claims of the Parker-Pope article. Indeed, most major Times articles on obesity contradict one another, and they all gainsay the longstanding consensus of the field.
The problem isn’t unique to the Times, or to the subject of weight loss. In all areas of personal health, we see prominent media reports that directly oppose well-established knowledge in the field, or that make it sound as if scientifically unresolved questions have been resolved. The media, for instance, have variously supported and shot down the notion that vitamin D supplements can protect against cancer, and that taking low daily doses of aspirin extends life by protecting against heart attacks. Some reports have argued that frequent consumption of even modest amounts of alcohol leads to serious health risks, while others have reported that daily moderate alcohol consumption can be a healthy substitute for exercise. Articles sang the praises of new drugs like Avastin and Avandia before other articles deemed them dangerous, ineffective, or both.

What’s going on? The problem is not, as many would reflexively assume, the sloppiness of poorly trained science writers looking for sensational headlines, and ignoring scientific evidence in the process. Many of these articles were written by celebrated health-science journalists and published in respected magazines and newspapers; their arguments were backed up with what appears to be solid, balanced reporting and the careful citing of published scientific findings.
But personal-health journalists have fallen into a trap. Even while following what are considered the guidelines of good science reporting, they still manage to write articles that grossly mislead the public, often in ways that can lead to poor health decisions with catastrophic consequences. Blame a combination of the special nature of health advice, serious challenges in medical research, and the failure of science journalism to scrutinize the research it covers.
Personal-health coverage began to move to the fore in the late 1980s, in line with the media’s growing emphasis on “news you can use.” That increased attention to personal health ate into coverage not only of other science, but also of broader healthcare issues. A 2009 survey of members of the Association of Health Care Journalists found that more than half say “there is too much coverage of consumer or lifestyle health,” and more than two-thirds say there isn’t enough coverage of health policy, healthcare quality, and health disparities.
The author of a report based on that survey, Gary Schwitzer, a former University of Minnesota journalism researcher and now publisher of healthcare-journalism watchdog HealthNewsReview.org, also conducted a study in 2008 of 500 health-related stories published over a 22-month period in large newspapers. The results suggested that not only has personal-health coverage become invasively and inappropriately ubiquitous, it is of generally questionable quality, with about two-thirds of the articles found to have major flaws. The errors included exaggerating the prevalence and ravages of a disorder, ignoring potential side effects and other downsides to treatments, and failing to discuss alternative treatment options. In the survey, 44 percent of the 256 staff journalists who responded said that their organizations at times base stories almost entirely on press releases. Studies by other researchers have come to similar conclusions.
Thoughtful consumers with even a modest knowledge of health and medicine can discern at a glance that they are bombarded by superficial and sometimes misleading “news” of fad diets, miracle supplements, vaccine scares, and other exotic claims that are short on science, as well as endlessly recycled everyday advice, such as being sure to slather on sun protection. But often, even articles written by very good journalists, based on thorough reporting and highly credible sources, take stances that directly contradict those of other credible-seeming articles.
There is more at stake in these dueling stories than there would be if the topic at hand were, say, the true authorship of Shakespeare’s plays. Personal healthcare decisions affect our lifespan, the quality of our lives, and our productivity, and the result—our collective health—has an enormous impact on the economy. Thirty years ago, misleading health information in the press might not have been such a problem, since at the time physicians generally retained fairly tight control of patient testing and treatment decisions. Today, however, the patient is in the driver’s seat when it comes to personal health. What’s more, it is increasingly clear that the diseases that today wreak the most havoc—heart disease, cancer, diabetes, and Alzheimer’s—are most effectively dealt with not through medical treatment, but through personal lifestyle choices, such as diet, exercise, and smoking habits.
Consider the potential damage of bad weight-loss-related journalism. Obesity exacerbates virtually all major disease risks—and more than one in 20 deaths in the US is a premature death related to obesity, according to a 2007 Journal of the American Medical Association study. Obesity carries a price tag of as much as $5,000 a year per person in extra medical costs and lost productivity, for a total cost to the US economy of about $320 billion per year—a number that could quadruple within 10 years as obesity rates climb, according to some studies. (There is, of course, a lot of uncertainty in cost projections, and this research does not account for the impact of the Affordable Care Act.) On top of these costs are the subjective costs of the aches, discomforts, and compromised mobility associated with obesity.
Meanwhile, there’s a wide range of convincing-sounding yet wildly conflicting weight-loss-related claims made by prominent science journalists. People who might otherwise be able to lose weight on the sort of sensible, lifestyle-modification program recommended by most experts end up falling for the faddish, ineffective approaches touted in these articles, or are discouraged from trying at all. For example, innumerable articles (including Parker-Pope’s Times piece) have emphasized the notion that obesity is largely genetically determined. But study after study has shown that obesity tends to correlate with environment, not personal genome, as evidenced by the fact that people who emigrate from countries with traditionally low obesity rates, such as China, tend to hew to the obesity rates of their adopted countries. What’s more, global obesity rates are rapidly rising year by year, including in China, whereas the human genome barely changes over thousands of years. And studies clearly show that “obesity genes” are essentially neutralized by healthy behaviors such as exercise.
It is not encouraging to those trying to muster the focus and motivation to stick to a healthy-eating-and-exercise program to hear that their obesity is largely genetically determined, suggesting—sometimes explicitly—that the obese are doomed to remain so no matter what they do. A 2011 New England Journal of Medicine study (as reported in The New York Times) found that people tend to binge after they find out they carry a supposed fat-promoting gene. Other studies have shown—in keeping with common sense—that one of the best predictors of whether someone starting a weight-loss program will stick with it is how strongly the person believes it will succeed. When journalists erode that confidence with misleading messages, the results are easy to predict.
When science journalism goes astray, the usual suspect is a failure to report accurately and thoroughly on research published in peer-reviewed journals. In other words, science journalists are supposed to stick to what well-credentialed scientists are actually saying in or about their published findings—the journalists merely need to find a way to express this information in terms that are understandable and interesting to readers and viewers.

But some of the most damagingly misleading articles don’t stem from the reporter’s failure to do this. Rather, science reporters—along with most everyone else—tend to treat the findings of published scientific research as the closest thing we have to the truth. But as is widely acknowledged among scientists themselves, and especially within medical science, the findings of published studies are beset by a number of problems that tend to make them untrustworthy, or at least render them exaggerated or oversimplified.
It’s easy enough to verify that something is going wrong with medical studies by simply looking up published findings on virtually any question in the field and noting how the findings contradict, sometimes sharply. To cite a few examples out of thousands, studies have found that hormone-replacement therapy is safe and effective, and also that it is dangerous and ineffective; that virtually every vitamin supplement lowers the risk of various diseases, and also that they do nothing for these diseases; that low-carb, high-fat diets are the most effective way to lose weight, and that high-carb, low-fat diets are the most effective way to lose weight; that surgery relieves back pain in most patients, and that back surgery is essentially a sham treatment; that cardiac patients fare better when someone secretly prays for them, and that secret prayer has no effect on cardiac patients. (Yes, these latter studies were undertaken by respected researchers and published in respected journals.)
Biostatisticians have studied the question of just how frequently published studies come up with wrong answers. A highly regarded researcher in this subfield of medical wrongness is John Ioannidis, who heads the Stanford Prevention Research Center, among other appointments. Using several different techniques, Ioannidis has determined that the overall wrongness rate in medicine’s top journals is about two thirds, and that estimate has been well-accepted in the medical field.
A frequent defense of this startling error rate is that the scientific process is supposed to wend its way through many wrong ideas before finally approaching truth. But that’s a complete mischaracterization of what’s going on here. Scientists might indeed be expected to come up with many mistaken explanations when investigating a disease or anything else. But these “mistakes” are supposed to come in the form of incorrect theories—that a certain drug is safe and effective for most people, that a certain type of diet is better than another for weight loss. The point of scientific studies is to determine whether a theory is right or wrong. A study that accurately finds a theory to be incorrect has arrived at a correct finding. A study that mistakenly concludes an incorrect theory is correct, or vice-versa, has arrived at a wrong finding. If scientists can’t reliably test the correctness of their theories, then science is in trouble—bad testing isn’t supposed to be part of the scientific process. Yet medical journals, as we’ve seen, are full of such unreliable findings.
Another frequent claim, especially within science journalism, is that the wrongness problems go away when reporters stick with randomized controlled trials (RCTs). These are the so-called gold standard of medical studies, and typically involve randomly assigning subjects to a treatment group or a non-treatment group, so that the two groups can be compared. But it isn’t true that journalistic problems stem from basing articles on studies that aren’t RCTs. Ioannidis and others have found that RCTs, too (even large ones), are plagued with inaccurate findings, if to a lesser extent. Remember that virtually every drug that gets pulled off the market when dangerous side effects emerge was proven “safe” in a large RCT. Even those studies of the effectiveness of third-party prayer were fairly large RCTs. Meanwhile, some of the best studies have not been RCTs, including those that convincingly demonstrated the danger of cigarettes, and the effectiveness of seat belts.
Why do studies end up with wrong findings? In fact, there are so many distorting forces baked into the process of testing the accuracy of a medical theory that it’s harder to explain how researchers manage to produce valid findings at all, aside from sheer luck. To cite just a few of these problems:
Mismeasurement: To test the safety and efficacy of a drug, for example, what researchers really want to know is how thousands of people will fare long-term when taking the drug. But it would be unethical (and illegal) to give unproven drugs to thousands of people, and no one wants to wait 20 years for results. So scientists must rely on animal studies, which tend to translate poorly to humans, and on various shortcuts and indirect measurements in human studies that they hope give them a good indication of what a new drug is doing. The difficulty of setting up good human studies, and of making relevant, accurate measurements on people, plagues virtually all medical research.
Confounders: Study subjects may lose weight on a certain diet, but was it because of the diet, or because of the support they got from doctors and others running the study? Or because they knew their habits and weight were being recorded? Or because they knew they could quit the diet when the study was over? So many factors affect every aspect of human health that it’s nearly impossible to tease them apart and see clearly the effect of changing any one of them.
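To see how easily a hidden factor can manufacture a finding, consider a toy simulation, a minimal sketch in Python in which the confounding variable and all effect sizes are invented for illustration rather than drawn from any study discussed here. “Motivation” drives both who adopts a diet and who loses weight, so the diet looks effective even though, in this simulated world, it does nothing at all:

    # Toy simulation of confounding; all numbers are invented for illustration.
    # Motivation drives both diet adherence and weight loss, so the diet
    # appears effective even though it has zero effect here.
    import random

    random.seed(0)
    diet_loss, no_diet_loss = [], []
    for _ in range(10000):
        motivation = random.random()                    # hidden confounder, 0 to 1
        on_diet = random.random() < motivation          # motivated people adopt the diet
        pounds_lost = 10 * motivation + random.gauss(0, 2)  # driven by motivation alone
        (diet_loss if on_diet else no_diet_loss).append(pounds_lost)

    print(sum(diet_loss) / len(diet_loss))        # dieters: roughly 6.7 pounds
    print(sum(no_diet_loss) / len(no_diet_loss))  # non-dieters: roughly 3.3 pounds

A naive comparison credits the diet with about three extra pounds of weight loss that are entirely the confounder’s doing, and a study that never measures motivation has no way to correct for it.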
Publication bias: Research journals, like newsstand magazines, want exciting stories that will have an impact on readers. That means they prefer studies that deliver the most interesting and important findings, such as that a new treatment works, or that a certain type of diet helps most people lose weight. If multiple research teams test a treatment, and all but one find the treatment doesn’t work, the journal might well be interested in publishing the one positive result, even though the most likely explanation for the oddball finding is that the researchers behind it made a mistake or perhaps fudged the data a bit. What’s more, since scientists’ careers depend on being published in prominent journals, and because there is intense competition to be published, scientists much prefer to come up with the exciting, important findings journals are looking for—even if it’s a wrong finding. Unfortunately, as Ioannidis and others have pointed out, the more exciting a finding, the more likely it is to be wrong. Typically, something is exciting specifically because it’s unexpected, and it’s unexpected typically because it’s less likely to occur. Thus, exciting findings are often unlikely findings, and unlikely findings are often unlikely for the simple reason that they’re wrong.
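That last chain of reasoning can be made concrete with a little arithmetic of the kind Ioannidis formalized. Here is a minimal sketch in Python, using illustrative numbers of my own choosing rather than figures from any cited study: given a study’s statistical power, its false-positive threshold, and the prior probability that the hypothesis being tested is true, we can compute how often a “significant” finding is actually correct.

    # Minimal sketch of why "exciting" (i.e., unlikely) findings are so often wrong.
    # All numbers here are illustrative assumptions, not data from any study.

    def prob_positive_finding_is_true(prior, power=0.8, alpha=0.05):
        # prior: probability the tested hypothesis is true before the study
        # power: probability the study detects a real effect when there is one
        # alpha: conventional false-positive threshold (p < 0.05)
        true_positives = power * prior
        false_positives = alpha * (1 - prior)
        return true_positives / (true_positives + false_positives)

    # A mundane hypothesis, where half of those tested turn out to be true:
    print(prob_positive_finding_is_true(prior=0.5))   # about 0.94

    # An exciting, unlikely hypothesis, where one in fifty tested is true:
    print(prob_positive_finding_is_true(prior=0.02))  # about 0.25

Under these assumed numbers, three out of four “significant” results on the exciting hypothesis are false, and that is before publication bias filters the literature toward exactly those results.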

Ioannidis and others have noted that the safeguards science supposedly provides to catch flawed findings—notably peer review and replication—are utterly ineffective at detecting most problems with studies, from mismeasurement to outright fraud (which, confidential surveys have revealed, is far more common in research than most people would suppose).
None of this is to say that researchers aren’t operating as good scientists, or that journals don’t care about the truth. Rather, the point is that scientists are human beings who, like all of us, crave success, status, and funding, and who make mistakes; and that journals are businesses that need readers and impact to thrive.
It’s one thing to be understanding of these challenges scientists and their journals face, and quite another to be ignorant of the problems they cause, or to fail to acknowledge those problems. But too many health journalists tend to simply pass along what scientists hand them—or worse, what the scientists’ PR departments hand them. Two separate 2012 studies of mass-media health articles, one published in PLoS Medicine and the other in The British Medical Journal, found that the content and quality of the articles roughly track the content and quality of the press releases that described the studies’ findings.
Given that published medical findings are, by the field’s own reckoning, more often wrong than right, a serious problem with health journalism is immediately apparent: A reporter who accurately reports findings is probably transmitting wrong findings. And because the media tend to pick the most exciting findings from journals to pass on to the public, they are in essence picking the worst of the worst. Health journalism, then, is largely based on a principle of survival of the wrongest. (Of course, I quote studies throughout this article to support my own assertions, including studies on the wrongness of other studies. Should these studies be trusted? Good luck in sorting that out! My advice: Look at the preponderance of evidence, and apply common sense liberally.)
What is a science journalist’s responsibility to openly question findings from highly credentialed scientists and trusted journals? There can only be one answer: The responsibility is large, and it clearly has been neglected. It’s not nearly enough to include in news reports the few mild qualifications attached to any study (“the study wasn’t large,” “the effect was modest,” “some subjects withdrew from the study partway through it”). Readers ought to be alerted, as a matter of course, to the fact that wrongness is embedded in the entire research system, and that few medical research findings ought to be considered completely reliable, regardless of the type of study, who conducted it, where it was published, or who says it’s a good study.
Worse still, health journalists are taking advantage of the wrongness problem. Presented with a range of conflicting findings for almost any interesting question, reporters are free to pick those that back up their preferred thesis—typically the exciting, controversial idea that their editors are counting on. When a reporter, for whatever reasons, wants to demonstrate that a particular type of diet works better than others—or that diets never work—there is a wealth of studies that will back him or her up, never mind all those other studies that have found exactly the opposite (or the studies can be mentioned, then explained away as “flawed”). For “balance,” just throw in a quote or two from a scientist whose opinion strays a bit from the thesis, then drown those quotes out with supportive quotes and more study findings.
Of course, journalists who question the general integrity of medical findings risk being branded as science “denialists,” lumped in with crackpots who insist evolution and climate change are nonsense. My own experience is that scientists themselves are generally supportive of journalists who raise these important issues, while science journalists are frequently hostile to the suggestion that research findings are rife with wrongness. Questioning most health-related findings isn’t denying good science—it’s demanding it.
Ironically, we see much more of this sort of skeptical, broad-perspective reporting on politics, where politicians’ claims and poll results are questioned and fact-checked by journalists, and on business, where the views of CEOs and analysts and a range of data are played off against one another in order to provide a fuller, more nuanced picture.
Yet in health journalism (and in science journalism in general), scientists are treated as trustworthy heroes, and journalists proudly brag on their websites about the awards and recognition they’ve received from science associations—as if our goal should be to win the admiration of the scientists we’re covering, and to make it clear we’re eager to return the favor. The New York Times’s highly regarded science writer Dennis Overbye wrote in 2009 that scientists’ “values, among others, are honesty, doubt, respect for evidence, openness, accountability and tolerance and indeed hunger for opposing points of view.” But given what we know about the problems with scientific studies, anyone who wants to assert that science is being carried out by an army of Abraham Lincolns has a lot of explaining to do. Scientists themselves don’t make such a claim, so why would we do it on their behalf? We owe readers more than that. Their lives may depend on it.

The tossing in of Gary Taubes here is undeserved. He is one of the most rigorous vetters of studies, and his book "Good Calories, Bad Calories" is basically a cement-block-sized meta-analysis.
You did, however, leave out Jane Brody, who has been putting out unfounded crap and calling it science for decades, influencing millions.
Google Dr. Michael Eades and Tom D. Naughton and Brody's name to find a few good critiques of the bad science her pieces are just boiling over with.
#1 Posted by Amy Alkon, CJR on Wed 2 Jan 2013 at 12:54 PM
There's lots of complaining here but few realistic suggestions on how to improve medical reporting. The truth is that few reporters come from a medical background (why pay $150k in student loans to be a doctor so you can earn $50k as a reporter?).
There's more pressure to produce several stories a day, not to mention the whole blogging industry, most of whom simply reproduce the press releases. Sure, it'd be great if we could spend weeks breaking down studies and interviewing researchers. Have you been in a newsroom lately? If they have a health reporter, they don't have that kind of time.
And while Mr. Freedman brings out the hatchet for The New York Times, I can't help but note that both The Atlantic and Johns Hopkins Medicine International are guilty of the very bad habits Mr. Freedman mentions in this article.
Interesting that he failed to review his own publications. Something CJR should have required if they wanted readers to take this seriously.
#2 Posted by Rebecca X, CJR on Wed 2 Jan 2013 at 08:06 PM
Lately, I have been trying to get journalists more interested in "complex systems" reports -- not easy because most are not published in peer-reviewed journals until long after they are "news." Reporters consider peer review sort of a magic talisman that wards off error and bias. Kudos to CJR for shooting some holes in THAT idea.
If it is any consolation, medical professionals themselves are often woefully under-educated in statistics and sample bias. Here's a link to a stats story I did for The Scientist almost a decade ago:
http://www.the-scientist.com/?articles.view/articleNo/15599/title/From-P-Values-to-Bayesian-Statistics--It-s-All-in-the-Numbers/
... and here's one I did for a multidisciplinary Columbia journal before that:
http://www.columbia.edu/cu/21stC/issue-3.3/ross.html
#3 Posted by Steve Ross, CJR on Wed 2 Jan 2013 at 10:50 PM
One of the biggest problems goes unmentioned by this article. It's that of "p values," or what counts as an acceptable false-positive error rate. Medical research in general uses a .05, or 5 percent, p value. In the "hard" physical sciences, it's 1/100 of 1 percent.
Now there's good reason for medicine to have a looser p value than physics. Testing quantum theory doesn't involve human lives. Nonetheless, modern medical research could easily adopt, and should, a p value of no looser than 3 percent, maybe even a bit tighter than that.
More rigorous studies will produce less "dueling research." (And fewer fringe scientists able to produce such research.)
#4 Posted by SocraticGadfly, CJR on Thu 3 Jan 2013 at 03:33 PM
This article strikes this reader as high-handed and deeply under-informed, especially given its omniscient tone -- starting with the presumption that since weight-loss experts say otherwise, and the study she quoted had limitations, Tara Parker-Pope must be wrong. Books will one day be written about our evidence-resistant cultural mythologies concerning weight and health. Obesity is resistant to change; the data are clear on that. Dietary fat, news flash, is not a health risk; the data are clear. The obesity epidemic is a moral panic; Paul Campos has documented this well. It is a symptom, not a cause, of the illnesses of civilization, something Gary Taubes has documented well. The casual manner in which the writer swipes at Taubes only diminishes his credibility, and the notion that the professional consensus on a health subject should take precedence is really quite odd, given how much we now know about how wrong the professional consensus can be. The problems with health reporting are many but the writer managed to miss the big ones entirely -- the control of the clinical trial database by private industries and the construction of the literature by hired medical writing professionals.
#5 Posted by Paul Scott, CJR on Fri 4 Jan 2013 at 03:28 PM
The statement that "studies routinely record sustained weight-loss success rates in the 30-percent range" is absolutely incorrect, unless you are referring to weight loss surgery. I have never seen a single study that demonstrated long-term weight loss of close to 30%. Please provide a reputable reference for this statement. Also, please clarify what you mean when you say 30% - of original weight, excess weight, etc.
#6 Posted by Carrie, CJR on Sat 5 Jan 2013 at 03:06 PM
So Ms. Parker-Pope is wrong because experts say she is? Her article seems pretty representative of reality in my world. In my world, only the people having weight loss surgery keep it off.
#7 Posted by Connie, CJR on Sat 5 Jan 2013 at 03:55 PM
The medical fads that are produced by both health journalism for the public and by the premature acceptance by physicians of studies published in medical journals are potentially damaging to the people who apply the conclusion to their lives. The premature acceptance is also damaging to medical progress to the extent that corroboration by other studies is not demanded. The other half of the issue is why some of these significantly flawed studies were ever conducted and why they were published.
#8 Posted by Carol Vassar, MD, CJR on Sat 5 Jan 2013 at 04:09 PM
Mr. Freedman makes some very valid points, but evidence and research are best understood in the context of what works, for whom, and under what circumstances.
Making lifestyle changes instead of relying on medicine is a point of view advocated by public health and preventative medicine advocates vs. medical specialists. Are they wrong? Are they right? Well, it is one point of view and it is one variable or set of variables. However, does it apply to every person on earth regardless of gender, socio-economic status, race, culture, genetics, co-occurring disease etc.?
Yes research should be questioned - and as the author notes that should include not only the research cited by Ms. Parker-Pope, but also the research cited by this author as proving the sources used by Ms. Parker-Pope wrong.
Journalists have a responsibility to present diverse viewpoints in context.
#9 Posted by K. Benson, CJR on Sat 5 Jan 2013 at 06:00 PM
Measurement errors and confounders usually cannot be avoided in population-based research, but they can be minimized. Scientists and students I know spend a lot of effort trying to minimize errors and control for confounders in their studies. The author seems to overlook this and creates the impression that medical scientists conduct research however they want. Like any human endeavor, science is limited by the methods currently available. The key thing is to stay critical when interpreting research findings and to keep the limitations of the methodologies in mind. However, I agree with Rebecca X that journalists probably do not have time to go through this critical process. Discussion of the limitations of health studies may not be interesting to the general public either. Maybe increasing the public's appreciation of science and scientific research can prevent readers from being misled.
#10 Posted by Mia, CJR on Sat 5 Jan 2013 at 06:04 PM
Couldn't agree more with the suggestion that science journalists act more like political journalists--the job should not be to cheerlead and champion science, but to skeptically, intelligently, independently report the story behind the surface. It's true that most journalists can't get deep training in statistics--but hey, neither do most MDs, as any biostatistician will be happy to tell you. Something any journalist can do is make friends with a properly skeptical biostatistics expert who can provide a bit of a BS filter. That said, one of the first places to start fixing things is the Atlantic, especially online--while the author of this piece does admirable work, many others there are generating really silly and misleading stuff.
#11 Posted by Kat, CJR on Sat 5 Jan 2013 at 08:07 PM
I was baffled by Carrie's comment above that "The statement that 'studies routinely record sustained weight-loss success rates in the 30-percent range' is absolutely incorrect," since studies abound to back that statement up. But then I realized that she misinterpreted my statement. The "30 percent" refers not to the amount of weight lost, but to the percent of people who maintain any significant weight loss. I concede my phrasing was somewhat ambiguous; sorry about that. The issue is not whether people can lose large amounts of weight, but whether people can maintain moderate weight loss. They can indeed, and people in behavioral programs do so routinely.
#12 Posted by David Freedman, CJR on Sat 5 Jan 2013 at 10:12 PM
It would be helpful to know how "successful" and "long-term" are defined in those studies in which 30% are successful (it would still be nice to see references to these studies). Many studies define a 5% weight loss at 1 year as long-term, successful weight loss. Is that how these studies define it?
#13 Posted by Carrie, CJR on Sun 6 Jan 2013 at 12:19 PM
Three points:
1. Medical reporters need to LISTEN to their sources and ALLOW those sources to rein in the hyperbole that sells papers. Reporters all want to hide behind the 'journalist's ethics' or 'my editor's rules' that forbid or discourage oversight of the final product by the source. It makes both source and writer look stupid on occasion. Who knows better, the knowledgeable source or the ignorant editor? And I use the "I" word advisedly.
2. The parachute paradox prevents some valuable RCT-type studies. Nobody has proven that it is safer to jump out of a plane with a parachute - the 'control arm' lacks volunteers.
3. Money talks. A paper recently published in a prominent medical journal claimed success where there was over 80% failure. The paper stank, but the sponsor of the study is a big hitter.
#14 Posted by wikiderm, CJR on Sun 6 Jan 2013 at 07:06 PM
David, the notion that "weight-loss success rates in the 30-percent range" supports the claim that people “can maintain moderate weight loss” obviously raises two questions: 1) what are you defining as long-term success, and 2) how are you defining “moderate”? It would be nice if you could cite a handful or so of the studies that you think support your take on weight loss success.
It can be argued also that in pointing out the importance of individual differences in weight loss success, one should also point out the importance of individual differences in weight loss failure. It is now recognized that obesity can arise for very different reasons in different individuals, and that some causes of severe obesity (e.g., MC4 receptor mutations) are incredibly resistant to treatment, likely because the biobehavioral control system responsible for energy homeostasis embodies a severe defect in coupling feedback signaling related to fat storage to appropriate effector responses.
#15 Posted by Karl J. Kaiyala, CJR on Sun 6 Jan 2013 at 07:07 PM
Regarding wt loss. The "experts" refer to "moderate" wt loss, most often meaning a 5% wt loss. Try weighing 220 lb and losing 10 lb - is that "success?" Wadden's data, which I guess is what the writer is referring to, showed
#16 Posted by Richard, CJR on Sun 6 Jan 2013 at 08:37 PM
Yes, the participants in eight different types of weight-reduction programme in Weight-loss outcomes: a systematic review and meta-analysis of weight-loss clinical trials with a minimum 1-year follow-up (J Am Diet Assoc. 2007 Oct;107(10):1755-67) lost 5-8.5 kg by six months, then regained. By 48 months, none had returned to base weight, but their loss was now only 3-6 kg. As Richard says: if a person weighing 40 kg more than their recommended weight loses 5 kg, she's unlikely to celebrate that much - and if, over the next couple of years, she regains 2 kg, does that qualify as 'maintaining moderate weight loss'? I don't think so...
#17 Posted by Mandi, CJR on Mon 7 Jan 2013 at 08:40 AM
Commenters above make good points. I would add that David Freedman oversimplifies and unfairly disparages a thoughtful and carefully written piece by Tara Parker-Pope. I recommend people read it for themselves. Also, what I find shocking to read in a magazine devoted to my profession is Freedman implying that covering the challenges of weight loss (as Parker-Pope did) is bad because it makes people eat. Yes, let's all be better social engineers in our reporting.
I expect better from CJR.
#18 Posted by Nick, CJR on Mon 7 Jan 2013 at 11:31 AM
I second Nick's recommendation (#18) to read Parker-Pope's article. I would also point out that Barbara Berkeley, M.D., is also Barbara Berkeley, saleswoman and diet book author, who has been in the weight loss business for quite some time. The failure to acknowledge this conflict of interest is corrosive to credibility. I also find annoying Freedman's appeal to anec-data, exemplified by the comment "In fact, most of us know people—friends, family members, colleagues—who have lost weight and kept it off for years by changing the way they eat and boosting their physical activity." Really? I think it far more likely that most of us know very few people who are successful in this task "for years" while knowing many whose success was ephemeral. I personally know just three long-term weight loss succeeders, and know that their success reflects the ability to endure a constant nagging desire for greater caloric intake, and the ability to engage in regular and substantial exercise.
#19 Posted by Karl J. Kaiyala, CJR on Mon 7 Jan 2013 at 03:52 PM
What the article, most health/medical writing, AND most research tend to ignore is the issue of heterogeneity. Practically all studies are based upon averages, most often of a select group of patients. The findings may well apply to similar patients. But a significant portion of people are outliers, and the data simply do not apply well to them.
The CATIE trial of psychoactive drugs showed that while one class of drugs worked best in one group of patients, it often did not work well in another group; those patients responded better to a different class of drugs.
Researchers, physicians, and people who write about medicine need to begin to grapple with these issues of heterogeneity; it is the only way we are going to attain the promise of individualized medicine.
#20 Posted by Bob Roehr , CJR on Tue 8 Jan 2013 at 11:54 AM
David, this article gets at important larger issues plaguing health journalism that must feed the news beast. Attributions would be helpful, though. One point that caught my eye is the startling claim that Alzheimer's is best dealt with through lifestyle choices such as diet and exercise. I would much appreciate references to studies linking exercise/diet-induced reduction in amyloid brain deposits with actual changes in cognitive function. However advantageous, it appears that exercise is being oversold far beyond its capacity to stave off aging and the inevitable--yes, death. Thanks for further info and your thought-provoking feature.
#21 Posted by Jessica Seigel , CJR on Tue 8 Jan 2013 at 04:09 PM
It was hard to get past the snarky paragraph that gives a pretty unconvincing reason to devalue Gary Taubes's presentation of why carbohydrates, not dietary fat, are making people obese and sick - apparently we should doubt Taubes's writing because it "leaves the vast majority of frontline obesity experts gritting their teeth." But then, in passing, Freedman adds (referring to the implications of Taubes's review of decades of research) that "the strategy sometimes appears to hold up in studies." This is followed by an offhand mention of how we all know that eating too much fat is associated with increased health risks, which uses the phrase "eating too much fat" to link to--surprise, NOT a page about the dangers of a high-fat diet--but instead to a page about the risks of a high-protein diet. For me, I guess that did prove the point of the article, but probably not in the way Freedman intended.
#22 Posted by Patty Dineen, CJR on Wed 9 Jan 2013 at 11:01 AM
I found Freedman's article to be a combination of good advice that medical writers have taken and shared for decades, and wild overreaching about the problems of medical reporting. He argues that medical reporters should not trust published studies, because many are wrong. And his alternative is: nothing. He offers no way out. In a substantial post at the Knight Science Journalism Tracker, I strongly disagreed with his assertions and conclusions. You can see the post here:
http://ksj.mit.edu/tracker/2013/01/columbia-journalism-review-personal-heal
Thanks.
#23 Posted by Paul Raeburn, CJR on Wed 9 Jan 2013 at 04:13 PM
I appreciate the close attention that Paul Raeburn has given my article in his blog post, which I encourage all to read. But I'm a little baffled by his three main points.
One is that he says I don't admit exceptions. I thought that was pretty clearly implied, but OK, there are exceptions. Glad we cleared that up.
His second point is to note (as I did myself in the article, and have written about at length elsewhere) that I necessarily end up on slippery ground when I use some of the standard techniques of science journalism to argue that science journalism is highly misleading--because that would suggest my own criticism might be misleading. This is the famous "liar's paradox." But Raeburn seems not to understand its essence, arguing that it negates my claims. He's flat-out wrong. As many first-graders know, the liar's paradox leaves one unable to clearly prove truth or untruth--that's why it's called a paradox, and not the liar's error. All of mathematics suffers from essentially the same problem, and it doesn't make mathematics wrong, or "a house of cards."
His final main point is that I offer no way out. Really? I think I pretty clearly say in the article that science journalists ought to be constantly reminding readers that published research tends to be highly unreliable. I don't think advice gets more straightforward, simple and practical than that. Well, maybe what he means is that he doesn't find that advice palatable. Not a lot of fun to have to tell readers they may need to ignore everything you just told them, is it? No wonder so few science journalists do it. But it's the right thing to do.
I understand that people who have built their careers entirely around unskeptically transmitting the claims of published research find it disconcerting to be confronted with the notion that they have been highly misleading. But I'd hope it would lead to constructive exploration of the problem, and not just defensive efforts to discredit it.
#24 Posted by David Freedman, CJR on Thu 10 Jan 2013 at 02:01 PM
Schools across the country are forced to buy millions of textbooks with thousands of factual errors (not politically correct errors). Science, history, social studies, etc. School districts and state departments of education are not yet making publishers correct them, nor including factual accuracy as a requirement for adopting textbooks. So, millions of students are being mis-educated daily.
Examples:
A. Linda Ronstadt is a crystal.
B. Southern border of California is Rio Grande.
C. Equator goes through southern US.
D. Dodo bird was first species to go extinct.
E. Pounds is a unit of pressure.
F. Rosetta Stone was found in 1899. (It was 1799.)
G. Five different definitions of "rock".
See hundreds more at www.textbooktrust.org.
Prof. John Hubisz of North Carolina State University (Raleigh) conducted a study of science textbook series and found thousands of errors. The study was sponsored by the David and Lucile Packard Foundation. See www.science-house.org/middleschool, hubisz@ncsu.edu.
School systems should require a warranty on factual accuracy for any textbook they use. All textbooks should be 100% authoritative. The process can be very simple, very inexpensive, and very effective.
In addition to public schools, other textbook buyers are Catholic and other religion-sponsored schools, the Department of Defense dependent schools, and homeschoolers.
See our website www.textbooktrust.org.
Carl Olson
Founder, Textbook Trust
P.O. Box 6102
Woodland Hills, California 91365
818-223-8080
#25 Posted by Carl Olson, CJR on Thu 10 Jan 2013 at 03:10 PM
Reading Freedman's table-setting article and the four pieces that followed it was a positively Orwellian experience, beginning with his statement that fully two-thirds of published scientific research findings are wrong.
So...the intrepid CJR team encounters a man at the border of a country who warns, "All the people in my country are wrong two-thirds of the time."
Then it roams around that country with notebook and camera without ever again addressing that warning and its obvious implications.
The lively science journalist for HuffPo is profiled but we never learn how - or whether - she successfully navigates the minefield of predominantly inaccurate scientific research. Media coverage of tainted food is condemned despite an acknowledgement of the lack of reliable scientific findings linking pathogens and food sources - two-thirds of which would, apparently, be wrong anyway. A photographer's project on hydrofracking is presented without a mention of the disputed research findings in that battlefield of science.
The final step into the Twilight Zone came in Freedman's bio box, in which he admits that "he has been guilty of all the failures of health journalism he describes in this article." Really? REALLY?? And was the article I just read a scene of his crimes?
On that front, he helpfully advises: "Of course, I quote studies throughout this article to support my own assertions, including studies on the wrongness of other studies. Should these studies be trusted? Good luck in sorting that out! My advice: Look at the preponderance of evidence, and apply common sense liberally."
And where would we, his readers, find the "evidence" by which we can sort out whether his "evidence" should be trusted? How will we know which one-third of it is correct? If common sense is so helpful, why bother with unreliable scientific research at all? But didn't people once argue that common sense proved the world was flat because we don't all fall off the planet? How is "common sense" different from the "conventional wisdom" we all know we should question - because it is, demonstrably, so frequently wrong?
After reading all five articles, I was swept away by the frustration and futility of reading, much less writing, about scientific research at all. If that was the intended result, bravo! Mission accomplished.
Diana B. Henriques
#26 Posted by Diana B Henriques, CJR on Fri 11 Jan 2013 at 11:01 AM
Good article, I really enjoyed it. "Look at the preponderance of evidence, and apply common sense liberally." I would add to that, "science/health writers should be research-literate, able to understand what a well-designed study is, as opposed to one that is not remotely well-designed."
The issue of blending personal experience with science and reporting on the combination, which he raised at the beginning of the article, is one I think is worthy of further conversation. Reporters, like bloggers, are sometimes trying to make sense of their own experience in light of "what science says." Being very careful about examining and explaining one's dual motives to "tell what's true for you" and "report what's true" seems essential to writing fairly about health and science.
#27 Posted by Jess, CJR on Thu 17 Jan 2013 at 12:09 PM
Thanks for a good article. My background is in biomedical research, and most of the points in the article are consistent with my own experience regarding the flaws and inconsistencies found in modern research. An important part of many graduate school programs is "journal club," where students and faculty gather together to examine and critique the leading peer-reviewed papers of the day. Many of the most influential, widely cited papers also are among the most widely torn apart. Ioannidis' writings are squarely on target.
Now, take the ideas from this CJR article as they apply to biomedicine, and apply them to fields where the variables are less well-defined, are more complex, and where hypotheses are less easily tested under controlled conditions. What might be said about psychology? Economics? Climate change?
#28 Posted by Boo, CJR on Thu 17 Jan 2013 at 05:46 PM
I agree with the first commenter that this is harsh on Taubes. He's really very focused on the science as proved via reliable studies/clinical trials, and his main gripe with the conventional wisdom on what makes for healthy eating is that there's not enough evidence for it. Perhaps Taubes's initial support for the Atkins-style diet was prompted by his own personal experience -- he easily lost a lot of weight -- but he moved way beyond that, and is now involved with a not-for-profit trying to set up studies -- I think with NIH -- so we can really find out/prove some of the claims about what we're actually supposed to be eating.
#29 Posted by Sophie Roell, CJR on Thu 17 Jan 2013 at 08:16 PM
Wow, there is quite a lot of contention here and (in my experience in America) support for the idea that weight loss is insanely difficult.
Maybe people really are just physically wired to make weight loss almost impossible--I can't generalize from my own case, but I'm at least anecdotal evidence: I lost 55 pounds in three years after being moderately overweight for ten.
I do find the American insistence on this idea strange. My understanding is that obesity levels have been rising steadily in America for a long time and I would imagine this is almost entirely due to diet, behavior, and lifestyle (not as a consequence of highly specific mutations, or an increase in people with fat genes, etc).
I'm originally from Hong Kong and have also spent time on mainland China--Hongkongese can run on the lean side, but nowhere near as much as mainlanders typically do, I think. I believe we're probably genetically very similar. Hence: other factors seem more important.
Looking across cultures/ethnicities and subgroups *within* cultures, I've never seen a strong reason to believe that your weight is wired-in so much as it has to do with lifestyle. Maybe *losing* weight is actually equally hard for everyone, and some cultures are more likely to promote unhealthy lifestyles that put you there earlier, but the potential for weight gain (looking at large populations) seems at most only slightly dependent on biological predispositions. Obviously on the individual level your biology could be anything (though maybe you shouldn't assume it is too rare/special/difficult to work with), but weight gain/loss broadly seems to correlate a lot better with geography, culture, and behavior.
Given my thoughts about this, I tend to conclude the following: even if we are willing to go so far as to say that we shouldn't blame people for their inability to lose weight, citing the irrelevance of diet and behavior (a claim I already find dubious, but let's allow it), there is at least very strong evidence that the lifestyle you are brought up with will ultimately be an insanely huge factor in your long-term health and long-term body weight. People who raise their kids effectively, trying intelligently to keep them fit and healthy and well-fed, will very likely find those kids NOT nearly as overweight as the average American.
#30 Posted by J. W., CJR on Fri 18 Jan 2013 at 04:04 AM
I found the slam at Gary Taubes ironic, in that much of his "Good Calories, Bad Calories" book was an involved take-down of the incredibly poor quality of obesity research and other contemporary medical research! Granted, it perhaps should've been two books--one showing how miserable the state of affairs has been in dietary science, the other promoting the low-carb hypothesis--but he clearly calls for his ideas to be tested rigorously, and in direct competition with the (failed) reigning notions.
I just finished reading Freedman's "Wrong" and thoroughly enjoyed it, or most of it anyway--again, when discussing these diet issues the author makes sloppy mistakes like thinking dietary fat is the bogeyman.
It appears, David, that on this issue you've (consciously or not) bought into the wrong clan of experts!
#31 Posted by GregB, CJR on Wed 23 Jan 2013 at 03:35 PM
If Gary Taubes had written Good Calories Bad Calories as two books, as GregB suggests, I'd instantly declare myself the world's biggest fan of the obesity-science-has-gotten-it-all-wrong book. I agree that Taubes is to be commended for fiercely advocating that his own arguments be put to the test in scientific studies (and he is in fact working tirelessly to try to make that happen). But I'd also like to reiterate that it is an enormous mistake to put too much faith in the findings of any studies, and especially medical studies, and most especially nutrition and obesity studies. I don't have space to reiterate all the reasons here, but let me just once again point out that these studies have been conflicting with one another forever, and it is just plain silly to imagine that the problems in these studies that have led to these starkly conflicting results are about to go away just because someone claims they've really figured out how to do a good job with this sort of study.
As a recent example of how lousy a job scientific studies have done in illuminating the causes of and likely solutions to obesity, I'd point to this obesity "mythbusting" article that just ran in the New England Journal of Medicine, http://www.nejm.org/doi/full/10.1056/NEJMsa1208051. I would echo the comments of many respected experts who have weighed in to say that some of this article's claims are highly dubious--for example that the benefits of gradual weight loss and of setting realistic goals are "myths," while diet drugs are the way to go. Really? One way scientists manage to come up with this sort of nonsense is by failing to account for the enormous differences between the highly artificial (and typically highly biased) conditions that obtain in medical studies and what happens in the big, complex, messy real world. (And needless to say, any number of science journalists passed on the findings of this study without any skepticism or qualification.)
By the way, GregB, thanks for the kind words about Wrong, but I'm pretty sure I never said there or anywhere else that dietary fat is the root of all evil in obesity, you must have misread me. (Or maybe I was unclear.) I try to clearly argue that there is no bogeyman in obesity, and (sadly) no silver bullet. I do say that reducing both dietary fat and simple carbs tends to help most people achieve and stick to a healthier long-term diet, but even that is just one of many, many things that have to be done to help most people to succeed. Fortunately, all of those things are perfectly doable.
#32 Posted by David H. Freedman, CJR on Sun 3 Feb 2013 at 02:39 AM
In other words, we need to be more skeptical of science reporting. (My casual read-through of the article found the notion of skepticism oddly absent, though 'doubt' is mentioned as an excellence scientists esteem.) I am frequently alarmed in this 'Age of Science' by the prevalence of the belief that, standing on the bedrock of empirical proof, we have left all uncertainties behind. But natural science without the ability to defer final judgment (a reasonably deployed skepticism) is really no better than divination.
"Scientists now think..." is a headline that positively makes me squirm. Imagine a headline about what politicians "now think." Our first question would be: politicians of which party? People assume scientists are utterly impartial. Yet scientists' 'findings' only reach the public through the interpretations of science reporters. Whether it's the health benefits of wine, the latest correlations of neuroscience, the "gay" gene, or what have you, "science" appears to speak with authority. The public needs to be educated as to why that is in fact an illusion. They need a dose of philosophy of science (and insight into the power of suggestion).
Mix the pragmatic [results-oriented] anti-intellectual attitude of the general public, especially in the US, with credulity in the unambiguous results of scientific findings, and you have the situation here described: a near faith-based production and consumption of research-reporting.
The multitude craves certainty and guidance. Science reporters, as popularizers, respond to that need. Couching their articles in the epistemic-methodological qualifications of truly responsible/impartial reporting would only frustrate that need by watering down the message of practical applicability. Reporters are between a rock and a hard place: they must distill the findings "now" emerging while offering guidance to readers seeking advice. To do the first thing right (interpreting results for their intrinsic generalizability and import) they would need to have an overview of their field and its history that perhaps only specialists and historians possess. To deliver the goods of giving direction they require a capacity to direct that is anything but impartial--one that is in essence political. They must fail in the former if they yield to the temptation of the latter.
#33 Posted by C.I. Rothlind, CJR on Tue 5 Feb 2013 at 01:51 PM
Taubes is a hack. He does exactly what this article says most writers do.
#34 Posted by Thomas S, CJR on Tue 5 Feb 2013 at 03:58 PM
It is really getting old hearing about "hired medical writing professionals" being an issue. I am sure there are a few bad apples as there are with any field. However, every writer I know is a highly ethical person who is interested in science and clearly and accurately communicating research results. We summarize the data we are given to work with; we do not make up stuff as implied by "constructing literature." Perhaps the commenter would like to familiarize himself with our professional code of conduct at http://www.amwa.org/default.asp?id=114
#35 Posted by Trisha, CJR on Sat 16 Feb 2013 at 03:08 PM
I agree with the folks above who note that not all science writers are made equal. I read a lot about these subjects and I've found that Gary Taubes is a reliable and trustworthy guide to the science as best he understands it. He does his homework. He is candid about his own ideas and what it would take to test them, too. The 7th paragraph in this article, the one that goes 'one such study, another study, etc.,' is totally disingenuous. It is designed to imply that there is a whole lot of noise about diet, but actually includes two of Taubes's articles, which say much the same thing: diets that are lower in fat and protein are necessarily higher in sugars and simple starches and are, therefore, obesogenic. I'm not aware of any research that has shown that it's possible to improve health or lose weight without reducing sugar or intake of simple carbohydrates. If anyone knows of something I'm missing, please write back.
#36 Posted by David Bornstein, CJR on Wed 27 Feb 2013 at 12:58 PM