Arguing that The Daily Mail had “overstated” the risk of eating rice, an analysis at the UK’s excellent NHS Choices website, run by the publicly funded National Health Service, explained that:
The main limitation with cohort studies such as the ones pooled in this review is that they may not have adjusted for all relevant factors that could be associated with intake of rice and with risk of diabetes. These include other dietary factors such as alcohol intake, physical activity and being overweight or obese. Also, studies assessing food intake can be particularly prone to some inaccuracy. Participants usually have to estimate their typical dietary intake, which can be hard to recall and variable over time.
Despite these significant limitations, however, journalists continue to seize upon the latest observational research studies and tout their findings with little to no qualification. As Gary Schwitzer put it in a post for HealthNewsReview in early March:
Last week we saw stories about citrus fruits protecting women from stroke. This week it was stories about “sleeping pills could kill 500,000.” This week we also had stories about “omega-3 fatty acids protecting the aging brain” and about “Vitamin A may slash melanoma risk.” Sometimes it’s stories about lower risk (or protection), sometimes it’s stories about higher risk.
Week after week, year after year, for six years now, we have written about news stories that fail to explain the limitations of observational studies to readers. They use causal language, suggesting cause-and-effect findings, for studies that cannot prove cause-and-effect.
Unfortunately, such stories are like candy to undiscerning readers, but reporters who want to do better can turn to HealthNewsReview’s useful primer on observational studies, “Does the Language Fit the Evidence? Association Versus Causation.” It’s a simple, clear-cut guide to how various kinds of studies are designed and how journalists can evaluate them with a more skeptical eye.