The FDA is supposed to reach a final decision on the safety of Bisphenol A (BPA)—a plastics additive found in many food and drink containers—by the end of this summer. Last month, STATS, a “statistical assessment service” affiliated with George Mason University, released an in-depth critique of the media’s coverage of the BPA debate.
It’s a tough story for reporters. The science behind BPA’s effects on human health is unresolved because the large-scale epidemiological studies needed to understand them are still underway, and laboratory research is limited to animal studies. There is still little certainty about health risks for adult humans. On the other hand, there is widespread concern among scientists and regulators that BPA exposure presents a threat to fetuses, infants, and young children. A number of regulators (including state and municipal governments in the U.S.) and businesses have been exercising the “precautionary principle,” proactively banning or limiting BPA’s use in food containers, especially baby bottles.
The twenty-four-page STATS report concluded that BPA coverage “across the media” was overwhelmingly one-sided, favoring charges that the chemical is a dangerous endocrine disrupter while ignoring any evidence to the contrary. However, the vast majority of the report, by STATS editor Trevor Butterworth, focused on a series by the Milwaukee Journal Sentinel, winner of the 2008 Oakes Award for distinguished environmental journalism presented by Columbia University. His special attention to the series makes his extrapolations to the media writ large somewhat suspect. And while Butterworth makes some fair criticisms of the Journal Sentinel, he tends to overplay his hand there, too.
Butterworth’s main gripes are that the Journal Sentinel relied too heavily on Dr. Frederick vom Saal, a professor of biological sciences at the University of Missouri at Columbia, for information, and didn’t spend enough time analyzing the methodology of BPA studies. The newspaper introduced vom Saal as an “internationally known expert,” which Butterworth challenged in his report:
The cumulative effect of all this research and statistical analysis is that vom Saal, though highly vocal about the risks of BPA and the media’s go-to source for explaining the science, has found his research and his claims repeatedly rejected in regulatory assessments of the chemical’s risk in the past decade.
Let’s be clear: vom Saal’s credentials in endocrine biology, with a focus on BPA, are valid. There’s no reason why journalists shouldn’t have used him as a source for their stories. That said, Butterworth is right that the press has been overly reliant on vom Saal for information and quotes. Time, Discover, and USA Today all featured stories on vom Saal and his position on BPA. The Journal Sentinel sent products to his lab to test for BPA levels. True, vom Saal wasn’t reporters’ only source. In its first article on BPA, the Journal Sentinel quoted the director of an epidemiological center and a chief surgeon, both of whom said BPA is a health risk. And a few media outlets did take a critical look at vom Saal’s experiments—most notably an Emmy-nominated TV segment by ABC 7 in San Francisco, for which STATS Research Director Dr. Rebecca Goldin was interviewed. But letting one voice dominate a story should always be a red flag for reporters. That doesn’t mean going out and looking for contrary points of view simply for the sake of journalistic “balance” (and some have criticized the ABC report for creating false balance on the question of BPA); it just means more interviews and more perspective.
As for Butterworth’s complaint that the Journal Sentinel failed to distinguish trustworthy scientific studies from untrustworthy ones, he writes:
…it would appear that no scientific criteria were applied to determining whether the studies were reliable or not; instead, the key criteria for judging was a positive finding for harm and whether the study was independent or industry funded. If a study found an effect and was independently funded it was significant; if a study didn’t find an effect and it was industry funded it was significant.