The nuclear crisis in Japan keeps on revealing how the news media struggle to report accurately and thoroughly about risk. The latest example is coverage of a new Union of Concerned Scientists (UCS) analysis, which estimates that radiation from Chernobyl will cause many more cancer deaths than the UN officially estimated.
To be fair, radiation risk is complex and controversial even among the experts. But this example of sloppy risk reporting isn’t about the challenges of covering a complex issue, nor is this the standard lament about breathless alarmism/sensationalism. This is about a far more common and far more troubling problem that pervades news coverage of risk. And it’s an easy one to fix.
Here’s the background: The World Health Organization estimates that the lifetime radiation-induced cancer death toll from Chernobyl will be about 4,000, out of the 600,000 people exposed to higher doses of radiation—about two thirds of one percent. Several anti-nuclear groups say the number is much higher. One of them, the UCS, says the WHO made a mistake by considering only the population that got higher doses, since the default assumption of most government radiation regulations is that the only safe dose of radiation is NO dose, so even a little radiation raises the risk of cancer somewhat. And since the radiation from Chernobyl spread around the entire globe, a fair accounting of the cancer threat needs to consider how much it raised the risk for everybody.
Based on how much of a dose people in various regions got, the UCS calculated that the total global radiation-induced cancer death toll from Chernobyl will be 27,000. That’s a lot more worrisome than 4,000—“6 times higher” than the WHO figure, the UCS notes—and certainly worthy of coverage, which it got from news organizations including the Los Angeles Times, Time magazine, The Australian (which reported that the UCS predicted a death toll of 50,000), and the BBC/PRI/WGBH radio program PRI’s The World.
They all reported the new, higher total death toll from Chernobyl. But that’s not enough. The ‘absolute’ risk—the total number of victims—is important for putting any risk in perspective. But readers/viewers/listeners also need to know the ‘relative’ risk: the percentage of expected victims out of the whole population, the odds that the risk will befall any one person. That probability shapes how big or small a risk seems, and it’s something any good news report about risk should supply. This is part of Risk Reporting 101. Yet none of the coverage of the UCS analysis included that vital second number.
To make the importance of both absolute and relative risk clear, here’s a chart of what the UCS found, using their absolute numbers and adding the relative risk in the column on the right. (The UCS did not report relative risk, which is not surprising, since they are avowedly anti-nuke, and the relative risk for each of their regional population subgroups is just as low as, or in some cases far lower than, the WHO’s odds for the population it considered: less than 1 percent.)
(The UCS total is actually 26,300; they rounded it up to 27,000 in their release.)
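The arithmetic behind the absolute-versus-relative distinction is simple enough to sketch in a few lines. The figures below come from the article itself, except the world population of roughly 6.5 billion, which is an assumption added here for illustration (the article does not give one):

```python
# Illustrating absolute vs. relative risk with the article's figures.
# Assumption (not from the article): world population of ~6.5 billion.

def relative_risk_pct(deaths, population):
    """Expected victims as a percentage of the population at risk."""
    return 100 * deaths / population

# WHO estimate: 4,000 deaths among the 600,000 most exposed people.
who_pct = relative_risk_pct(4_000, 600_000)
print(f"WHO: {who_pct:.2f}% of the exposed population")

# UCS estimate: 27,000 deaths, but spread across the whole globe.
ucs_pct = relative_risk_pct(27_000, 6_500_000_000)
print(f"UCS: {ucs_pct:.5f}% of the world population")
```

The same comparison that looks alarming in absolute terms (27,000 versus 4,000 deaths) looks very different in relative terms: roughly two thirds of one percent for the WHO's exposed group, versus a far smaller fraction of a percent when the UCS total is spread over everyone on Earth.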
It’s one thing if an advocacy group wants to use numbers selectively to strengthen their case. Fair enough. That’s what advocates do. But news reporting on risk is supposed to give news consumers all the information they need to make informed, healthy choices. Coverage of risk that fails to include both absolute and relative risk fails that basic test.