The Observatory

FiveThirtyEight’s disappointing science section

Science journalism could use an infusion of analysis, but FiveThirtyEight isn't yet doing it rigorously or objectively
March 20, 2014

The internet hates Nate Silver today–at least the small cohort closely following this week’s launch of his new site, FiveThirtyEight–and with ample reason. When Silver wrote, “It’s time to make news a little nerdier,” in his site-launch manifesto, he was issuing a call to arms against the mainstream press. Because by nerdier, he really meant better. “Plenty of pundits have really high IQs, but they don’t have any discipline in how they look at the world,” he told New York magazine last week, promising to produce a site free of such “bullshit.”

But it’s always risky to bite the hand that publicizes you, and those “bullshit”-ridden pundits have delighted in tearing into the site launch, poking holes in the logic of its weaker pieces and arguing that, perhaps, the news is still best served by the deductive analysis of the opinion journalists Silver so detests.

Still, data-driven journalism has appealing possibilities, and if the public interest could be served by FiveThirtyEight, the opportunity lies in its science section–which tackles an intersection, after all, that is underserved by the media. The big “data-driven” investigative journalism sites, places like ProPublica and The Center for Investigative Reporting, tend to eschew explanations and deep dives into scientific research and instead focus on policy or business. (Similarly, Silver’s New York Times vertical focused on politics and sports.) While some of the slack is picked up by the specialized press, places like the Pulitzer-winning InsideClimate News, interpreting the realities of a scientific discovery requires analysis that goes beyond the standard several-source news story. Science journalism could be well served by a vertical of Nate Silvers “asking the right questions of the data,” as he promised in his manifesto.

But, unfortunately, though the headlines of Silver’s launch articles aspire to paradigm-shifting grandeur, their content falls flat, failing in exactly the same way as the opinion journalism Silver detests.

It starts with the piece that headlined the section at launch: “Finally, a formula for decoding health news.” The article serves as a piece of media criticism, advocating a formula for assessing whether a news report about a study is actually as groundbreaking as the journalist purports. It’s a noble task, and writer Jeff Leek, an associate professor of biostatistics at Johns Hopkins, should be up to it. Leek’s formula is based on Bayes’ rule, a theorem for updating the estimated probability that a finding is true as new evidence comes in. Folding prior beliefs into that estimate is a legitimate statistical tool, but Leek does everyone a disservice when he describes these priors as an “initial gut feeling” before assigning numbers to a checklist, seemingly at random:

1. Was the study a clinical study in humans?
2. Was the outcome of the study something directly related to human health–something you care about, such as living longer or feeling better?
3. Was the study a randomized, controlled trial (RCT)?
4. Was it a large study — at least hundreds of patients?
5. Did the treatment have a major impact on the outcome?
6. Did predictions hold up in at least two separate groups of people?


It’s a decent list of things to keep in mind when reading news coverage of a study. The problem is, plenty of people have already produced such checklists–only more thoughtfully and in greater detail. Here’s one. Here’s another. Here’s a whole bunch.

Not to mention that interpreting the value of an individual scientific study is difficult–a subject worthy of much more description and analysis than FiveThirtyEight provides. Last January, David H. Freedman devoted 4,000 words to the subject in a CJR cover story that meticulously traced the press’s fluctuating coverage of weight-loss findings. (You can find a checklist for evaluating health studies midway through the piece.) The problem, Freedman assessed, isn’t that the press is incendiary, but that there’s a limit to what can be gleaned from a single scientific study–so “articles written by very good journalists, based on thorough reporting and highly credible sources, take stances that directly contradict those of other credible-seeming articles.”

In short, it’s very difficult to evaluate the merits of an individual study, and Leek’s formula–which, for the record, is: Final opinion on headline = (initial gut feeling) × (study support for headline)–isn’t going to help the reader very much. As Paul Raeburn wrote at the Knight Science Journalism Tracker, “Leek is a statistician at Johns Hopkins. And he’s dishing out a lot of quasi-statistical nonsense.”
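
To be fair to the underlying math, Leek’s multiplication does echo a real identity: Bayes’ rule in odds form, where the posterior odds equal the prior odds times a likelihood ratio. Here is a minimal sketch of that identity in Python–the numbers and the checklist-score-as-likelihood-ratio step are illustrative assumptions of ours, not Leek’s–and it shows where the formula gets slippery: “study support” only works as a multiplier if it is a genuine likelihood ratio, not an ad hoc checklist tally.

```python
# A minimal sketch of Bayes' rule in odds form, the structure Leek's
# formula (final opinion = gut feeling * study support) gestures at.
# All numbers below are illustrative assumptions, not Leek's.

def posterior_odds(prior_odds: float, bayes_factor: float) -> float:
    """Posterior odds = prior odds * Bayes factor (likelihood ratio)."""
    return prior_odds * bayes_factor

def odds_to_probability(odds: float) -> float:
    """Convert odds (p / (1 - p)) back to a probability."""
    return odds / (1 + odds)

# Suppose your "initial gut feeling" is that a headline has a 1-in-10
# chance of being true: probability 0.10, so odds of 0.10 / 0.90.
prior = 0.10 / 0.90

# Suppose the checklist yields a "study support" of 4 -- meaning the
# reported evidence is four times likelier under a true headline than
# a false one. That reading, a likelihood ratio, is assumed here; an
# arbitrary checklist score carries no such guarantee.
support = 4.0

post = posterior_odds(prior, support)
print(f"posterior odds: {post:.2f}")                               # ~0.44
print(f"posterior probability: {odds_to_probability(post):.2f}")   # ~0.31
```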

The rest of the science section follows the same pattern. A piece comparing this winter’s freezing weather to historic temperatures isn’t wrong, but it’s a muddled version of a story other outlets published during the polar vortex hype in January. Other stories, like this one, on calories burned during sex, have been covered well enough by other publications–if they needed to be covered at all.

One of the dangers of claiming objectivity because your journalism uses data is that even data can be conveyed with prejudice. “In a perfect world the data would just speak for itself, but that’s never the case,” the economist Allison Schrager wrote at Quartz following FiveThirtyEight’s launch. “Interpreting and presenting data requires making judgments and possibly mistakes.” That’s why so many writers have been concerned about FiveThirtyEight’s climate writer, Roger Pielke, Jr., a University of Colorado professor, whom ThinkProgress once called “the most debunked person in the science blogosphere, possibly the entire Web.”

Though Pielke has a deep pool of knowledge about climate change, as The Week has chronicled, he also has strong personal opinions and a history of marshaling data to defend them against the larger scientific community. President Obama’s science adviser, John Holdren, has “accused [Pielke] of selective quotation and obfuscation,” and though Pielke says he believes in climate change, Foreign Policy has included him on its list of climate skeptics. (He also has a shaky history with data, having once included results for the filmmaker Michael Mann while analyzing the inflation of news coverage of a study by the climate scientist Michael Mann.)

Pielke’s first post for the site covers the link between climate change and extreme weather–or the lack thereof: “Disasters Cost More Than Ever–But Not Because of Climate Change.” In the post, he first shows the rising tally of global disaster losses, then adjusts the figure for the rise in global GDP–showing that, relative to GDP, disaster losses have actually flatlined. “We’re seeing ever-larger losses simply because we have more to lose–when an earthquake or flood occurs, more stuff gets damaged,” writes Pielke. It’s an interesting point, but not one that says much about climate change.
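
For readers who want to see the mechanics, the adjustment Pielke makes is a standard normalization: divide nominal losses by a measure of wealth, so that growth in “stuff to lose” doesn’t masquerade as growth in hazard. A minimal sketch, using made-up figures rather than Pielke’s data:

```python
# A minimal sketch of GDP normalization: nominal disaster losses rise,
# but expressed as a share of world GDP they can be flat. All figures
# below are made-up illustrations, not Pielke's data.

years = [1990, 2000, 2010]
losses_billion = [50.0, 80.0, 130.0]   # hypothetical nominal losses, $B
gdp_trillion = [25.0, 40.0, 65.0]      # hypothetical world GDP, $T

for year, loss, gdp in zip(years, losses_billion, gdp_trillion):
    share = loss / (gdp * 1000) * 100  # losses as a percent of GDP
    print(f"{year}: ${loss:.0f}B in losses = {share:.2f}% of GDP")

# 1990: $50B in losses = 0.20% of GDP
# 2000: $80B in losses = 0.20% of GDP
# 2010: $130B in losses = 0.20% of GDP  -> the "flatlined" series
```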

The shame is, there are plenty of subjects that could use a parsing from FiveThirtyEight: Silver could suss out the actual climate impact of shipping oil by rail, or put hard numbers on how decreased public research dollars affect disease funding–a subject The New York Times specifically said was missing “[comprehensive] tracking [of] the magnitude and impact of private science.” Even a rigorous look at how to evaluate health coverage would be welcome.

FiveThirtyEight has ample opportunity to burst into this space and produce what Silver promised all along: an improvement in the level of analysis required by the general press. But you can’t take down journalism without first abiding by its best practices.

Alexis Sobel Fitts is a senior writer at CJR. Follow her on Twitter at @fittsofalexis.