The internet hates Nate Silver today, or at least the small contingent of it closely following the launch of his new site, FiveThirtyEight, this week, and with ample reason. When Silver wrote, “It’s time to make news a little nerdier,” in his site-launch manifesto, he was issuing a call to arms against the mainstream press. By nerdier, he really meant better. “Plenty of pundits have really high IQs, but they don’t have any discipline in how they look at the world,” he told New York magazine last week, promising to produce a site free of such “bullshit.”

But it’s always risky to bite the hand that publicizes you, and those “bullshit”-ridden pundits have delighted in tearing into the site launch, poking holes in weak pieces’ logic and arguing that, perhaps, reporting is still best served by the deductive analysis of the opinion reporters that Silver so detests.

Still, data-driven journalism has appealing possibilities, and if the public interest could be served by FiveThirtyEight, the opportunity lies in its science section, which tackles an intersection of science and data reporting that is, after all, underserved by the media. The big “data-driven” investigative journalism sites, places like ProPublica and The Center for Investigative Reporting, tend to eschew explanations and deep dives into scientific research and instead focus on policy or business. (Similarly, Silver’s New York Times vertical focused on politics and sports.) While some of the slack is picked up by the specialized press, places like the Pulitzer-winning Inside Climate News, interpreting the realities of a scientific discovery requires analyses that go beyond the standard several-source news story, so science journalism could be well served by a vertical of Nate Silvers, “asking the right questions of the data,” as he promised in his manifesto.

But, unfortunately, though the headlines of Silver’s launch articles aspire to paradigm-shifting grandeur, their content falls flat, failing in exactly the same way as the opinion journalism Silver detests.

It starts with the piece that headlined the section at launch: “Finally, a formula for decoding health news.” The article serves as a piece of media criticism, advocating a formula to assess whether a news report about a study is actually as groundbreaking as the journalist purports. It’s a noble goal, and writer Jeff Leek, an associate professor of biostatistics at Johns Hopkins, should be up to the task. Leek’s formula is based on Bayes’ rule, a method of updating a prior belief about a hypothesis as new evidence comes in, in order to estimate the likelihood that a reported finding is actually true. Incorporating prior beliefs is a legitimate statistical tool, but Leek does everyone a disservice when he describes these priors as an “initial gut feeling” before assigning numbers to a checklist, seemingly at random:

1. Was the study a clinical study in humans?
2. Was the outcome of the study something directly related to human health that you actually care about, such as living longer or feeling better?
3. Was the study a randomized, controlled trial (RCT)?
4. Was it a large study — at least hundreds of patients?
5. Did the treatment have a major impact on the outcome?
6. Did predictions hold up in at least two separate groups of people?
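
For illustration only, here is a minimal sketch of what turning such a checklist into a “study support” score might look like. The questions come from Leek’s piece, but the equal weighting and the 0-to-1 scale are assumptions made for this sketch, not Leek’s actual point values.

```python
# Hypothetical sketch: the checklist questions are Leek's, but the equal
# weighting and 0-to-1 scale here are assumptions, not his numbers.

CHECKLIST = [
    "Clinical study in humans?",
    "Outcome directly related to human health?",
    "Randomized, controlled trial (RCT)?",
    "Large study (at least hundreds of patients)?",
    "Major impact on the outcome?",
    "Predictions held up in at least two separate groups?",
]

def study_support(answers):
    """Turn six yes/no answers (1 or 0) into a crude 0-to-1 'study support' score."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("expected one yes/no answer per checklist question")
    return sum(answers) / len(answers)  # fraction of questions answered 'yes'
```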

It’s a decent list of things to keep in mind when reading news coverage of a study. The problem is, plenty of people have already produced such checklists—only more thoughtfully and with greater detail. Here’s one. Here’s another. Here’s a whole bunch.

Not to mention that interpreting the value of an individual scientific study is difficult, a subject worthy of much more description and analysis than FiveThirtyEight provides. Last January, David H. Freedman devoted 4,000 words to the subject in a CJR cover story that meticulously traced the press’s fluctuating coverage of weight-loss research. (You can find a checklist for evaluating health studies midway through the piece.) The problem, Freedman assessed, isn’t that the press is incendiary, but that there’s a limit to what can be gleaned from a single scientific study, so “articles written by very good journalists, based on thorough reporting and highly credible sources, take stances that directly contradict those of other credible-seeming articles.”

In short, it’s very difficult to evaluate the merits of an individual study, and Leek’s formula—which for the record is: Final opinion on headline = (initial gut feeling) * (study support for headline)—isn’t going to help the reader very much. As Paul Raeburn wrote at the Knight Science Tracker, “Leek is a statistician at Johns Hopkins. And he’s dishing out a lot of quasi-statistical nonsense.”
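
Taken at face value, the formula is a single multiplication. Here is a sketch with made-up inputs, assuming both quantities sit on a 0-to-1 scale; the original piece doesn’t pin down the units.

```python
def final_opinion(gut_feeling, support):
    """Leek's stated formula: final opinion = initial gut feeling * study support.

    Treating both inputs as numbers between 0 and 1 is an assumption made
    for this sketch; the piece itself never specifies a scale.
    """
    return gut_feeling * support

# A skeptical prior of 0.4 and a study answering 3 of 6 checklist questions 'yes':
print(final_opinion(0.4, 3 / 6))  # 0.2 -- a number, but not much guidance
```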

Alexis Sobel Fitts is a senior writer at CJR. Follow her on Twitter at @fittsofalexis.