My father, who spent two years in Plattsburgh, New York, as an Air Force doctor during the Vietnam War, used to say that the military had to be able to pick up clerks at a desk in the US, fly them to Saigon, and have them continue typing the same sentence as if they were still sitting in their US air base.
I think it must be similar to end one day as a politics, business, or real estate reporter, then show up at work—from home, at the moment—and be told that you’re on the health beat, covering the coronavirus pandemic. The language may be the same, but the jargon is completely different. What is a confidence interval? (It’s sort of like a margin of error in a poll, but not exactly.) What is a P value? (Even scientists struggle to explain those, so I won’t try here.)
I imagine that the dread a newly transferred coronavirus reporter feels when faced with a PDF filled with statistics is the same as I would feel if—as a career-long doctor-turned-medical journalist—I were suddenly assigned to cover the statehouse. I mean, what the hell is cloture? “How a Bill Becomes a Law” was great when I was a kid, but I’m reasonably sure it would serve me as well covering Congress as an English major’s junior high biology class would serve them covering epidemiology.
That’s particularly true when the science is moving at dangerous speeds—which, as Adam Marcus and I wrote for Wired in March, it currently is. Research findings are never vetted as carefully as many scientists, medical journals, and others would like us to think they are—hence one of the reasons for Retraction Watch, which Marcus and I created ten years ago—but now they are being posted online essentially without any review at all, testing, as the New York Times wrote in April, “science’s need for speed limits.”
Those findings are appearing in papers by the hundreds on what are known as preprint servers such as bioRxiv and medRxiv. (The awkward capitalization is an homage to the original preprint server, arXiv, created in 1991 mostly for physicists and mathematicians.) Researchers submit their work for posting on such servers before peer review, and often update manuscripts before they are “officially” published in journals. That means they are being disseminated to scientists more quickly than would be possible if they went through weeks or months of peer review, even if journals are speeding up that process dramatically during the pandemic.
Speed can be a good thing, if researchers understand the context. But it also means that reporters eager for scoops are seizing on what sound like important and impressive findings that are likely to soon be meaningless. For decades, the embargo has functioned as an artificial speed limiter on when a lot of research reached the public. In exchange for prepublication access—usually a matter of several days, to provide time for reading and reporting—journalists agree to a particular date and time for the release of news. Such embargoes (which are common to journalism more broadly) are ubiquitous in science and medical reporting.
For many years, prompted by Vincent Kiernan’s Embargoed Science, I’ve thought that the downsides of embargoes are greater than their benefits. Many journalists essentially hand over control of what they will cover and when to journals, whose interests lie in drumming up publicity and recruiting splashy manuscripts from researchers. Journals have also typically scared researchers out of talking to reporters before the work appears in their pages, saying—or sometimes just strongly implying—they wouldn’t consider their manuscripts for publication, a risk in a publish-or-perish world.
One could argue that a pandemic like COVID-19 shifts the calculus on the benefits of embargoes, but that only holds if reporters use the additional time to digest findings and call experts unrelated to the study for comment. And the reality is that journalists don’t have as much time as they need.
What is a newly transferred medical reporter to do? I would hope a veteran statehouse reporter would take me aside to give me a tour of the beat, so I’ll try to do the same here, using a framework from a guest lecture, “How Not to Get It Wrong,” that I’ve given at Columbia Journalism School for some years at the request of adjunct Charles Ornstein, a veteran healthcare reporter and editor at ProPublica. See you on the front lines.
Always read the entire paper
Press releases about studies—whether they are from universities, journals, or industry—use spin. Study abstracts aren’t much better. Most publishers are more than happy to provide PDFs of paywalled papers, if need be, as are authors. And such access is a benefit of membership in the Association of Health Care Journalists, which offers a wealth of other resources, from a listserv to tip sheets to fellowships, that make for career-long learning. (Disclosure: I’m volunteer president of the board of directors.)
Ask “dumb” questions
Many science reporters—particularly those trained in science—are afraid that sources will judge them for asking what seem like basic questions, and so they end up with a notebook full of jargon without really understanding the story. Remember that your primary loyalty is to your readers and viewers, who don’t know the jargon, either.
Ask smart questions
All right, so you want to impress a source. Fine: use that instinct to dive deeper. Where (if at all) was the study published? Was it in humans, or animals—where results often don’t translate to clinical practice? Was the study designed to find what it purports to find? Did the authors move the goalposts—in scientific lingo, endpoints—midstream?
Read the discussion, look for the limitations
Good journals won’t let authors get away with leaving out the limitations of their work. Was the sample size skewed? Could something that the study couldn’t control for render the results less meaningful? The limitations are there, in all their glory: selection bias, measurement bias, recall bias, lack of a control group, and more. Look for them.
Figure out your angle
It’s fine to write about a preliminary study because it’s likely to lift the prospects of a local company, or because the science is fascinating. Just don’t make it sound as though the findings are a cure for coronavirus infections.
A treatment may work perfectly, but only for a small population. Or the FDA may approve a drug, but only for limited indications. Don’t imply that millions suffer from a disease when only a handful do—nor that a condition is life-threatening when it’s just a nuisance.
Okay, so you went into journalism because you hate math. But your readers and viewers can’t make better decisions if all you tell them is that “patients improved if they took the treatment.” By how much? Would you choose a car because someone told you it was cheaper, but not how much cheaper?
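The arithmetic is simple once you separate relative from absolute effects. Here is a sketch with invented numbers (no real trial is being described): a drug that cuts events from 3 in 100 to 2 in 100 can honestly be called a “33 percent reduction,” even though the absolute benefit is one percentage point.

```python
# Hypothetical trial results -- all figures invented for illustration.
treated_events, treated_n = 2, 100
control_events, control_n = 3, 100

risk_treated = treated_events / treated_n   # 0.02
risk_control = control_events / control_n   # 0.03

# The headline-friendly number: relative risk reduction.
relative_risk_reduction = 1 - risk_treated / risk_control

# The number readers actually need: absolute risk reduction,
# and how many patients must be treated for one to benefit.
absolute_risk_reduction = risk_control - risk_treated
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.0%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
```

Both numbers are true; a story that reports only the relative figure leaves readers unable to judge whether the benefit is worth the cost or the side effects.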
What are the side effects?
Every treatment has them, and you’re unlikely to find them listed in a press release or abstract. Dig into Table 3 or 4. Or ask another “dumb” question.
Who dropped out?
Very few studies end with the same number of people with which they started. After all, people move, or grow bored with being in a study, and they’re not lab mice, so they’re free to do what they want. But sometimes the dropouts are so numerous, or so unevenly spread between groups, that they could be a cause for concern. Find out if the authors have done what is known as an “intention-to-treat analysis” to keep themselves honest. (I even wrote a letter to a journal about a case where they didn’t, after editing a story about a particular study. Yes, I’m a geek, and no, you don’t have to go that far.)
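To see why intention-to-treat matters, consider an invented trial (the numbers are mine, not from any real study) in which the sickest patients quietly drop out of the treatment arm. Analyzing only the patients who finished flatters the drug; analyzing everyone as randomized does not:

```python
# Invented scenario: 100 patients randomized to each arm.
# Treatment arm: 20 (mostly sicker) patients drop out; 60 of the 80 completers improve.
# Control arm: 5 drop out; 50 of the 95 completers improve.

def per_protocol(improved, completers):
    """Analyze only the patients who finished the study."""
    return improved / completers

def intention_to_treat(improved, randomized):
    """Analyze everyone as randomized; here dropouts are counted as
    not improved (a simplified, conservative convention)."""
    return improved / randomized

print(f"Per-protocol:       {per_protocol(60, 80):.0%} vs {per_protocol(50, 95):.0%}")
print(f"Intention-to-treat: {intention_to_treat(60, 100):.0%} vs {intention_to_treat(50, 100):.0%}")
```

In this sketch the per-protocol comparison (75 percent versus about 53 percent) looks far more impressive than the intention-to-treat one (60 percent versus 50 percent), which is exactly why the latter keeps authors honest.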
Are there alternatives?
The fact that a new drug worked wonders sounds great, until you learn that there was no control group, so you have no idea what would have happened if the participants hadn’t had the drug. The same goes for observational studies that claim a link between a particular diet or lifestyle and health. As I often say to my students, it’s difficult to smoke while you’re on the treadmill. Healthy behaviors tend to go together.
Who has an interest?
Read those disclosures at the ends of papers, keeping in mind that most clinical trials are funded by industry, and that such ties are linked to a higher rate of positive results. These conflicts can become stories with impact themselves, as Ornstein and Katie Thomas of the New York Times have shown.
Don’t rely only on study authors for the whole story
The same way you’d want to get outside comment on any other story, seek the thoughts of experts unrelated to a study or finding. I explain how I do that in a blog post for science writer extraordinaire Ed Yong, now at The Atlantic, written in his early days at Discover.
Use anecdotes carefully
Narrative ledes, not to mention profiles, can be very powerful. But they can leave an impression that a treatment works—or injures—when it doesn’t. As I tell my students, it’s difficult to interview people buried in a cemetery. If you’re only including success stories, you’re not painting a full picture.
Watch your language
Correlation is not causation. (Or, as my father used to say, “True, true, unrelated.”) Don’t use words like “reduce” or “increase” when all you know is that some factor is correlated with another. “Linked,” “tied,” or “associated” is more accurate. For fun with spurious correlations, read Tyler Vigen.
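The treadmill point above can be made with a toy simulation (the variables and numbers are invented, not drawn from any study): when a hidden common cause drives two behaviors, they look linked even though neither causes the other.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden confounder: overall health-consciousness.
health_conscious = rng.normal(size=n)

# Exercise and diet quality each depend on health-consciousness
# plus independent noise; neither causes the other.
exercise = health_conscious + rng.normal(size=n)
diet = health_conscious + rng.normal(size=n)

r = np.corrcoef(exercise, diet)[0, 1]
print(f"Correlation between exercise and diet: {r:.2f}")  # roughly 0.5, with zero causation
```

A reporter who wrote that exercise “improves” diet from data like these would be wrong; “linked” is all the numbers support.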
Some final words: Despite my years in medical school, I need frequent refreshers on how to read medical studies. For those, I recommend getting to know a biostatistician—particularly during a pandemic.
Ivan Oransky is vice president of editorial at Medscape, distinguished writer in residence at New York University’s Arthur L. Carter Journalism Institute, cofounder of Retraction Watch, and president of the Association of Health Care Journalists.