The persuasive power of the press release

Is printing a press release an automatic ethical loss for the Washington Post?

Last week, the Washington Post eliminated a column published digitally in its science section following an article in the Knight Science Journalism Tracker in which editor Paul Raeburn called out the paper for posting university press releases describing health studies.

Raeburn argued that by printing work that hadn't been carefully reported, the Post was diluting its brand and misleading readers. "These stories showed up in one of the nation's leading newspapers—and in the science section, no less, where we can assume they were carefully reported," he wrote.

He was referring to stories like this one, describing a study on how 2012 Tour de France riders’ position in the race correlated with how attractive they seemed. (Summary: Women were into the better riders.) Titled, “Women like the looks of fast cyclists,” the piece wasn’t based on the study published in the journal Biology Letters; it was a reprint of the release published by the university on EurekAlert. The Post has used the practice repeatedly.

But when Raeburn got in touch with the section’s editor, she had a different take: The pieces were clearly labeled as separate from the Post’s reported work—a supplement to their science coverage, rather than the product of it. “I have more faith in readers than I think you do,” she told him. “We put it [the source of the releases] right at the top. I do think readers are smart.” Indeed, the pieces are pretty clearly caveated. “Study Hall presents recent studies as described by researchers and their institutions,” reads a description at the top of the article page; a byline is given, not to Post staff or editors, but to the research institution that produced the release.

Still, after a wave of coverage, the Post announced last week that it had discontinued the column. And, to be clear, the study descriptions were problematic. Containing no outside sources and written with hyperbolic language, they read, quite clearly, like press releases. (Take the cyclist study, which according to the university's release "demonstrated" a link between attractiveness and athletic ability—not just suggested or correlated with, either of which would have been a more appropriate way to convey the meaning of a single study.)

But it's unclear that the in-house study coverage likely to replace it—the kind of quick article often based entirely on the press release—is significantly better than the Post simply printing a labeled press release. For instance, in a blog for Discover Magazine cited in Raeburn's piece, Bill Andrews described the study in a witty post, but he included no sources beyond the article's abstract and offered only one caveat about the study, writing that "the data only point to a general trend." Coverage in The Economist also lacks an outside source and, though the piece uses calmer language (its conclusion: "Good looks are a reasonable predictor of outcome"), the press release describes the study in better detail. Another news article was based entirely on the press release, printing the two quotes it included from the researcher. In many ways the Post's system is more ethical: It informs readers that they are reading something curated by the study's authors, rather than misleading them into believing they are reading objectively reported coverage when they are really reading a press release quickly rewritten into prettier language by a journalist.

Raeburn also argues that the Post should use more of the Associated Press’ study coverage to fill its science section, rather than printing the university releases. But even the AP’s health coverage is a mixed bag. Last week, for instance, the Associated Press released a wire story on new C-section guidelines (republished, of course, in the Washington Post) that hit all the markers for objective health reporting: It included significant information left off the press release, including a study from the National Institutes of Health on labor trajectories and an extended interview with Lamaze president Michele Ondeck. But the AP’s coverage of a study correlating depression risk in teenage boys with the rate of cortisol in their spit was less thorough, containing only a single quote from an outside researcher who describes hormones (“‘All hormones, including sexual hormones, influence brain function and behavior,’ said Dr. Carmine Pariante”) but doesn’t evaluate the study’s significance or methods.

These examples aren't intended to chastise news organizations for printing quick summaries of studies; they're meant to show that producing well-reported coverage of studies takes time, a resource often unavailable on the kinds of deadlines allotted for a quick news piece.

And even with excellent reporting, health journalism relying on a single study tells us very little; it relays the findings of an individual study, which might prove, after subsequent work and replication, to be false. The journalism that's most useful takes these incremental findings and covers them with the nuance gained from probing deeply into a field—understanding the difference between what is known and what is merely suspected. Such journalism takes even more time. Assuming the Post was printing the column to fill Web space, saving resources from the single-study write-around to focus on more complex health articles, I can think of worse uses for a press release.


Alexis Sobel Fitts is a senior writer at CJR. Follow her on Twitter at @fittsofalexis.