One of the cold, hard facts of media punditry is that no one can read everything—or should be expected to—so any assertion about a publication or any group of them is going to be just that, an assertion.

Is BusinessWeek better now that it’s Bloomberg BusinessWeek? Probably, but that’s just an impression, even if commonly held.

Was The Daily any good? I seriously doubt it. But wouldn’t it be great to know—exactly—what appeared in the publication that Rupert Murdoch launched from scratch, at great expense ($30 million, plus), as his idea of what journalism is really all about?

That’s why I like content analyses, usually scholarly efforts that seek to capture, count, and categorize published articles, the things that the public actually reads. This way, vague impressions and theories (“The media always…[fill in your gripe]”; “The media never…”) are tested against cold, hard data. Did American newspapers get better after the 1950s? Everyone thinks so, but Katherine Fink and Michael Schudson demonstrate exactly how, and by how much, in a recent paper that tracks the rise of something they call “contextual journalism.”

Does the mainstream press do a good job covering major economic stories, like, say, the stimulus back in 2009? I doubt that, too, but Anya Schiffrin and Ryan Fagan provide a more authoritative answer: no. Or at least, the coverage was passive in the extreme, excessively reliant on government sources, fixated on the politics over the substance, and, yes, biased, but perhaps not in the way you’d expect. And they have numbers to back up these claims, made, to be sure, in academese. (A bit of disclosure: I read an early draft of the paper and made some comments on it.)

They categorized 718 stories (this is exhausting work, believe me) and coded them according to what kinds of sources (Wall Street, academic, government) the stories quoted; whether the stories discussed substance or just the politics; and whether the articles conveyed a bias one way or another, among other things.

It was instructive to learn, for instance, that the overwhelming majority of sources quoted in news stories across publications are from the government: 78 percent in the 37 USA Today stories counted; 87 percent in the 40 Bloomberg stories counted; and, yes, 79 percent of the 43 stories even in Murdoch’s Wall Street Journal. It’s just a reminder of the degree to which the government controls the discourse.

One fault of the paper, I’d say, is that it mixes together Op-Eds, which are supposed to have a bias, and news stories, which aren’t. Still, if you’re wondering which way the general sentiment was blowing, there was a bit of a surprise. Most of the coverage was neutral, but when there was a bias, it was generally negative (i.e., raising concerns about the deficit as opposed to touting the plan’s job-creating aspects), with The New York Times being the unsurprising exception. Here’s the chart:


On substance of coverage, the paper faults the press for an old bugaboo: following others’ agenda rather than setting its own by, say, discussing alternatives to the stimulus (including the idea of a larger one).

The views here are in line with liberal views on the stimulus—basically, it wasn’t big enough—and those of Schiffrin’s husband, Joe Stiglitz, who is mentioned in the paper. An underlying theme is frustration with the press for not pointing out what people on the left thought was obvious.

Still, the liberal line turned out to be basically right about this kind of thing. And there’s value in knowing just how passive the economic press can be when national attention turns to a major story on its beat.


Dean Starkman runs The Audit, CJR's business section, and is the author of The Watchdog That Didn't Bark: The Financial Crisis and the Disappearance of Investigative Journalism (Columbia University Press, January 2014). Follow Dean on Twitter: @deanstarkman.