The Audit

Automatic approval for weak study on robot journalism

Poor reporting accepts a flimsy report as gospel
March 19, 2014

“Could robots be the journalists of the future?” asks The Guardian.

“People Think Computer Journalists Are More Trustworthy Than Human Ones,” says Vice’s Motherboard.

“Robots have mastered news writing. Goodbye journalism,” blares Wired UK.

File these under self-fulfilling prophecies. Human reporters are supposed to discern good information from bad, and these stories are based on a flimsy academic report out of Sweden that found “readers are not able to discern automated content from content written by a human.”

The Swedish study showed two briefs, one written by a computer and one by a journalist, to 45 students. Scandinavians are almost all fluent in English, but it’s still their second or third language. On top of that, the two stories are about NFL football, a topic few Swedes know anything about.

Even odder, the two stories are about different football topics. The computerized story is a game recap of San Diego-Kansas City, the most basic kind of news story. The human-written one is a Los Angeles Times analysis about the woes of three quarterbacks.


Worse, the study incorrectly places the subheadline of the LAT story inside the body of the story (it’s the first line of the excerpt below), which makes it read badly:

Matt Cassel, Russell Wilson and Mark Sanchez have struggled, and their starting jobs are in jeopardy.

Their passes might sail high, but three NFL quarterbacks have landed far short of expectations.

Kansas City’s Matt Cassel, Seattle’s Russell Wilson, and the New York Jets’ Mark Sanchez aren’t the only starting quarterbacks who are struggling–there are several–but they’re the ones inching ever closer to the bench.

We’re not told why or how these particular stories were picked as representative of their respective genres, or why the computer story runs 192 words while the human one is cut off at 143. Surely a better methodology would at least entail using stories covering the exact same event. Plus, the number of articles used (one each) is so small as to render any conclusions useless, a point the study itself concedes:

The result of this study is based on a small and quite skewed sample. A larger sample, representing a larger part of the general audience, may yield a different result. Furthermore, the articles used may not be very representative for either software-generated content or content written by a human. The experiment comprised one article from each category, making the risk for a skewed result quite apparent.

But here’s all Wired says about any of this: “In what was admittedly a small-scale study…” Motherboard says, “the sample sizes are small.” And The Guardian doesn’t mention anything about the serious methodological flaws. This story was too good to check and the clicks were too easy to garner.

This isn’t to say the robots aren’t coming for at least some of the journalism jobs. Bots are already churning out basic copy across the industry. Bloomberg reported the other day that “Your Job Taught to Machines Puts Half U.S. Work at Risk.” Translated from Bloomberg-speak, that means half of existing US jobs could be automated.

Some of those jobs will surely be journalists’ jobs. But this study doesn’t show us much of anything about that.

Ryan Chittum is a former Wall Street Journal reporter, and deputy editor of The Audit, CJR’s business section. If you see notable business journalism, give him a heads-up at rc2538@columbia.edu. Follow him on Twitter at @ryanchittum.