The News Frontier

Testing the Limits of Crowdsourcing

An experiment in automated reporting opens up the debate
February 9, 2011

When ProPublica launched its Recovery Tracker project, a massive, searchable consolidation of government data on stimulus funding in the U.S., it got some assistance from Mechanical Turk, an Amazon marketplace that matches workers with small, easily completed online tasks. In their write-up on the best ways to use mTurk for data projects, Srinivas Rao and Amanda Michel emphasize that ProPublica used the service only for the simplest jobs, such as reformatting data and checking spreadsheets for redundancies. They advise other journalists to do the same:

Make sure your project is suitable for mTurk. Can you easily deconstruct your project into tasks that can be completed independently? Use mTurk. Do these tasks require specialized knowledge? Don’t use mTurk. Are the tasks simple and quick to finish? Use mTurk. Are there multiple correct answers for each task? Don’t use mTurk.

Jim Giles, a California-based freelance journalist, isn’t taking that advice. He has teamed up with computer scientists from Carnegie Mellon on a project called “My Boss is a Robot” to see what happens when mTurk workers attempt real journalism. Or, “journalism.” In this experiment, Giles and his colleagues will have a team of mTurkers read a new paper from a scholarly scientific journal, report on it, and write a short article about it.

Here’s how it works. Giles has already written a series of very generic questions that any science journalist might ask an expert about a new paper that’s been released—questions like: What’s new and exciting about this paper? Are there any shortcomings in the methodology of this paper? What kinds of applications might these findings lead to? After that, the process is hands-off.

The team will choose a scientific paper, and a computer program will divide and assign the various tasks involved in generating an article about it. Several mTurkers will read the paper and try to identify the “expert” sources who can answer questions about it. Several others will track down those sources’ contact information. Once that information is compiled, the sources will receive an automatically generated e-mail containing the generic questions. When the answers come back, some mTurkers will choose quotes from them, others will write summaries of the paper’s findings, and still others will combine the quotes and summaries into an article.
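The coordinating software hasn’t been spelled out in detail, so here is a minimal Python sketch of how such a pipeline might be wired together. Everything in it is hypothetical: the stand-in post_tasks function simply prompts at the command line where the real system would post each stage to Mechanical Turk as a batch of paid tasks and collect the workers’ responses.

    GENERIC_QUESTIONS = [
        "What's new and exciting about this paper?",
        "Are there any shortcomings in the methodology of this paper?",
        "What kinds of applications might these findings lead to?",
    ]

    def post_tasks(instructions, items):
        """Stand-in for posting a batch of Mechanical Turk tasks (HITs).
        To keep the sketch self-contained, it prompts one 'worker' at the
        command line instead of calling the real mTurk service."""
        responses = []
        for item in items:
            print(instructions)
            print(item)
            responses.append(input("worker response> "))
        return responses

    def produce_article(paper_text):
        # Stage 1: workers read the paper and nominate expert sources.
        experts = post_tasks("Name an expert who could comment on this paper.",
                             [paper_text])
        # Stage 2: other workers track down contact details for the experts.
        contacts = post_tasks("Find an e-mail address for this person.",
                              experts)
        # Stage 3: each source gets an automatically generated e-mail with
        # the generic questions; replies come back as text (simulated here).
        replies = post_tasks("\n".join(GENERIC_QUESTIONS), contacts)
        # Stage 4: workers pick usable quotes from the sources' answers.
        quotes = post_tasks("Choose the best quote from this response.",
                            replies)
        # Stage 5: other workers summarize the paper's findings.
        summaries = post_tasks("Summarize this paper in two sentences.",
                               [paper_text])
        # Stage 6: still others weave the summary and quotes into an article.
        drafts = post_tasks("Turn this summary and quote into a short article.",
                            [summaries[0] + "\n" + quotes[0]])
        return drafts[0]

In the real experiment, each stage also builds in redundancy: several workers attempt the same task, presumably so the best responses can be chosen or combined before the next stage begins.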

Does this sound like a disaster in the making? Probably. On the website documenting the project, Giles writes:

We created this blog to chronicle our attempt to answer a question: can unskilled, crowdsourced labor be used to create a product that requires skills, experience and insight?

That answer seems simple (no!), and Giles and his colleagues aren’t making any promises. But it’s a timely experiment, one Giles says was inspired by the parts of publishing that are already becoming automated, with mixed results. Demand Media, for instance, uses an algorithm to identify which topics readers are searching for online and which topics it can hope to sell ads against. Narrative Science, meanwhile, sells software that spits out short sports articles from a set of scores and statistics.

“My Boss is a Robot” is the next logical step, with one key difference: the process is transparent. Giles and his colleagues will show the results of each step, blog about it on their site, and welcome feedback through Twitter. The end result will be displayed alongside a professionally written article on the same topic, so readers can see how the two compare.

“These things tend to happen sort of quietly,” says Giles. “A company rolls out the system, and readers won’t really know what it is that’s behind the content that they’re consuming. The debate in journalism tends to sort of lag behind the developments in technology.”

This particular debate is as timely as it is important. Giles notes, for instance, that the only communication between mTurkers and sources will be the automatically generated e-mail; no phone calls are allowed. “That to me would be a step too far,” he says. But could unskilled, untrained non-journalists do the job as well, and as ethically, with strict enough guidelines put in place by an editor or an algorithm?

What about the impersonal, algorithmic aspect of the process? How will it affect the relationship between source and reporter? “I don’t know how sources will feel about being e-mailed by a piece of software—is that something that’s going to annoy them?” Giles wonders.

It’s a trade-off, of course: cheap, fast, inexperienced non-journalists armed with generic questions versus slow, expensive, experienced journalists who can devise more specific ones. And having one person handle the entire assignment, with working knowledge of all the parts and how they fit together, seems like the logical way to work. But the “My Boss is a Robot” team makes the point that, if it works, mTurk could lend itself to much more significant processes. From the website:

Tasks that involve planning and creativity don’t seem to lend themselves to the platform. But maybe that’s because no one has figured out a way of breaking these more complex processes into a series of straightforward tasks. If so, many more processes — product design? medical diagnosis? — might be crowdsourced.

Now there’s a scary thought. If you think it’s weird to have a hundred anonymous Mechanical Turk workers as your local news reporter, how about having them as your doctor?

Lauren Kirchner is a freelance writer covering digital security for CJR. Find her on Twitter at @lkirchner.