The Observatory

Journalism’s circuit board

Computer literacy on the rise, but technology transfer lags
February 6, 2013

Journalists and computers have gotten through the awkward, get-to-know-ya phase of their relationship, but they still have intimacy problems, sometimes failing to understand each other’s wants and needs. But they’re really trying.

That was the lesson of five years, the amount of time that elapsed between the first ‘Computation + Journalism Symposium’ at Georgia Tech in 2008 and the second one, which took place last week. The events were designed to highlight prospects and problems at the intersection of news media and computer science research. Much has changed in the interim. Some things have not.

The biggest, most obvious sign of progress is the improvement in journalists’ technical literacy. In 2008, we set out to have a true nuts-and-bolts discussion about how computers can improve reporting and storytelling (see “Wiring Journalism 2.0”), but we lost a lot of time to basic questions about programming, statistics, and other fundamentals. In 2013, the literacy problem was gone. The computer scientists and engineers in the room were able to speak with journalists as they would with any of their peers.

Five years ago, people voiced frustration at the hard work of connecting journalism to computing. Now they appear serious about getting things done, more willing to roll up their sleeves. They have moved, to varying degrees, from grudging acceptance to purposeful enthusiasm for subjects like data journalism, digital storytelling, social media analysis, media economics, content interfaces, video, mobile, and artificial intelligence.

The session on image manipulation was led by Dartmouth computer scientist and entrepreneur Hany Farid and former CNN foreign correspondent David Clinch, now editorial director at Storyful, a startup that filters social media and user-generated content for news outlets around the globe.

The audience received a thorough primer on forensic image analysis techniques. Recalling doctored images from “superstorm” Sandy and a hoax video of an eagle grabbing a baby that went viral in December, Farid and Clinch walked through methods and tips for quickly identifying phony images. Reflections and shadows in photographs have to line up according to basic optical principles, for instance. Another rule of thumb: Any picture with an unusually placed shark is fake.


In a session on the social dissemination of news, led by social data scientist Gilad Lotan, head of R&D at the social media marketing company SocialFlow, and Eric Gilbert, a professor of interactive computing at Georgia Tech, attendees learned how top-level analysts produce insight from Twitter and Reddit data streams. Chris Wilson, a politics reporter at Yahoo! News, asked how to do better than saying, “This candidate is trending,” in an election story, so Lotan explained how social media could have provided more detail about how Ohio voters felt about Obama and Romney in the last election.

“Looking at how people are connected within that conversation reveals what the students are saying and what’s important to them, versus what’s going on with media and what’s going on with politicians,” Lotan said.

Also new to the 2013 symposium was data journalism giant Philip Meyer, who is best known for applying social science methods (hypothesis formulation and testing, reproducible experiments and data gathering, statistical and qualitative evidence to support conclusions) to reporting, a process he dubbed “Precision Journalism” in a book first published in 1973.

If computational journalism is going to follow in the tradition of precision journalism, he said last week, journalists need to apply the scientific method in their reporting, not merely rely on computers to do the work. In doing so, he added, it’s also important that they not lose sight of the power of traditional storytelling.

“Information is so plentiful,” Meyer said. “We need precision journalism and narrative journalism to come together in the same organization, maybe even inside the same head.”

Universities are revamping their curriculums to meet that challenge. Nicholas Lemann, the dean of Columbia Journalism School, which publishes CJR, said that 5 million Americans call themselves journalists of one sort or another, and it’s his mission “to produce the 5 percent that will be paid.” In an effort to do that, in 2010, he helped launch a dual-degree program in journalism and computer science with Columbia’s school of engineering.

“What can you put in somebody’s head that would enable them to enter a field with no barrier to entry, where there’s no professional credentials, and establish themselves as someone who has economic value?” Lemann asked. “Computational journalism is an incredibly appealing answer to that question. We need to go from turning out a graduate with a commodity skill set to turning out someone with a high value skill set.”

Eager to foster progress in the field, academic backers enabled the second ‘Computation + Journalism Symposium’ to forgo the industry sponsors that provided some funding in 2008. Financial backing came from the National Science Foundation and a group of media-innovation labs at Georgia Tech, Columbia, Stanford, Northwestern, and Duke.

These research centers are becoming the core of journalism’s R&D community, but one thing that hasn’t changed since 2008 is the news industry’s dismal record of transferring technology. University labs and startup companies have developed all kinds of tools to improve news production, but very little of that work has made its way into newsrooms or media products.

Sam Gassel, who works for Turner Broadcasting and was instrumental in creating CNN.com, told me that such open-ended relationships, without concrete deliverables, weren’t a viable starting point. Emily Bell, the director of Columbia’s Tow Center for Digital Journalism (who sits on CJR’s board of overseers), had a more dire warning:

I don’t think you’ll be able to survive or have a sustainable future as a news organization unless you have an efficient way of technology transfer. It’s still going to be an industry of journalists and storytellers but it’s a technology-driven industry.

The divide between computing research and journalism is closing, but some of the same issues came up in both 2008 and 2013. Chief among them is what to do about capital-T “Truth,” an issue that arises when even an enormous volume of systematic computational analysis delivers only probabilities, not certainty.

“If you say to somebody in an article there’s a 40 percent chance that this person is corrupt, you get sued,” said James Hamilton, an economist and public policy professor at Duke. But even when the odds are low, such statistical insights can provide useful tips on which leads to pursue and where to dig for a story, and Hamilton suggested that one could launch a news service offering data-driven tips, which “would be an intermediate product.”

Such a service would be an example of the technology transfer that research centers and newsrooms so badly need. Systems built to help journalists won’t ever be perfect, but they don’t need to be, as long as human innovation and expertise remain part of the equation. One day, it’ll be a beautiful marriage.

Brad Stenger was an organizer of the 2013 Computation + Journalism Symposium with Irfan Essa, Mark Hansen, Ian Bogost, and Nick Diakopoulos. Stenger is a journalist and researcher living in New York City.