The Observatory

Wiring Journalism 2.0

Brad Stenger on the intersection of the press and computer science
February 29, 2008

How are the media adapting to the new digital technologies that power blogs, interactive graphics, and social networks? Quickly is one answer, but the advent of digital (what used to be called “new”) journalism is more complicated than that. Last weekend, the Georgia Institute of Technology held a two-day symposium titled “Journalism 3G: The Future of Technology in the Field.” CJR’s Curtis Brainard talked to co-organizer Brad Stenger, who is also research director of Wired’s annual tech fair, NextFest, about the merger of computers and the press.

Curtis Brainard: There are a lot of very technical things that need to be done to accomplish this digital revolution in the media. Are these things that traditional journalists can learn and do themselves, or is this going to require a whole new subset of the newsroom that has special knowledge in computer science, coding, and programming?

Brad Stenger: There are two fronts to answering that question. One is the general question, which isn’t just why journalists should program, but why everyone should. When you think about the positive effects that just a little programming know-how can have for a person in just about any field (productivity, exposure to ideas, what they’re able to learn and accomplish), it really makes sense for just about anyone to pick up programming on some level.

The second half is this: assuming that not everyone programs, is a division of labor in the newsroom okay, and how does it work? Every news organization has an IT staff, and one of the striking things I saw at the conference was how IT staffs are there to maintain infrastructure; they’re not necessarily there to deal with data and to generate insight from vast amounts of it. That’s a completely different skill set. If you have someone in an IT department who can do both of those things, they’re not long for that department; they’re too valuable and too talented. So it seems like the division-of-labor question is being misaddressed by news organizations across the board: IT and maintaining infrastructure is different from dealing with and processing news as data, especially for the purpose of getting insight out of it.

CB: So you believe that the best product will result from this new breed of journalist that is fluent in both reporting and writing and in creating the underlying package and distribution infrastructure?

BS: If you’ve got a journalist who is data-literate, then the division of labor with IT smooths out a little bit and productivity goes up. And if the journalist isn’t, then it’s frustrating. There’s the potential to have a second IT core that does more with data than with maintaining infrastructure, some sort of specialist. I wouldn’t want to rule that out; it seems like it would be valuable. But it’s hard to say whether that’s a straighter line to a solution than getting journalists up to speed on computing and computation individually.


CB: It sounds like this type of journalist that’s fluent in both halves of the operation is still pretty rare. Did the Georgia Tech conference address that?

BS: Well, we didn’t know how rare. Yeah, I think it does bear out that it is, but no one has really checked. The truth of it will materialize in the next three to six months, as we see whether progress shows up in actual projects and actual things that get done.

CB: The conference seemed to revolve around five major uses for computational journalism: newsgathering, speed and workflow, social networking, interactive and participatory multimedia, and data visualization. In which of these did the audience seem most interested?

BS: It varied person to person, and on both sides of the fence, whether it was a computer person or a news person. But the rationale for the event came from the fact that every one of those areas in journalism has an analogous research subject in computer science. Newsgathering corresponds pretty closely to an area of computer science known as sensemaking: how do you go from not understanding a problem to understanding it? An example is Google.
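To give a flavor of what sensemaking means in code, the toy Python sketch below ranks a handful of documents against a reporter’s query by simple term overlap, the crudest version of what a search engine like Google does at scale. The documents, the query, and the scoring scheme are all invented for illustration; nothing here comes from the conference itself.

```python
# A toy "sensemaking" search: score each document by how often the
# reporter's query terms appear in it, then rank the documents.
from collections import Counter

def tokenize(text):
    """Lowercase and strip trailing punctuation from each word."""
    return [w.strip(".,!?").lower() for w in text.split()]

def score(query, doc):
    """Sum the occurrences of each query term in the document."""
    terms = Counter(tokenize(doc))
    return sum(terms[t] for t in tokenize(query))

# Hypothetical documents a reporter might be sifting through.
docs = {
    "budget-memo": "The city budget shortfall widened after pension costs rose.",
    "council-minutes": "Council members debated the pension fund and road repairs.",
    "press-release": "The mayor announced a new parks initiative this spring.",
}

query = "pension budget shortfall"
ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
for name in ranked:
    print(name, score(query, docs[name]))
```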

CB: What else can computers enable for journalists besides searching for information?

BS: Content management. People are really good at making social connections and that’s going to happen whether there are computers or not. What computers help with is scale. It’s easy to manage hundreds or thousands of friends on Facebook, and that scale isn’t possible without some pretty heavy-duty computing going on.

CB: What about interactive multimedia and graphics?

BS: Well, it’s pretty much a given that there’s an arms race going on among news organizations to do this better. What you’re not seeing in a lot of organizations, and where The New York Times is ahead, is the productivity and the manpower it takes to support all of this work. Where the computer science comes in is not so much the one-off information graphic. That’s a practice news organizations have had for decades, and for a lot of the interactive information graphics that go on Web sites, it’s the same sort of production pipeline. But now you build this machine, for lack of a better term, that functions as an information graphic but will take yesterday’s information, today’s information, and tomorrow’s information too.

CB: So this is also a paradigm shift: less retrospective, more ongoing and fluid?

BS: That’s really the essence of computational journalism: you’re building tools that deal with streams of information. You deal with streams on the pre-production side of things (research, reporting, newsgathering, sensemaking, generating insight) to develop news stories and find out where trends are going and what hasn’t been told to the public. And then once that’s done, there’s the final product, the public-facing side of the machinery. Once you’ve got something figured out, these sorts of machines can be built to run and run and run.
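To make that “machine” concrete: the pattern Stenger describes boils down to poll a data feed, recompute, and republish the graphic. The Python sketch below is one hypothetical way to wire that loop together; the feed URL, the neighborhood column, and the hourly refresh interval are invented for the example, not anything demonstrated at the conference.

```python
# A minimal sketch of a self-updating information graphic: fetch the
# latest data, rebuild a bare HTML bar chart, repeat on an interval.
import csv
import io
import time
import urllib.request

FEED_URL = "https://example.com/city-crime.csv"  # hypothetical data feed
REFRESH_SECONDS = 3600                           # re-run every hour

def fetch_rows(url):
    """Download the CSV feed and parse it into a list of dicts."""
    with urllib.request.urlopen(url) as resp:
        text = resp.read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))

def render(rows, path="graphic.html"):
    """Regenerate the public-facing graphic from the current data."""
    counts = {}
    for row in rows:
        key = row["neighborhood"]  # hypothetical column name
        counts[key] = counts.get(key, 0) + 1
    bars = "\n".join(
        f"<div>{name}: {'&#9608;' * n} {n}</div>"
        for name, n in sorted(counts.items(), key=lambda kv: -kv[1])
    )
    with open(path, "w", encoding="utf-8") as f:
        f.write(f"<html><body><h1>Incidents by neighborhood</h1>{bars}</body></html>")

if __name__ == "__main__":
    while True:  # yesterday's, today's, and tomorrow's data alike
        render(fetch_rows(FEED_URL))
        time.sleep(REFRESH_SECONDS)
```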

CB: You’ve said that your conference “set the stage for disruptive innovation.” How so? What will that look like?

BS: I think it’ll just look like more. You’ll have everything that you have today; it’s just that computation gives you more: more opportunity, different media formats, more ways to generate insight, tell stories, and bring in people who can build bridges from legacy skill sets to the newer ones. There is no limit to what someone is going to invent by combining existing legacy interfaces with newer interfaces like Twitter, Digg, and other sites that are impacting the news.

CB: Accepting that there are no limitations, how long will it be until we stop talking about this arrangement as a point we want to get to and start talking about it as where we are?

BS: What Web 2.0 enables in software development is a really fast development process. You can go from an idea to a working prototype to something that lots of people can use very quickly, on the order of a couple of months. What I think will happen, and we might already be a couple of months into this, is a twelve-month window when you see an explosion of creativity and interfaces, where people understand what existing tools they can leverage. And then it stands to reason that once people start seeing where the sweet spots are, you could get another boom, where at the beginning of those twelve months we’ve got the world we’re in, and by the end of it a completely different one. That hasn’t started noticeably yet. I’m hoping that what we did at Georgia Tech helps to push that forward, maybe even trigger it.

Curtis Brainard writes on science and environment reporting. Follow him on Twitter @cbrainard.