Journalists and computers have gotten through the awkward, get-to-know-ya phase of their relationship, but they still have intimacy problems, sometimes failing to understand each other’s wants and needs. But they’re really trying.

That was the lesson of five years, the amount of time that elapsed between the first ‘Computation + Journalism Symposium’ at Georgia Tech in 2008 and the second one, which took place last week. The events were designed to highlight prospects and problems at the intersection of news media and computer science research. Much has changed in the interim. Some things have not.

The biggest, most obvious sign of progress is the improvement in journalists’ technical literacy. In 2008, we set out to have a true nuts-and-bolts discussion about how computers can improve reporting and storytelling (see “Wiring Journalism 2.0”), but we lost a lot of time to basic questions about programming, statistics, and other fundamentals. In 2013, the literacy problem was gone. The computer scientists and engineers in the room were able to speak with journalists as they would with any of their peers.

Five years ago, people voiced frustration at the hard work of connecting journalism to computing. Now, they appear serious about getting things done, more willing to roll up their sleeves. Though the spirit varies, they have moved from grudging acceptance to purposeful enthusiasm for subjects like data journalism, digital storytelling, social media analysis, media economics, content interfaces, video, mobile, and artificial intelligence.

The session on image manipulation was led by Dartmouth computer scientist and entrepreneur Hany Farid and former CNN foreign correspondent David Clinch, now editorial director at Storyful, a startup that filters social media and user-generated content for news outlets around the globe.

The audience received a thorough primer on forensic image analysis techniques. Recalling doctored images from “superstorm” Sandy and a hoax video of an eagle grabbing a baby that went viral in December, Farid and Clinch walked through methods and tips for quickly identifying phony images. Reflections and shadows in photographs have to line up according to basic optical principles, for instance. Another rule of thumb: Any picture with an unusually placed shark is fake.

In a session on the social dissemination of news, led by social data scientist Gilad Lotan, head of R&D at the social media marketing company SocialFlow, and Eric Gilbert, a professor of interactive computing at Georgia Tech, attendees learned how top-level analysts produce insight from the Twitter and Reddit social data streams. Chris Wilson, a politics reporter at Yahoo! News, asked how to do better than saying, “This candidate is trending,” in an election story, so Lotan explained how social media could have provided more detail about how Ohio voters felt about Obama and Romney in the last election.

“Looking at how people are connected within that conversation reveals what the students are saying and what’s important to them, versus what’s going on with media and what’s going on with politicians,” Lotan said.

Also new to the 2013 symposium was data journalism giant Philip Meyer, who is best known for applying social science methods (hypothesis formulation and testing, reproducible experiments and data gathering, statistical and qualitative evidence to support conclusions) to reporting, a process he dubbed “Precision Journalism” in a book describing his technique, first published in 1973.

If computational journalism is going to follow in the tradition of precision journalism, he said last week, journalists need to apply the scientific method in their reporting, and not merely rely on computers to do the work. In doing so, he added, it’s also important they not lose sight of the power of traditional storytelling.

“Information is so plentiful,” Meyer said. “We need precision journalism and narrative journalism to come together in the same organization, maybe even inside the same head.”

Universities are revamping their curriculums to meet that challenge. Nicholas Lemann, the dean of Columbia Journalism School, which publishes CJR, said that 5 million Americans call themselves journalists of one sort or another, and it’s his mission “to produce the 5 percent that will be paid.” In an effort to do that, in 2010, he helped launch a dual-degree program in journalism and computer science with Columbia’s school of engineering.

“What can you put in somebody’s head that would enable them to enter a field with no barrier to entry, where there are no professional credentials, and establish themselves as someone who has economic value?” Lemann asked. “Computational journalism is an incredibly appealing answer to that question. We need to go from turning out a graduate with a commodity skill set to turning out someone with a high-value skill set.”

Brad Stenger was an organizer of the 2013 Computation + Journalism Symposium with Irfan Essa, Mark Hansen, Ian Bogost, and Nick Diakopoulos. Stenger is a journalist and researcher living in New York City.