This past weekend, the Brown Institute at Columbia and Hacks/Hackers, a nonprofit, put on a three-day event called the Open Source AI Hackathon. The idea, according to Burt Herman, the board chair of Hacks/Hackers and one of the founders of the organization, was to get journalists and coders into a room to talk about what they could build together. There were students, reporters, engineers, and AI-curious participants of different stripes. Some came with specific ideas about things they wanted to build. Others just wanted to find people with complementary skill sets and learn about building tools with AI.
Much has been made of the impact that AI-powered technologies could have (and, in some cases, are already having) on newsrooms. The technology has raised concerns among journalists about whether text-generating machines could produce—or at least passably mimic—what human professionals do. But there is something different about the challenge for the news media, an industry already casting around for any kind of foothold. The thesis of the Hackathon was that large-language-model tools actually have many potential applications in journalism. Broadly, the projects fell into two categories: those exploring the ways AI can be used as a journalistic or investigative tool and those experimenting with new ways of delivering media to consumers.
“Technology is how people consume media,” Herman said. “In some ways, we’re still basically printing paper on the internet—writing x thousand words and putting it on a page.” There’s a place for that kind of traditional journalism, he said, but there are so many more possibilities. “I think about how many more things we can do now, because you have this device that’s interactive.”
The event was capped at around a hundred participants because of limited space. On Friday, they gathered for dinner and a pitch session at the Brown Institute, housed within the Columbia Journalism School in a room with long wooden tables and an elaborate AV setup. Over dumplings, they heard project proposals: a bot to monitor the agenda at local council meetings, so as to flag newsworthy items; data visualization for mapping influencer networks; a system to read story drafts and identify potential gaps in reporting.
The next day, participants sorted into teams and got to work. Available for consultation were a handful of engineers from two additional cosponsors, Hugging Face and Codingscape, companies whose models could be used by participants in their tinkering. This, along with the opportunity to find collaborators, seemed to be the main appeal of the event. There was no competitive element and no prize money; there was ample food (though the coffee ran out quickly).
In the back corner of one of the rooms used for the Hackathon, I met Melanie Evans, a veteran reporter who writes about hospitals and healthcare for the Wall Street Journal. Because her work so often intersects with data, Evans had become interested in the use of computational tools in journalism. She set up a Google Alert for Hacks/Hackers and learned about the Hackathon that way.
When I sat down with her group, Evans was doing some programming practice exercises on her laptop while the more experienced coders on her team worked intently on the tool they had designed together. Her part, Evans explained, had been helping the team think through how their idea could be most useful to a working journalist like her. Evans’s research often involves poring over complicated academic studies. Sometimes she doesn’t know whether a paper is relevant or useful until she has already spent significant time reading it. So the group decided to build a tool that could produce article summaries, with language customized to a particular journalist’s level of understanding of the topic at hand. The tool would also help surface studies based on newsworthiness, which Evans had helped the group define: Is a given study novel? Does it have the potential to affect a large number of people? Is it cited frequently by other studies? Does it reference wrongdoing or offer a viable solution to a problem?
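To make those criteria concrete, here is a rough, hypothetical sketch of how a scoring rule like the one Evans described might look in code; the team’s actual tool relied on a language model, and the field names, thresholds, and weights below are invented purely for illustration.

```python
# Hypothetical illustration of the newsworthiness criteria Evans described,
# reduced to a simple score. The real tool used a language model; these
# fields, thresholds, and weights are invented.
from dataclasses import dataclass


@dataclass
class Study:
    title: str
    is_novel: bool            # Is the finding new?
    population_affected: int  # Could it affect a large number of people?
    citation_count: int       # Is it cited frequently by other studies?
    flags_wrongdoing: bool    # Does it reference wrongdoing?
    offers_solution: bool     # Does it offer a viable solution to a problem?


def newsworthiness(study: Study) -> float:
    """Score a study against the criteria; higher means more newsworthy."""
    score = 0.0
    if study.is_novel:
        score += 1.0
    if study.population_affected > 100_000:  # arbitrary illustrative threshold
        score += 1.0
    score += min(study.citation_count / 100, 1.0)  # cap the citation signal
    if study.flags_wrongdoing or study.offers_solution:
        score += 1.0
    return score


# Surface the highest-scoring studies first.
studies = [
    Study("Hypothetical hospital-pricing trial", True, 2_000_000, 40, False, True),
    Study("Hypothetical small pilot study", False, 500, 3, False, False),
]
for s in sorted(studies, key=newsworthiness, reverse=True):
    print(f"{newsworthiness(s):.1f}  {s.title}")
```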
Amina Mehti, a soft-spoken but enthusiastic member of the team, explained that one of the technical challenges of building this kind of tool is teaching it to detect cues about these parameters. Mehti, twenty-three, is currently pursuing her MFA in design and technology at Parsons. She and a classmate joined the Hackathon after hearing about it in an email from a professor. One of the things that excites her about large language models is that they can remove barriers for people with fewer technical skills by making it possible to give commands in plain English. Mehti can code, but she said that these days you can do a lot just by understanding the logic involved in giving commands to a computer. An application like ChatGPT, Mehti explained, can help take care of the syntax, allowing her to focus on the more creative aspects of a project. “I’m a designer,” she said. “I’m more of a thinker and strategist. I look at the functionality of the code that we’re building.”
This was a sentiment I heard often throughout the weekend: that large language models make programming more accessible. I mentioned to the team that Mark Hansen, director of the East Coast branch of the Brown Institute (the West Coast outfit being at the Stanford School of Engineering), was teaching students in his computational journalism class to use ChatGPT to help them write code.
“My heart sank a little when you said that,” said Evans, her practice exercises still open on the screen. “I’m really enjoying learning to code. It feels like the end of an era, though also the beginning of a new one.”
There is, of course, still value in technical skills, as evidenced by the tremendous amount of coding that needed to happen to bring the team’s program to life. But Mehti believes AI will continue to chip away at the more mundane aspects of the process, which is why she sees so much promise in the technology. “It gives you the freedom to do the things that people do best and machines could never do,” she said. “Like imagination, creativity, and innovation.”
Like Mehti, many attendees of the Hackathon seemed rather bored by the idea that AI would be another nail in the coffin of journalism. Sure, Mehti conceded, it can be used in inappropriate ways. “A machine’s fundamental nature is very binary and calculative,” she said. “It can’t substitute emotionality, which is so fluid.” That doesn’t mean, however, that the technology is fundamentally threatening. “I think that the best applications of AI are to replace the robotic parts of human life.”
“Within limits!” Eric Grachev, a recent computer science graduate from CUNY Hunter College and another member of the team, chimed in.
After the Hackathon, I asked Hansen and Herman about the open-source models many of the participants used to build their tools. “Open source” means that users have direct access to the underlying code (and, for AI models, typically the trained weights), which allows them to run a model on their own machines and, to some degree, customize it.
“When you make that precious FOIA request, and that bundle of documents comes back, the first thing you don’t want to do is give them away to somebody else,” Hansen said, using the acronym for the Freedom of Information Act. He explained that using tools through companies like OpenAI requires journalists to upload potentially sensitive documents to a third-party server. “Being able to run a model on your own means that you don’t have to give your data away,” he said.
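As a minimal illustration of Hansen’s point, and not a depiction of any participant’s actual code, the sketch below summarizes a document entirely on a reporter’s own machine using an open-source model through Hugging Face’s transformers library; the filename and the choice of model are assumptions.

```python
# A minimal sketch of summarizing a sensitive document locally with an
# open-source model, so nothing is uploaded to a third-party server.
# The filename and model choice are illustrative, not prescriptive.
from transformers import pipeline

# Downloads the model weights once, then runs entirely on this machine.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

with open("foia_response.txt") as f:  # hypothetical FOIA document
    text = f.read()

# Split long documents into chunks small enough for the model's context window.
chunks = [text[i:i + 3000] for i in range(0, len(text), 3000)]
summaries = [
    summarizer(chunk, max_length=150, min_length=30)[0]["summary_text"]
    for chunk in chunks
]
print("\n".join(summaries))
```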
According to Herman, another advantage of open-source AI is that it allows transparency into, and some control over, how the model you’re using is trained. “You know what data is used,” he said. Beyond offering insight into whether a model is suited to a particular task, that transparency “aligns with the ethos of journalism,” Herman said. “We’re not working in secret.”
But there is another reason to empower individuals to develop the tech themselves, according to Hansen. “The capacity to evolve a model and the capacity to tailor it means that there are more players than just the super-big companies,” he said. “And I think that it’s important that there’s somebody else setting expectations.”
Other notable stories:
- Yesterday, The Free Press, a site founded by the former New York Times columnist Bari Weiss, published an essay that excoriated NPR for moving to the left and abandoning curiosity and “viewpoint diversity” in its newsroom in recent years, singling out the broadcaster’s reporting on the Mueller probe, Hunter Biden’s laptop, and the origins of the pandemic for particular criticism. If the essay wasn’t surprising, the identity of its author was: Uri Berliner, who works as a senior editor on NPR’s business desk. In response, Edith Chapin, the broadcaster’s top news executive, strongly defended its output, while several NPR journalists questioned whether they can trust Berliner going forward given his recounting of private editorial conversations. NPR’s David Folkenflik has more.
- In related news, the media reporter Paul Farhi notes that NewsGuard, a company that rates the credibility of news organizations, has downgraded its score for the Times, which used to be perfect. NewsGuard still considers the Times to be “generally credible” but no longer believes that it meets the company’s standards for “separating news and opinion,” arguing that the paper frequently publishes opinionated content in its news section without flagging it as such, that “derision of Trump courses through basic news stories,” and that “an impression of partisanship lingers, especially among conservatives.” Farhi has more details in a thread on X, which you can find here.
- In media-jobs news, Fortune appointed Anastasia Nyrkovskaya as its chief executive, making her the first woman ever to run the publication. Elsewhere, Axios reports that Campbell Brown, a former news anchor who previously oversaw partnerships with news organizations at Facebook, will be a senior adviser to Tollbit, a startup that aims to broker deals between media outlets and AI companies. Axios also reports that Puck has acquired Artelligence, Marion Maneker’s newsletter on the global art market, from Substack—the first time Puck has bought a newsletter rather than building one itself.
- Yesterday, Robinhood, a financial-services company that operates a trading platform, launched Sherwood News, a new media outlet led by the veteran journalist Joshua Topolsky that will operate independently of Robinhood and cover a range of financial and cultural subjects. Topolsky “sees a market opportunity for a news brand to cover business in social media–first formats and a voice native to a younger generation of investors,” Sara Fischer reports for Axios. The site already has some three dozen staffers.
- And the Daily Beast’s Roger Sollenberger and Mini Racker report that Texas senator Ted Cruz, who is up for reelection this year, appears to have “turned a super PAC supporting him into a media company” by affiliating it with a podcast he hosts in partnership with iHeartMedia. Campaign finance experts have raised questions about the arrangement, the Beast reports, noting that Cruz is a “notorious Federal Election Commission troll.”
ICYMI: #FreeAlsu
Correction: An earlier version of this article misstated the names of Hugging Face and Codingscape.
Yona TR Golding was a CJR fellow.