Q&A: The wireless telegraph changed journalism. AI will change it again.

Artificial intelligence tools will likely change the face of news and the media industry, says the AP’s Aimee Rinehart.

September 26, 2023
In 1899, the New York Herald and Associated Press used Guglielmo Marconi’s wireless equipment to cover the America’s Cup yacht race. Credit: Wikimedia Commons

When OpenAI released its large language model (LLM) chatbot ChatGPT to the public in November last year, it sparked a fresh wave of hype and excitement about new uses and potential dangers of artificial intelligence (AI) in a range of industries, including journalism. Since then, news organizations have jumped into action to prepare rules on the ethical use of these technologies. This month saw the release of an Oxford Internet Institute (OII) study of the journalistic AI policies of fifty-two newsrooms around the world (which has not yet been peer reviewed). After analyzing the policies, the authors said the media industry’s “self-regulation on AI seems to be well underway” and found “a surprising degree of uniformity” across twelve countries. It complements a project spearheaded by Nick Diakopoulos, a professor in communication studies at Northwestern University and a Tow Center fellow, on generative AI in the newsroom, an ongoing effort for media organizations to “collaboratively figure out how and when (or when not) to use generative AI in news production.”

But the OII authors, who include Felix M. Simon, an OII doctoral researcher and Tow Center fellow, cautioned in the paper that there are still blind spots for journalists to address. Newsroom AI guidelines tended to miss critical areas such as enforcement and oversight of rules, the potential for newsrooms’ technological dependency on big tech companies, audience engagement in AI decisions, sustainability in AI supply chains, and the handling of human rights issues like labor exploitation, data colonialism, and workplace surveillance. As generative AI tools continue to develop at pace—Google last week announced a major expansion of its Bard AI chatbot—journalists need to sharpen their AI guidelines to operate in this new environment.

One person helping newsrooms incorporate these tools is Aimee Rinehart, senior product manager for AI strategy at the Associated Press (AP). After a career at news organizations including the New York Times, the Wall Street Journal, and First Draft, Rinehart is co-instructing a course at the University of Texas at Austin’s Knight Center on how newsrooms can effectively and ethically incorporate generative AI. The goal of the course is to “take down the temperature of the angst and turmoil over this topic, and I really want to give—in plain language—how this technology works and how we can work with that technology,” Rinehart told me. “We need to come together as an industry to figure out where our lines are, where we will not cross, and best practices for when we do want to use this technology.” The conversation below has been edited for length and clarity.

JB: You’ve identified four key areas in which news organizations can incorporate AI: newsgathering, production, distribution, and the business side. Can you tell us more about this?

AR: When you talk to anyone in a newsroom and you ask them, “Where do you need help?” they will just gesture in all directions. For us, it’s a way to break down that process and say, “Okay, in your newsgathering part, where’s your reporting team getting tripped up? What is causing them frustrations, delays, tedious work?” Then you isolate that area of complaint or frustration and see if there’s a tool for it. Or maybe there’s no tool at all; maybe it’s a matter of redesigning the workflow in general. Sometimes there’s no need for a tool, but other times it’s like, “Oh, you need basic process automation,” nothing fancy. We don’t move on to generative AI until we’ve exhausted every other option.

So what are some examples for newsgathering in particular that news organizations are starting to incorporate and explore?

For newsgathering, I think most modern reporters now use a transcription service. Oftentimes it’s Otter.ai, because it’s one of the cheaper options; you might even be using it for this interview. It makes the transcription process so much easier than the old days of a foot pedal and a tape that could get mangled. Then there are other tools: third-party services like Samdesk or Dataminr or NewsWhip, where you can identify keywords that you’re looking for across the internet. For years now, data journalists have been using artificial intelligence techniques to analyze and process large datasets. The Panama Papers came together because reporters were able to use artificial intelligence to identify keywords in a mountain of information.
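[For illustration: a minimal sketch, in Python, of the basic technique Rinehart describes here, scanning a large collection of documents for keywords a reporter cares about. The folder name and keyword list are hypothetical, and real investigative pipelines, including the Panama Papers tooling, layered far more sophisticated processing on top of this idea.]

```python
# Minimal keyword scan over a folder of plain-text documents.
# Both the keyword list and the "leaked_docs" folder are hypothetical.
from pathlib import Path

KEYWORDS = ("shell company", "offshore", "beneficial owner")

def scan_documents(folder: str) -> dict:
    """Map each file name to the keywords found in its text."""
    hits = {}
    for doc in Path(folder).glob("*.txt"):
        text = doc.read_text(errors="ignore").lower()
        found = [kw for kw in KEYWORDS if kw in text]
        if found:
            hits[doc.name] = found
    return hits

if __name__ == "__main__":
    for name, terms in sorted(scan_documents("leaked_docs").items()):
        print(f"{name}: {', '.join(terms)}")
```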

Your research illustrated a growing gap between local and national newsrooms in the use of AI, which in your role at AP you’re hoping to shrink. Can you talk a bit more about that?

Large global and national newsrooms are well-resourced. They can take chances, and a risk [gone wrong] might not mean total devastation and failure. But for a local newsroom, adding any risk to their already risky [financial] situation is just not palatable. Local newsrooms have not been in a dream position in a very long time. They don’t wake up and say, “What can we do differently today? How could we make our journalism go faster and farther?” It’s not like that. It’s down to shoveling coal—they’re inundated with information about their community, partially because other newsrooms around them have closed, and internally they are likely dealing with fewer reporters and editors. That combination is really hard for the journalists who remain and doesn’t put them in a position to experiment and reach for the stars every day. They just need to get the job done. Automations and tools that include AI can be a real help to the journalists who remain in the newsroom, because they really need it—that’s where technology could be used for good.

Speaking to people in the industry, what are some of the concerns or objections you hear from people about using AI in newsrooms?

Part of it is people don’t like to think that a machine is going to take their job away. So job security, job longevity, relevancy in their topic: they’re worried about all of that being outsourced. Generative AI sets off all kinds of new buttons, because the training data and the output are a black box. For journalists, sourcing is number one for the job—“Where did you get your information from? How do you know that?”—and generative AI tools can’t tell us those things. So automatically, journalists get very keyed up about that. Also, for newsroom leaders who are responsible for keeping the lights on, the fact that these [tech] companies have taken and scraped all of their information—without permission or licensing—is egregious. It’s not cheap to produce news, it’s often not safe to produce news, and there’s an incredible amount of risk and ownership and drive that newsrooms take on. That hasn’t been compensated. Newsrooms deserve compensation, especially as these AI tools will eventually be licensed out to make money.

One criticism of AI is that it will lead to news organizations—which are already dependent on big tech companies for news distribution—becoming dependent on Silicon Valley for news production, but without the agency to steer decisions themselves. What do you make of that concern?

Yeah, I mean journalists have been on the back foot with big tech since the mid-nineties. It’s not a position I’m comfortable with; I’d really love to change that proposition. I think the industry needs a large language model. I don’t know how much that would address that ownership question, but I hope it would move it in a more positive direction that favors newsrooms and not big tech. We have to build it in order to test that out. You obviously can’t remove the probability factor of the [unfactual] hallucinations in a large language model, but we could try and license quality content that is more representative than what is out there right now in existing LLMs.

You’ve also served on the steering committee of the Partnership on AI, which promotes responsible use of artificial intelligence. Do you feel like the leaders of news organizations around the world collaborate enough to discuss guardrails on AI?

Nicholas Diakopoulos and Felix Simon have done two different reviews of AI standards in newsrooms all over the world, and they found similar common themes. Transparency is a big one: if you use any aspect of a tool that includes AI, you should be transparent about it, for instance in the byline or the tagline of a story. AP has released its standards on generative AI for journalists to the public. And some of the AP internal guidance is really amazing; I’m hoping that eventually gets shared out. Because to me it was really inspiring in terms of understanding the issue from a reporting angle and how to keep sources and newsroom integrity safe in the face of these chatbots or generative tools.

Given how easily accessible generative AI tools currently are, there’s potential to industrialize mis- and disinformation. You previously worked at First Draft, helping journalists with these reporting challenges. What worries you most about AI’s impact on public discourse?

I think we’re going to see another level of sophistication in terms of mis- and disinformation. Next year is a big election year in the US—it’s the national election, but there are also a lot of local elections. Because this technology is often free or low-cost and very accessible to anyone—you don’t have to be a programmer—I’m worried about down-ballot races that people aren’t paying attention to. For instance, generative AI used for robocalls, or misrepresenting your opponent, or all the traditional tactics that politicians have used, but now faster and cheaper. That concerns me. We don’t have enough people locally to identify when generative AI is being used for robocalls or image manipulation or video manipulation. We’re already seeing it, with the national political action committee for [Florida governor] Ron DeSantis using audio and photos to misrepresent [former president] Donald Trump.

I think of next year as perhaps the first AI [presidential] election. Do you think newsrooms are ready as we enter the election cycle in earnest?

I hope so. I don’t think you can walk ten feet anymore without hearing about AI, so I do think at least it’s on the radar. Maybe people, if they see something like the Pope in a puffy white jacket, might think, “Hmm, could that be true?” My hope is it’ll be on more journalists’ radars. But there isn’t a reliable generative AI image detector. Every time there’s an update or an advance in that technology, your identification tool has to match it. So it remains to be seen. I think there aren’t enough verification skills in newsrooms. At First Draft we tried to build that up. But people sometimes still think of that as a social media desk job, when that skill set really needs to be distributed across the newsroom.

You’ve been working on journalism and the internet since 1996, and you’ve seen many different technological changes pervade newsrooms. I wonder what sense you have of where we’ll be in ten years—how do you think AI is going to change newsrooms forever?

It stands to reason that it will change the way we operate. I think it will build efficiencies into workflows. Also, one thing I’m really puzzled by is how people will come to want to get information—how does that technology inform the way people come to a news site and want to engage? If we go back to the telegraph, which the AP pioneered the use of in reporting in 1899 [when the New York Herald and Associated Press used Guglielmo Marconi’s wireless equipment to cover the America’s Cup yacht race], that technology gave us the inverted pyramid news structure: if the transmission conked out mid-message, at least you’d have the Who, What, Where, When, Why, and How. The technology really informed the design of a piece of information.

So how will AI inform what people expect when they come to a news site? People want answers, not necessarily a bunch of links. That’s going to be another big shake-up in the next six to twelve months—generative search [engines] will likely upend how people come to your information site. It’s a one-two punch: newsrooms have already seen a stark drop in referrals from social media platforms, and soon search engines won’t [be referring to news sites as much]. So it’s very important that newsrooms figure out how to bring audiences to them, whether that’s through a newsletter, a podcast, or an app. We have to be better stewards of building out our own platforms and not relying on third-party apps.

About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.
