A new report by the London School of Economics and Political Science (LSE) surveyed newsrooms across the globe on their use of AI. JournalismAI, an initiative from the media think-tank Polis supported by the Google News Initiative, conducted the first survey in 2019. Since then, the digital media landscape has undergone significant changes with the arrival of new generative AI tools such as ChatGPT and Google Bard. These tools have produced erroneous articles but have also created opportunities to make the news production workflow more efficient. The report, released two weeks ago, found that while most surveyed newsrooms have experimented with generative AI, only one-third believe their organizations are ready to deal with the challenges of AI adoption in journalism.
More than 120 editors, journalists, and other media professionals across 46 countries and 105 newsrooms shared their experiences. Almost three-quarters said generative AI presents new opportunities for journalism, and 85 percent said they have experimented with generative AI for tasks such as writing code, generating images, and authoring summaries. One of the main motivations cited for integrating AI into news production is its potential to increase efficiency and productivity, freeing up more of journalists’ time for creative work.
But over 60 percent of respondents also expressed ethical concerns about how to uphold journalistic values such as accuracy, transparency, and limiting bias while using generative AI tools. Many of these concerns revolved around the “black box” nature of AI systems and the lack of transparency about the data on which the systems are trained.
Newsrooms surveyed in the report called for more transparency from the technology companies creating AI systems and emphasized the importance of a ‘human-in-the-loop’ approach. Tow talked with Charlie Beckett, the founding Director of Polis and co-author of the report, about how the benefits of AI are unevenly distributed across newsrooms and what the future might hold for AI in journalism. Our conversation has been edited for length and clarity.
SG: This is the second JournalismAI global survey; the first was conducted in 2019. How did you adapt the survey to the recent technological advancements in generative AI?
CB: We adapted it partly to capture the sudden explosion of generative AI, and partly because we’ve been really interested in inequalities around these technologies over the last four years. So this time around, we put more emphasis on global south news organizations, and it was a bigger sample. But broadly speaking, it was the same. It was asking people: What are you doing with AI and generative AI? What kind of strategy have you adopted? What kind of risk mitigation? What kind of problems have you experienced? What do you think about it in ethical terms, and about your future work?
In the report, you mentioned that there is a disparity between the use of AI in the global north vs. the global south. Can you expand a bit on these differences?
“Global south” is a bit of a clunky phrase; it doesn’t quite capture the complexity of this. You can have inequalities around these technologies within a particular country. The big news organizations have the resources to handle these new tools; smaller ones don’t. These tools are built primarily around the English language, so there is a disadvantage for non-English-language organizations. It’s partly about language, but it’s mainly about resources. We know very well that these technologies emerge mainly from the West Coast of America, and sometimes from Western and Northern Europe. Less so elsewhere. I spend a lot of time working with and visiting these global south organizations, and I’m impressed with how they’re able to adapt these tools, especially with generative AI. So there is, in a sense, an equalizing effect: anyone can use ChatGPT, obviously, if they can afford the subscription. The technical barriers are lower with generative AI. That doesn’t mean it’s easy or that we’ll eliminate all the inequalities, but there is at least the potential for people to adapt to it.
So it’s in part a language barrier that’s making it harder for global south newsrooms to adopt AI tools?
Well, it’s interesting, because AI is solving that as well. Translation is just getting fantastic. And that is incredibly important for any news organization that’s trying to gather or disseminate news in different languages. So the power of translation and reformatting opens up great potential for people to break down some of these barriers.
You mention that small newsrooms don’t always have the resources to handle new AI tools. Is that true both in the global south and in the US?
Both. Local news in America is significantly under-resourced, though it’s still much better resourced than anywhere in Sub-Saharan Africa. There’s a lot of attention being paid to local news by foundations in North America; people are trying to help out, and those newsrooms are getting far more philanthropic attention than elsewhere in the world. One of the problems is your risk calculation: if you’re in a reasonably mature democracy with a mature economy, like Europe or North America, it’s much easier to say, “Okay, let’s try out some new tools. Let’s have a go at creating content in different ways.” That’s much harder to do if you’re under the political, financial, and economic pressures that many global south publishers face. They don’t have resources, and neither do their consumers. That makes it hard.
Around 90% of respondents welcomed a stronger role played by universities, journalism schools, and other intermediary companies in assisting with adopting AI in newsrooms. Were there any specific examples of what such collaborations could look like?
What we’re doing at the LSE is a really good example. We’re informed by the university’s research capabilities; we are a research project in many ways. We publish these reports and create an incredible volume of material by working with journalists and news people to increase knowledge transfer, build up resources, do innovation experimentation, and hold a dialogue around ethics and editorial policy. We’re doing that from within the industry. It’s a think tank process, and I think that’s really refreshing. This is partly because the news industry as a whole has had crisis after crisis, and it knows that individual news organizations can’t stand on their own.
There have been debates within journalism schools about whether to incorporate AI into the curriculum. Some are against it, some are for it. What’s your stand on it?
The people who say not to incorporate it are just arguing against reality. AI is already part of news work, and has been for five or ten years. Generative AI is another version of this, and it does bring risks. But it also brings incredible opportunities. The whole point of teaching journalism is to say: this is how you mitigate risks, avoid inaccuracy and defamation, do sourcing, and use good judgment. The same thing applies here.
Obviously, you can make bad decisions with this technology, as with any other. You can say, ‘I want to use it to make commercial clickbait and I don’t care whether it’s accurate or not.’ And you know what, there’s a lot of journalism out there that does that already. So the idea that generative AI is some sort of Pandora’s box that’s going to commit an original sin on this pure halo journalism that we enjoy at the moment is crazy. Quite the opposite. In a more optimistic scenario, it can do all sorts of things, like help fact-check and make newsrooms more efficient. Then you have more time for the human elements: reporting, talking to people, being diligent, and doing research. All those things will add value to your journalism.
One of the ethical concerns around generative AI is its potential to spread misinformation. This also showed up in the report. Do you think it’s a valid concern?
Yes, it will make it much easier to spread bad information and to create that information in more convincing ways. No doubt about it. There will also be tools that help prevent that, but it’s going to be very difficult. Just like with social media, you can’t expect a technological solution. You have to talk about literacy and about regulation. A lot of the problem around misinformation is mainstream media itself. The idea that mainstream media is a paragon of virtue that only produces beautiful, scientific, objective news is rubbish. A lot of mainstream media, politicians, and celebrities are deliberately spreading and amplifying misinformation, disinformation, propaganda, and hate speech. So yes, I’m definitely worried about it. But I do think there’s a danger of exaggerating the problem and its technological source. And there’s a danger of exaggerating technological solutions, because technology is not going to be the solution.
In the report, you point out that this is the third wave in journalism. The first wave was when journalism went online, the second was the arrival of social media. Are there some lessons from the previous two waves that we can apply to the current one?
We’ve seen it all before. We’ve seen the moral panics. I was at the BBC when the internet came along and they created BBC Online, and everyone said, ‘What a waste of money and time.’ So there are a lot of cliches about the cycle coming around again. But I do think it happens differently each time. Social media was quite different from the arrival of the internet. Putting your stuff on a website was a two-dimensional process, literally; social media was a multi-dimensional shift. We have to think of AI as a different category. It’s not just another wave. It’s been very slow but has also arrived all at once, since we’ve had versions of machine learning for some time. Generative AI gives the impression of an accelerant. And it may well be that there’s a bit of a plateau.
There’s the usual cliche that we exaggerate the short-term impact of new technologies and underestimate the long-term impact. I don’t know what the long-term impact will be; we didn’t know when the internet came along, or with social media. But my hunch is that legacy media will be stronger than people think, and that it won’t be eradicated by ChatGPT or a variant. I do think we are heading into a much more structured news environment, where the article will be broken down much more into components, and we’ll be reformatting, reshaping, and thinking about how we assemble the journalistic experience in a much more structured way, a bit like how AI itself works.