Lately, conversations about the use and abuse of artificial intelligence have dominated the news. There was Nick Lichtenberg, an editor at Fortune, who told the Wall Street Journal that he has used large language models to churn out more than six hundred stories. Then some close readers pointed out that parts of a recent Modern Love column for the New York Times, by Kate Gilgan, appeared to have been generated by AI; Gilgan admitted to using AI “as a collaborative editor.” For a different section of the Times, Alex Preston used AI to write a book review that was found to have plagiarized a piece in The Guardian (the Times cut ties with Preston, who later apologized). Over at Wired, tech journalists discussed how they use machines in their reporting process.
Jeremy Caplan, who directs teaching and learning and the Entrepreneurial Journalism Creators Program at the Craig Newmark Graduate School of Journalism at the City University of New York, believes there’s a time and place for AI in journalism. His newsletter, Wonder Tools, highlights digital resources that can help journalists work “more thoughtfully, more carefully, more pragmatically, more efficiently,” he told me. But there’s a catch: “When AI replaces our thinking or replaces our words or acts as a substitute for us, that can lead down problematic roads,” he said. Our conversation has been edited for length and clarity.
CAG: What do you make of the criticisms leveled at writers who have admitted to using AI in recent weeks? I’m thinking specifically about the New York Times book review that plagiarized text from The Guardian, though of course there are others.
JC: As journalists, one of the key ways we can be thinking about AI is not as a tool to replace our work, but rather as a tool that can help us critique and strengthen our work. And that’s fundamentally different from asking AI to write on your behalf. We can, for example, use it to help us analyze our work for blind spots and see what we’re missing, or what we haven’t touched upon, or what facts we haven’t considered, what perspectives we’ve left out. We can ask AI to interview us, too. This is sometimes called a reverse interview, and it’s about getting us to think more deeply about something and surface our own ideas or our own understanding. And this is all, again, different from asking it to generate synthetic texts.
There are cases, as you pointed out, where that synthetic text may have its origins in someone else’s work, and that can be really problematic. I think this is a technology that’s not going away, and as we wrestle with it and explore it and experiment with it, we should keep in mind the core values that got us into this field to begin with. Periodically, though, people make mistakes. It happens in every field, and it’s not surprising that it happens in journalism as well. With new technologies, there is usually a period of adjustment and exploration or experimentation. During that period, people test the boundaries and sometimes go too far in one direction or another. That’s to be expected.
When and why did you start Wonder Tools?
I started Wonder Tools in April of 2020. I was alone in my room, as many of us were at the beginning of the pandemic, and my responsibility at work was to help my colleagues adapt to online teaching. I was also eager to help my journalism colleagues around the world as we all struggled together with figuring out how to adapt to living and working online. I had also been running the entrepreneurial journalism program cohorts for quite a while at that point, and I realized that this was an opportunity to practice what I preach and to create my own side venture, just as I’ve encouraged many other journalists to do over the years. Those factors came together with my interest in just having something creative to work on.
Part of what I wanted to identify was the importance that tools play in our workflow. A lot of people like to think of tools as a technical issue that is secondary to the primary work that we do: interviewing people and doing the hard work of writing and editing. But I actually think that the tools we use are very much a part of the work that we do; they sometimes hamper it, and they sometimes accelerate it. And so I think it’s really important not to let tools become an afterthought. Especially in this era, there’s been an explosion of tools and new ways in which we can use them fruitfully, productively, and creatively. There are also ways that tools can be used to shortcut processes that we shouldn’t shortcut, or to replace our own thinking or our own work. So not only are there many new opportunities, there are also new dangers and traps that we can fall into.
You mention tools can both help and hurt us. What are examples of both scenarios?
Take AI transcription tools. I think there are many cases where having a transcript so you can recall what was discussed in a meeting can be really helpful, particularly if you have lots and lots of meetings. It would be impractical to go through every one of them manually and transcribe them, or to rely exclusively on your own notes. You might be missing things, as our memories are flawed. You know, a lot of people talk about the flaws in AI systems and the biases in AI systems, and they sometimes forget that we as humans have lots of flaws in our systems and our capabilities, as well as unconscious biases of various kinds.
But there are also cases where having a transcript might reduce the extent to which we’re carefully noting important moments in a conversation. AI can faithfully reproduce what happens, but it can’t necessarily reproduce how you felt about a particular moment. There’s a risk that we might forget to apply all of our human sensibility to the conversations that we have. We have to identify where we are adding human value, and also where we can benefit from AI assistance under our direction.
A lot of journalists use AI for transcription purposes now, and that’s widely accepted. Are there any tools you feel journalists are neglecting?
There are a lot of different tools that can be quite helpful for journalists to integrate into their workflow. Many of them help us with the back-end work, so they’re not necessarily tools that the audience is going to see, but rather ones that help us work more thoughtfully, more carefully, more pragmatically, more efficiently. They include things like translation tools. We may use Google Translate, but we may also find that there are better, more contemporary tools, like DeepL, that can do more nuanced translation of material that we’re reading. Using AI tools like Sourcetable to examine large datasets that our human brains—or at least my human brain—are not able to grasp can help us identify outliers or patterns. We obviously have to go back and check and look at those data points. We have to do the human reporting and ask the questions, but the AI is often good at looking at data and giving us interesting observations to dig further into.
A tool like NotebookLM can be quite valuable in examining large sets of notes and documents, PDFs, research materials, and collections of material—including transcripts—and querying that material in ways that help us find the original source content and examine it closely. One helpful thing about NotebookLM is that it’s grounded in the material you give it, and therefore you can find citations that draw you back to the original. It lets us reexamine documents through different visual lenses, so we can generate infographics or slides or video overviews or audio overviews or data tables or mind maps to look at the information in fresh ways, with fresh eyes, which helps us ask fresh questions. These are updated versions of the tools that we’ve long used in journalism: we’ve used notebooks, we’ve used word processors, we’ve used filing systems, digital and analog. In that way, I think they are part of a long tradition of technology and journalism.
We’ve touched on documented instances of journalists using AI in irresponsible ways, but we haven’t talked about how often journalists are being accused of doing this. Is that suspicion warranted?
To some extent, there’s a legitimate reason for people to be skeptical. Readers should be, in general, skeptical of information that flows through this ecosystem. And yet, the research that I’ve seen suggests that people assume that a much greater use of AI is happening within these organizations than is actually the case.
So I think what this means for us in the journalism ecosystem is that we have to work hard to be super clear about what we are doing, what we are not doing, why we are doing it, and how we are doing it. We need strong AI policies that explain to people how we use AI, and what we use it for. And it’s challenging, because in many newsrooms, we’re still figuring it out, right? We’re still trying to experiment and explore and decide what’s actually useful and what’s legitimate to use and what’s maybe not advisable to use. And so while we’re figuring things out, we already have to be communicating with the public. That’s difficult. We’re not accustomed to doing that. But there’s a broader need, within journalism, for various kinds of transparency.
What have you taken away from all the discussions on AI these past few weeks?
There’s a tendency in the news arena to think defensively about AI and the damage it can do or the dangers it presents, and that makes a lot of sense because our reputations are fragile, and there are also so many threats to the industry right now, from censorship and government pressure to changing business models and technological pressures.
But I think it’s important for us to also think creatively and optimistically about new opportunities that might be emerging. The Global Investigative Journalism Network just had a series of discussions and events focused around using AI for investigative reporting. There’s a lot of exciting potential in that arena, as well as on the business side, for us to develop new personalized and customized products for news consumers that fit how they actually want to consume news and information. There are also new AI tools that can help us market our work more effectively within this competitive information ecosystem. So I think there’s a lot of room for being creative. And we shouldn’t let our fears and our skepticism prevent us from being optimistic.