Fighting the Machine

Journalists across the United States are fighting for contracts that address AI use: “We don’t want it to be done in our name, literally.”

April 30, 2026
Adobe Stock / Illustration by Katie Kosma

On a chilly day in early April, outside the ProPublica offices in Lower Manhattan, dozens of union members staged a daylong strike—the first of its kind authorized at a major news organization to address, among other concerns, how AI would be used with its work. On the sidewalk, Agnel Philip, a data reporter at ProPublica and unit chair for the ProPublica Guild, told me that so far ProPublica has been conservative with AI. But “while the technology is new, and while places are sort of figuring out how they want to use it,” he said, “we want a seat at that table to make sure that its implementation is done well.” 

Across the United States, journalists have been fighting for a say in how their newsrooms use AI, with varying degrees of success. On April 15, unionized staffers at EdSource, a nonprofit outlet covering education in California, held a lunchtime rally demanding that their contract include protections against AI use, such as the right of reporters to remove their bylines from stories that involved AI without their consent, and a requirement that the union, in addition to management, approve the use of generative AI tools in the newsroom. A day later, Times Guild members brought newspapers, stenciled in bright red with the phrase REAL A.I. GUARDRAILS, to an all-company meeting at the New York Times. And lately, employees at McClatchy newsrooms have spoken out against the company’s use of a “content scaling agent,” an AI tool powered by Anthropic’s Claude, to repackage reporters’ stories for specific audiences while retaining their bylines.

Unions for McClatchy papers—the Miami Herald, Sacramento Bee, Kansas City Star, and Idaho Statesman—as well as the Washington State News Guild, which represents several newsrooms, have filed grievances against the company, alleging that the CSA tool, as it’s known, violates contract provisions that require advance notice for major technological changes. McClatchy’s unionized journalists are organized not under a single guild but under several regional ones, per company policy—which means that how and whether the use of this tool is disclosed in articles published by McClatchy depends on whether the newsroom has a union, and what its contract provisions are. Articles produced using the CSA tool at the Centre Daily Times—a McClatchy paper in Pennsylvania, which is not unionized—read “Reporting by (reporter’s name). Produced with AI assistance.” At the Sacramento Bee, which ratified a union contract that included AI provisions in February, reporters are withholding their bylines from CSA stories. (Instead, the bylines read “Edited by (editor’s name), story produced with AI assistance,” and “The Sacramento Bee staff, summary produced with AI assistance.”)

“We don’t want it to be done in our name, literally,” Ariane Lange, an investigative reporter at the Bee and the vice chair of its union, told me. “We don’t want the public to think that we sign off on this, because we do not.” Lange continued: “I’ve covered traffic deaths in the city of Sacramento since 2024, and I have talked to many families of people who have been killed in crashes, and that’s a very vulnerable moment. I’m assuring them they can trust me, but I also have to explain that my employer might feed their story to a chatbot and spit it back out as five key takeaways. That’s revolting to me.”

According to The Wrap’s Corbin Bolies, McClatchy executives have indicated that they would like more control over the labeling of reporters’ work, and Kathy Vetter, McClatchy’s chief of staff for local news, said that where a union contract doesn’t prohibit using a reporter’s byline, the company will do so for AI-generated content. Multiple McClatchy reporters who participated in a virtual training last week to help employees use the CSA tool told me that, during the session, Vetter said, “It’s your blood, sweat, and tears in there, and to let AI have credit hurts my heart.” (Vetter and McClatchy did not respond to requests for comment.)

At the Idaho Statesman, where the CSA tool was recently introduced, Bryan Clark, an opinion writer and the secretary of the Idaho News Guild, said that reporters are concerned that if they don’t agree to put their byline on AI-generated stories, they’ll fall behind in page views, which newsroom leadership tracks. “There may be some useful ways to use this tool that we’re not opposed to,” Clark told me. “But it’s not what the company is attempting to do right now.”

At the New York Times, the guild is fighting for a number of AI protections, including a share of licensing revenue, the right to remove a byline if AI was used in a piece without a reporter’s knowledge, and disclosure of AI use. Isaac Aronow, an associate editor on the games team who is on the guild’s bargaining committee and is a cochair of the Times’ AI subcommittee, said that in the most recent bargaining session, on April 27, management struck down or altered the majority of these proposals, including one stipulating that any use of AI must have human oversight.  

“They have said at the table that they care a lot about these same issues that we care about, but ultimately, they have treated our position of putting these protections in the contract with scorn and disdain,” Aronow told me. Following the publication of a book review that was revealed to have been plagiarized with the help of AI, the guild sent management a letter: “At present, the Times’ standards on AI use are woefully inadequate.” The letter, which the Times Guild shared with me, reads, “We are told to use AI ‘ethically,’ but given little guidance on what exactly that means.”

“We favor safeguards and have publicly available industry-leading protections to ensure AI tools are used ethically and with transparency, while leaving flexibility to iterate as the technology evolves,” Danielle Rhoades Ha, a Times spokesperson, told me in a statement. Sources with direct knowledge said that Times leadership proposed provisions to ensure that the company wouldn’t create digital replicas of reporters’ images, voices, or likenesses without their consent and addressed concerns about the use of AI in performance reviews. Though the company’s editorial principles prohibit use of generative AI in the newsroom without human guidance and review, the company is reluctant to include this provision in a contract because, in managers’ view, evolving technologies require flexibility.  

Tyson Evans, ProPublica’s chief product and brand officer, told me that newsrooms across the country, including ProPublica and the Times—where he used to work—are all navigating this moment of AI uncertainty together. At ProPublica, “we just don’t see the contract as a useful vehicle to make any explicit guarantees,” Evans said. “It’s just too soon to know exactly where these tools will end up. So the nature of being able to commit to things in a multiyear contract, I think, rightly gives us and other organizations pause.” Evans told me that ProPublica has promised its union that the company won’t use AI technologies to create digital replicas of employees’ work.

Some newsrooms, as large as CBS and as small as Vermont’s VTDigger, have had success in ratifying contracts with AI guardrails in the past month. But for many journalists, the battle is far from over. ProPublica’s union authorized five days of striking; four remain if the union can’t come to an agreement with management. “Now is the time to fight this fight,” Hilke Schellmann, an AI expert and journalism professor at New York University, told me. “Because once it becomes standard in a union bargaining agreement across the journalism industry that journalists or employees have no say over AI tools, that quickly becomes a standard.”

Riddhi Setty is a Delacorte fellow at CJR.