Nicole Blanchard, an investigative reporter at the Idaho Statesman, is also identified, internally, as an AI champion. “It’s kind of formal, in that it’s a role,” she told me, “but it isn’t so formal that I have certain responsibilities.” The title was bestowed by McClatchy, the Statesman’s parent company; AI champions dot the country, across the roughly thirty newsrooms under its ownership. Essentially, as Blanchard sees it, the appointment is for people “willing to try different AI technology or be aware of different AI technology, and report back, and encourage people to check it out. Or be upbeat about it to the newsroom.” Blanchard has a genuine interest, she told me, and has been active in trying out digital tools: for a recent piece on a podcast called The Ranch—“a guy in a garage,” as she put it, interviewing Idaho’s governor, lieutenant governor, and “others who refuse to give interviews to legacy media”—she used an AI assist to compile a list of all the guests. “I was stoked,” she said. Still, “I’m a terrible champion,” she wanted me to know. “I have not found a whole lot that’s super useful.”
Blanchard is also a steward in her union, which is part of the Pacific Northwest Newspaper Guild, as are several other McClatchy newsroom unions in Oregon and Washington. Since mid-2025, the Pacific Northwest Newspaper Guild has been in negotiations with McClatchy about the use of AI in news production. No one at the table has pushed to ban AI completely, but the company’s rollout of AI—alongside staff cuts, including the elimination, in November, of McClatchy’s national news team—has taken some of its journalists by surprise. In one case, at the Tacoma News Tribune, Kristine Sherred—the paper’s food reporter, and the cochair of the Washington State NewsGuild—told me, colleagues noticed that “a very fun little goofy story” about a dog in a motorcycle sidecar had been replicated and repackaged, with entire paragraphs subtly rewritten. In the original version, a local reader would understand that Buddy, the golden retriever, was a resident of Gig Harbor; the AI version, still on the website, served to make the piece more viable as clickbait for the greater internet.
“It was a test,” Sherred said. “But it was a test that was happening with our work, and we weren’t informed of it whatsoever before it was published on the same website that we publish, you know, human-reported and -edited work.” During a staff meeting, reporters voiced a “weird question,” she recalled, that boiled down to “Why? You know, why would you do that?” Sherred said that, when pressed, newsroom senior staff responded to the effect of “Yeah, we’re doing this stuff.”
Other examples piled up: This past summer, Sherred and her colleagues noticed that items were being rearranged on the Tacoma News Tribune homepage, thanks to AI-fueled testing. “If our own work is being buried by random stuff that was pulled on there through an AI tool,” Sherred told me, “we were concerned about that.” Then, she said, McClatchy started rolling out more of the “summary-type AI-generated stories where they’re essentially using our work to feed something else.” AI-created listicles were cropping up, drawing from McClatchy-journalist-created work à la the golden retriever situation—but sometimes getting details slightly wrong. Other McClatchy reporters spotted curious “AI-assisted reporter” listings—including for the Miami Herald, another McClatchy paper, which the Miami New Times, an alt-weekly, flagged in an article headlined “The Miami Herald Is Hiring an AI-Assisted Reporter and We Have Questions.”
A McClatchy employee, speaking anonymously for fear of jeopardizing their position, told me that the company has been encouraging staff to use AI tools managed by Elvex, a company with the mission of “meaningful, company-wide AI adoption.” A case study on Elvex’s website, focused on McClatchy, touted that “there has been a palpable mindset shift” and that “employees now approach problems by asking, ‘Could that be done with AI?’” (The anonymous employee characterized Elvex’s tools as McClatchy’s version of an in-house ChatGPT, and described using it to help format story pitches.) Another McClatchy journalist mentioned the use of Nota, a generative “vision agent.” Blanchard told me, “The company has been moving really quickly on AI and put it to the forefront. We’ve all really wondered how we can make sure that our voices are being heard.”
These pain points have come up in bargaining talks. Bryan Clark, an opinion writer at the Idaho Statesman and a member of the Idaho NewsGuild, told me that after some back-and-forth, the union got the company to agree to a few provisions: that, for instance, “generative AI shall not be employed to perform newsgathering—activities such as interviewing sources and submitting public records requests,” according to the contract-in-progress. But McClatchy has so far refused to agree not to publish deepfakes or to put a codified AI ethics policy into the contract. The company has also declined to set a policy about issuing corrections to erroneous AI-created work. (Neither McClatchy nor the lawyer who worked on this contract for the company responded to my requests for comment.)
“We know that generative AI is here to stay, and we’re not even asking McClatchy to write it off completely, because we know that that won’t happen,” Karlee Van De Venter, a union steward and a service journalist at the Tri-City Herald, in Washington State, told me. “Some of us would really like to ask for that, but we’re realistic and we know that that’s not it. What we’re asking for is possible. It’s agreeable. It’s within McClatchy’s power. They just won’t. They don’t want to be held to the standard. They’d rather embrace the flawed emerging technology than support its staff and support the tenets of journalism that staff is trying to uphold.” Van De Venter grew up in Yakima and has lived on the east side of the Cascades their entire life. “Our day-to-day is supposed to be difficult because what we do is difficult,” they said. “Introducing this idea of ‘What’s going to make your job easier?’ is just so counterintuitive to what journalism is to me.”
Notably, employees at McClatchy papers cannot bargain as a united coalition, even though “that would make sense for us,” Van De Venter said, since at the corporate level, “they make policies that impact the entire company.” Those in the Pacific Northwest have kept a watchful eye on the Miami Herald, whose union finished bargaining before its own contract was up for renegotiation. In Miami, AI protections allow use with anything from transcription and spell-check to data analysis, summarization, and display. “The stories you see on McClatchy homepages are chosen by local editors with help from an AI algorithm,” the Herald site discloses. “It’s not necessarily terrible,” Clark said, of the AI policy. “But they can change it anytime they like.”
Certainly, as any AI champion will tell you, setting the terms can be complicated: even in conversation around deepfakes, Clark said, bargainers have been worried that they might inadvertently make it harder to present simulations of flooding in a floodplain, or crime scene reconstruction. “What we’re actually trying to ban is images and videos produced by AI that a reasonable viewer might confuse for real video, real audio,” he told me.
Bargaining continues. In the latest round, McClatchy returned the draft contract with the following sentence crossed out: “Any content that is created by Generative AI shall only be done at the direction of and with the editorial review of human beings, and no such content shall be published without such involvement.” In its place, the company suggested inserting the word substantially between is and created. As Clark told me, “Until we have solid language here, we’re not going to feel safe about the integrity of our jobs—or the language our readers read.”