Art by Darrel Frost

Writing the AI Rulebook

The pursuit of collective commitment, with journalism’s future at stake 

October 16, 2023

The astounding evolution of artificial intelligence—from sci-fi plot point to ubiquitous consumer product in less than a year—has been matched by frantic befuddlement over what to do about it. The last similarly transformative technology, the internet, took decades to fully overthrow the media’s old order, long enough for endless course corrections to deal with its advance. In the case of AI, journalists won’t have nearly as much time for chin-stroking. Our industry must coalesce around a set of standards soon, or it will be too late. The AI bosses will already be in charge. 

The urgency poses some problems. Journalism is a tradition-bound field that prefers to evolve generationally; its sense of editorial independence tends to make higher-ups resistant to collective action. Publications led by the media’s less scrupulous employers are likely to follow the overwhelming economic logic to maximize the amount of “content” generated by AI, minimize the extent of human labor, and reap the benefits of lowered production costs. There is a limit to that, of course—perhaps a team of ten writers and three editors can become a team of one writer and two editors who look over AI content before publishing it—but wherever the line falls, the losers will be journalists (who are unemployed), readers (who are fed less actual reporting, which algorithms still cannot do), and civil society (as politicians and powerful institutions face less coverage and as misinformation encounters less competition in the media marketplace). The task of building a firewall against AI’s unregulated spread through the media will require a union-led effort that rallies a broad coalition around a shared conviction: that journalism must always be by and for human beings. 

I take this view not just as a labor partisan—I’m an elected council member of the Writers Guild of America, East—but as a realist. Already, unions have found themselves up against AI-produced-content fiascoes. In January, sharp-eyed readers of CNET noticed that the site had begun publishing AI-generated explainers carrying the byline “CNET Money Staff.” An uproar ensued. Two weeks later, more than half of those stories had corrections appended. In May, CNET’s editorial staff announced that they were unionizing with the Writers Guild, a campaign motivated in part by their dismay at how AI was being used. The company’s error-ridden AI rollout “injured the reputations of CNET journalists and the media site as a whole,” the union told me, “but management has seemed oblivious to the harm already caused and has gone on to tout the positive attributes of artificial intelligence in internal promotions.” (Asked for comment, a CNET spokesperson sent a link to the publication’s AI policy, which states that “every piece of content we publish is factual and original, whether it’s created by a human alone or assisted by our in-house AI engine.”)

An equally ham-fisted implementation of AI took place at G/O Media in July, when, with less than a day’s notice to journalists, management started publishing AI-written content that achieved a spectacular level of uselessness: a chronological listicle of Star Wars movies and shows, for example. James Whitbrook, the deputy editor of Gizmodo and io9, where the piece appeared, drew up an eighteen-point objection note pointing out inaccuracies. He called the piece “shoddily written,” “embarrassing,” and “dogshit.” (When reached for comment, Jim Spanfeller, the CEO of G/O Media, said in a statement that “it would be irresponsible and a complete lack of our fiduciary responsibility to not explore the use of this revolutionary technology” but that “we do not see AI as a replacement for editorial created by journalists.” Around the same time, G/O Media laid off the editors of Gizmodo en Español and began using AI to translate Gizmodo articles into Spanish.)

The dangers of handing newsroom managers sole power over the use of AI are continuing to unfold elsewhere. Google is pitching an AI tool that writes stories to the upper crust of the industry—the Washington Post, Wall Street Journal, and New York Times, all of which are eyeing the possibilities of AI-supported content. A Washington Post spokesperson said, “We are in the process of putting together our internal guidelines, and they will be coming soon.” A spokesperson for the Journal pointed to remarks by Robert Thomson, the CEO of News Corp, the paper’s corporate parent, about how AI companies should pay for the material they use to train algorithms—a potential new revenue stream for news outlets, if not one that would ensure journalists keep their jobs. By the end of August, CNN found that at least a dozen major media companies, including the Post and the Times, were blocking AI companies’ web crawlers from accessing their articles, an opening salvo in what will surely be a war over compensation. The Times declined to comment on its other AI plans. The staff of all three newsrooms are unionized with the NewsGuild; each will have a crack at negotiating AI rules in upcoming contracts.


The duty to regulate AI that has so far been thrust upon labor unions has played out visibly in Hollywood—and as hard as that fight was, the challenges surrounding AI in journalism, where no union is dominant enough to set standards for the entire industry, appear even harder. The management of CNET is forging ahead with its use of AI; as the union prepares for contract negotiations, organizers have made that a key point of concern. The union is seeking “editorial discretion to not use AI should it fail to meet high publishing standards,” they told me, as well as “the right to opt out of using AI, clarity about what data sources have been used to train our AI, ongoing collaboration and input into how AI is being used on the site, and assurance that AI won’t be used to modify content after employees leave CNET.” At G/O Media, union members have expressed their discontent to management: “AI tools should only be used to augment the writing and reporting of humans, not to replace them,” the committee told me in a statement. But their contract does not expire until 2025. In the meantime, the Writers Guild has condemned G/O Media’s use of AI-generated articles and petitioned outlets to promise they won’t swap out employees for AI tools; the union has also asked employers across the board to engage with workers on AI rules, even if they’re not in the midst of contract bargaining. Spanfeller acknowledged that the union views AI “as an existential threat to their existence.” He added, “We do not see it this way but we can certainly understand that new things can be scary.” 

Even as union contracts aim to build guardrails at individual companies, however, truly preventing AI from permanently altering our industry for the worse demands a wider effort—one that takes seriously the threat posed by AI to journalistic ethics. At conferences and panel discussions, and on internal committees, union organizers have joined with reporters, editors, and journalism scholars in pursuit of answers. The way that executives have inelegantly rushed into AI has provoked an initial panic in newsrooms: Ban it! Generally, though, that response has given way to more nuanced conversations about what legitimate use cases for AI could be: Searching troves of FOIA-ed documents for relevant material? Generating lists of possible sources for unfamiliar topics? Copyediting? Illustrations? Instantaneous A/B testing of headlines? The closest I have heard to a consensus is that AI could be a wonderful way to help reporters do their jobs—but that its role needs to be carefully negotiated, its deployment guided by an ethical code that has yet to be written. The good news is that the task of creating ethical rules for AI in journalism and the power struggle unions are waging to prevent its abuse can be combined into one. 

Here is a starting point: publishing “journalism” written by AI should be deemed unethical, even if it has been looked over by an editor prior to publication. Not because AI cannot produce a convincing simulacrum of a news story, but because AI lacks the accountability of a human journalist—and it can never be accountable, no matter how refined its algorithms get. An editor cannot have a difficult conversation with AI about why it made certain choices in a story; AI will always be presenting the mere appearance of transparency, never a true exploration of its decisions and motivations. It has no soul, it has no mind. If AI is neither accountable nor transparent, its work can never be ethically published as journalism. 

Like the internet, AI can aid journalists in their work. But it can never replace people. Not because the technology still awaits some breakthrough, but because of a qualitative difference between human beings and AI that will never change. There is little doubt that profit-driven media company managers will be happy to overlook the ethical landmine. It is up to all of us—everyone who understands that journalism is a human conversation rather than a product—to unite around standards, before we’ve lost our chance. 

Hamilton Nolan is a CJR contributor who also writes regularly for The Guardian, In These Times, and Defector. He is writing a book about the American labor movement.