The Media Today

AI, the media, and the lessons of the past

July 20, 2023
Source: DALL-E. Prompt: "old school robot drinking coffee with its coworkers in a newsroom"

On Tuesday, the American Journalism Project, a nonprofit organization that aims to revitalize local media in the US, announced a ten-million-dollar partnership with OpenAI, the company that owns and operates the artificial-intelligence tool ChatGPT. Under the terms of the arrangement, OpenAI will give the AJP five million dollars in cash, which the organization plans to disburse in grants enabling ten of the local media outlets with which it has partnered (out of a total of forty-one) to experiment with AI. Sarabeth Berman, the CEO of the AJP, told Axios that the funds will also go toward an in-house product studio that will help the project’s partners and share any lessons learned with other media outlets. OpenAI will provide the remaining five million dollars in the form of credits that local news publishers will be able to use to pay for ChatGPT and other tools (for which OpenAI charges on a per-use basis).

In a statement released before the OpenAI announcement was made, Berman said that the AJP believes it is essential that AI “is used as a tool for journalists, not as a replacement,” and that the idea behind the partnership is to “improve workflows so that editorial staff can spend more time on hard-hitting reporting and the stories that matter most.” She said that AI might also be able to help newsrooms sort through complex databases, or allow product teams to personalize content. For his part, Sam Altman, the CEO of OpenAI, said that he was “proud to back the American Journalism Project’s mission to fortify our democracy by rebuilding the local news sector.” The AJP partnership was announced less than a week after OpenAI announced a two-year deal with the Associated Press, in which the AP agreed to license some of its archive of content, dating back to 1985, to help train OpenAI’s algorithms in exchange for access to OpenAI’s tools and expertise (although the full details of the arrangement are still unclear).

Under the terms of the AP deal, OpenAI will get a license to use the AP’s content as fodder to train ChatGPT, which is built on what experts call a “large language model,” meaning that its “intelligence” (to the extent that it has any) comes from ingesting massive quantities of text and learning the statistical relationships between words, which it then uses to answer questions posed by users. As The Verge noted the best part of a decade ago, the AP was one of the first major news organizations to use automated technology in its news reports, mostly for corporate earnings reports and coverage of local sports; earlier this year, it launched an AI-enabled search tool to allow its member newsrooms and other clients to find photos and videos using natural descriptive language. Until now, however, the AP has not used AI to generate full stories.
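
To make that description a little more concrete, here is a minimal sketch, in Python, of the underlying idea: a program that counts which words tend to follow which in a body of text, then generates new text by sampling from those learned relationships. This is a toy bigram model offered purely as an illustration; ChatGPT itself is built on transformer neural networks trained on vastly larger corpora, but the “relationships between words” intuition is the same.

```python
# Toy illustration of the idea behind a language model: learn which words
# tend to follow which in a training text, then generate new text by
# sampling from those counts. This is a simple bigram model, nothing like
# ChatGPT's actual transformer architecture, but the intuition is similar.
import random
from collections import Counter, defaultdict

# A tiny stand-in "corpus"; real models train on billions of words.
corpus = (
    "the wire service filed the report and the editor read the report "
    "before the editor filed the corrections"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: the last word never appeared mid-corpus
        nxt, = random.choices(list(options), weights=list(options.values()))
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the editor read the report and the editor filed"
```

Run a few times, the sketch produces fluent-looking but sometimes nonsensical strings of words, which is also a useful intuition for why AI-generated articles have contained the kinds of errors described below.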

Kristin Heitmann, AP’s vice president and chief revenue officer, said in a statement that the service wants to ensure that newsrooms can “leverage” AI technology, but that it also intends to “ensure intellectual property is protected and content creators are fairly compensated” for their work. Indeed, one reason that OpenAI is striking such deals now could be that it is the target of several lawsuits claiming that its AI engine has breached copyright in the process of ingesting massive amounts of content. (Google’s Bard AI engine has been the target of similar suits.) Recently, more than eight thousand authors of fiction, non-fiction, and poetry signed an open letter criticizing OpenAI, Bard, and Meta’s LLaMA engine for using their writing without permission or compensation. The letter said that these technologies “mimic and regurgitate our language, stories, style, and ideas.”

In their respective announcements, the AJP and the AP both stressed that their deals with OpenAI were aimed at finding ways to adapt to recent advances in AI technology, rather than be steamrolled by them. But the bottom line, at least in the case of the AP deal, is that OpenAI will gain access to more content that it can feed into ChatGPT and its other engines, which in turn will allow those tools to create more convincing and authoritative content—some of which could well end up competing with the output of newsrooms, including the AP’s own members.

In many ways, these kinds of deals echo similar “partnerships” and funding arrangements that media companies negotiated, in the not-so-distant past, with Facebook (now known as Meta) and Google. (Indeed, Facebook has in the past provided funding for the AJP.) Those deals also involved grants and training materials that promised to help media companies “take advantage of” new technologies. The Facebook Journalism Project and the Google News Initiative each committed more than two hundred and fifty million dollars to journalism and media entities over a period of three years. But in the case of Meta, at least, most of those funding arrangements have been wound down or discontinued in recent months. When Axios asked Berman whether she was worried about AI companies one day pulling the plug on funding for news companies, she agreed that this is “totally a possibility,” but argued that OpenAI, at least for now, seems to be operating differently from Meta and Google, which committed themselves to spending millions to finance journalism in part to forestall regulation, as I argued in 2018.

But the model might not be so different after all. Like his counterparts at Meta and Google, Altman has had to appear before Congress to answer questions about the implications of his company’s technology. And the Federal Trade Commission recently opened an investigation into the company, probing whether it has put privacy and personal data at risk through its data-harvesting processes. As Axios noted, Altman tried hard, during his recent Congressional appearance, to cast himself as the responsible face of AI, but significant skepticism remains—not only about OpenAI, but about the benefits of the technology in general, and in particular its impact on the media industry. Meredith Kopit Levien, the CEO of the New York Times, told Axios last month, for example, that “you cannot put bots on the front lines of Bakhmut in Ukraine to tell you what is happening there and to help you make sense of it.”

Such skepticism hasn’t stopped other media outlets from adopting AI tools already. Some have developed, and in some cases published, clear guidelines for how they would use the technology. Others have played fast and loose, at least at first. In January, CNET, which is owned by a private equity firm called Red Ventures, came under fire for using AI tools to generate news content without telling anyone. Many of the resulting stories contained inaccuracies and also appeared to have plagiarized other sources. Following the controversy, the site said that it would pause its use of AI tools; in June, it clarified how it will (and won’t) use the tools in the future, promising that stories would not be written entirely by AI, and that hands-on reviews and testing of products would be conducted by humans. CNET also said that it would not publish images and videos generated using AI “as of now,” but would explore using AI tools to sort and analyze data and to create outlines.

More recently, G/O Media—which owns sites including Gizmodo, The Onion, and Jezebel, and is also owned by a private-equity firm—published a number of stories generated by AI engines, apparently without any input from staff editors or writers. According to Peter Kafka, of Vox, the articles contained multiple errors—a list aimed at putting the Star Wars movies in chronological order got the chronology wrong—and, partly as a result, “infuriated G/O staff and generated scorn in media circles.” Despite this reception, G/O executives told Kafka that the AI-produced stories were only one step in a much larger experiment with the technology; in an internal memo, bosses told staff that the company plans to create more of them soon. Merrill Brown, the editorial director of G/O Media, told Kafka that it is “absolutely a thing we want to do more of.” Brown and Jim Spanfeller, G/O Media’s CEO, argued that AI will be transformative for the media industry, and that ignoring it would be a terrible mistake.

Kafka added that Spanfeller told him he wants to use AI to “automate some tasks humans currently perform on the business side.” Spanfeller and Brown insisted they won’t use AI to replace G/O’s staff. (“Our goal is to hire more journalists,” Spanfeller said, while admitting that G/O laid off employees recently because of what he called a “crappy economic market.”) But that argument hasn’t persuaded G/O staff, it seems. “This is a not-so-veiled attempt to replace real journalism with machine-generated content,” one G/O journalist told Kafka, adding that the company “values quantity over quality.” Other media companies have already warned staff that AI might be coming for certain jobs: in June, The Guardian reported that Bild—a German tabloid owned by Axel Springer, which also owns Politico and Insider in the US—told employees that “the opportunities of artificial intelligence” could lead to future cuts. As Max Read, a former editor at Gawker, wrote recently, “any story you hear about using AI is [fundamentally] a story about labor automation.” 

The bottom line is that cutting deals with OpenAI—or Google’s Bard, or Meta’s LLaMA, for that matter—raises a host of potential concerns that in many ways are similar to those raised by past deals with Google and Meta. Any assistance provided to these companies could ultimately help put journalists out of business, and the risk remains that, once the media’s utility to the world of AI has been exhausted, the funding tap will quickly be turned off. Media executives can argue that having a seat at the table is better than not having one, but it might just make it easier for big tech to eat their lunch.

Other notable stories:

  • Writing for The Guardian, Hamilton Nolan argues that the current writers’ and actors’ strikes roiling Hollywood matter for every American—because the strikers are on the frontlines of fights over inequality and the regulation of AI. “Do not make the mistake of seeing these strikes as something remote from the realities of your own life,” Nolan writes. “Hollywood has many flaws, but its most redeeming quality is that it is a strongly unionized industry.” Elsewhere, Andrew Leahey argues, for Bloomberg Tax, that the strikes should encourage states to reconsider their practice of offering tax incentives to big studios and switch to subsidizing the arts directly, pointing to the New Deal-era Federal Writers’ Project as precedent. (CJR’s Jon Allsop profiled the FWP in 2020.)
  • Yesterday, Marc Tessier-Lavigne resigned as president of Stanford University—ending a process that was set in motion when the Stanford Daily, a student newspaper, reported on significant apparent flaws in past research that Tessier-Lavigne oversaw. His resignation followed the firing, last week, of Pat Fitzgerald, the top football coach at Northwestern, which was also triggered by reporting in a student paper—on that occasion, stories about abusive hazing rituals that appeared in the Daily Northwestern. The two scandals, Katie Robertson writes for the Times, “highlighted the important role of college newspapers in holding to account the powerful institutions that house them.”
  • Darryl Holliday announced that he is stepping down from City Bureau, a pioneering civic journalism nonprofit based in Chicago, eight years after co-founding it. “Helping this organization grow from a promising idea to a proven local news model has been a dream job,” Holliday writes, adding that City Bureau will “keep breaking new ground as a model for participatory local media, and an unapologetic force for inclusive, multiracial democracy.” Holliday plans to “continue to devote myself to the local news revival from a new vantage.” (He has also written about his work for CJR, here and here.)
  • Speculation continues to swirl as to the whereabouts of Qin Gang, China’s foreign minister, who hasn’t been seen in public in nearly a month; Chinese officials at one point cited “health reasons,” but have since gone quiet on the subject. Rumors now abound about an alleged extramarital affair that Qin may have had with a US-based journalist, though Foreign Policy’s James Palmer writes that the evidence for this is circumstantial, and that even if it happened, an affair would seem unlikely to doom Qin politically. 
  • And in the UK, the governing Conservative Party hit out at the Evening Standard, a newspaper in London, after it splashed an, erm, interesting air-punch photo of Susan Hall, the party’s newly anointed London mayoral candidate, on its front page. The Conservatives accused the paper of “clear mockery” and “misogyny,” adding that its photographer “heavily encouraged” Hall’s pose despite her “expressing reluctance.”

Correction: A previous version of this post implied that OpenAI’s deal with the AJP included a license to use its content to train the ChatGPT AI engine, but that is not the case.

ICYMI: A new documentary tells the story of India’s news crisis

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.