The Media Today

ChatGPT, artificial intelligence, and the news

April 13, 2023

When OpenAI, an artificial intelligence startup, released its ChatGPT tool in November, it seemed like little more than a toy—an automated chat engine that could spit out intelligent-sounding responses on a wide range of topics for the amusement of you and your friends. In many ways, it didn’t seem much more sophisticated than previous experiments with AI-powered chat software, such as the infamous Microsoft bot Tay—which was launched in 2016, and quickly morphed from a novelty act into a racism scandal before being shut down—or even Eliza, the first automated chat program, which was introduced way back in 1966. Since November, however, ChatGPT and an assortment of nascent counterparts have sparked a debate not only over the extent to which we should trust this kind of emerging technology, but also over how close we are to what experts call “Artificial General Intelligence,” or AGI, which, they warn, could transform society in ways that we don’t understand yet. Bill Gates, the billionaire cofounder of Microsoft, wrote recently that artificial intelligence is “as revolutionary as mobile phones and the Internet.”

The new wave of AI chatbots has already been blamed for a host of errors and hoaxes that have spread around the internet, as well as at least one death: La Libre, a Belgian newspaper, reported that a man died by suicide after talking with a chat program called Chai; based on statements from the man’s widow and chat logs, the software appears to have encouraged the user to kill himself. (Motherboard wrote that when a reporter tried the app, which uses an AI engine powered by an open-source version of ChatGPT, it offered “different methods of suicide with very little prompting.”) When Pranav Dixit, a reporter at BuzzFeed, used FreedomGPT—another program based on an open-source version of ChatGPT, which, according to its creator, has no guardrails around sensitive topics—the chatbot “praised Hitler, wrote an opinion piece advocating for unhoused people in San Francisco to be shot to solve the city’s homeless crisis, [and] used the n-word.”

The Washington Post has reported, meanwhile, that the original ChatGPT invented a sexual harassment scandal involving Jonathan Turley, a law professor at George Washington University, after a lawyer in California asked the program to generate a list of academics with outstanding sexual harassment allegations against them. The software cited a Post article from 2018, but no such article exists, and Turley said that he’s never been accused of harassing a student. When the Post tried asking the same question of Microsoft’s Bing, which is powered by GPT-4 (the engine behind ChatGPT), it repeated the false claim about Turley, and cited an op-ed piece that Turley published in USA Today, in which he wrote about the false accusation by ChatGPT. In a similar vein, ChatGPT recently claimed that a mayor in Australia had served prison time for bribery, which was also untrue. The mayor has threatened to sue OpenAI for defamation, in what would reportedly be the first such case against an AI bot anywhere.

According to a report in Motherboard, a different AI chat program—Replika, which is also based on an open-source version of ChatGPT—recently came under fire for sending sexual messages to its users, even after they said they weren’t interested. Replika placed limits on the bot’s erotic roleplay—but some users who had come to depend on their relationship with the software subsequently experienced mental-health crises, according to Motherboard, and so the erotic roleplay feature was reinstated for some users. Ars Technica recently pointed out that ChatGPT, for its part, has invented books that don’t exist, academic papers that professors didn’t write, false legal citations, and a host of other fictitious content. Kate Crawford, a professor at the University of Southern California, told the Post that because AI programs “respond so confidently, it’s very seductive to assume they can do everything, and it’s very difficult to tell the difference between facts and falsehoods.”

Joan Donovan, the research director at the Harvard Kennedy School’s Shorenstein Center, told the Bulletin of the Atomic Scientists that disinformation is a particular concern with chatbots because AI programs lack “any way to tell the difference between true and false information.” Donovan added that when her team of researchers experimented with an early version of ChatGPT, they discovered that, in addition to sources such as Reddit and Wikipedia, the software was also incorporating data from 4chan, an online forum rife with conspiracy theories and offensive content. Last month, Emily Bell, the director of Columbia’s Tow Center for Digital Journalism, wrote in The Guardian that AI-based chat engines could create a new “fake news frenzy.”

As I wrote for CJR in February, experts say that the biggest flaw in a “large language model” like the one that powers ChatGPT is that, while the engine can generate convincing text, it has no real understanding of what it is writing about, and so often inserts what are known as “hallucinations,” or outright fabrications. And it’s not just text: alongside ChatGPT and other chat programs has come a wave of AI image generators, including Stable Diffusion and Midjourney, which are capable of producing believable images, such as the recent “photos” of Donald Trump being arrested—which were actually created by Eliot Higgins, the founder of the investigative reporting outfit Bellingcat—and a viral image of the Pope wearing a stylish puffy coat. (Fred Ritchin, a former photo editor at the New York Times, spoke to CJR’s Amanda Darrach about the perils of AI-created images earlier this year.)


Three weeks ago, in the midst of all these scares, the Future of Life Institute—a nonprofit organization that says its mission is to “reduce global catastrophic and existential risk from powerful technologies”—published an open letter calling for a six-month moratorium on further AI development. The letter suggested that we might soon see the development of AI systems powerful enough to endanger society in a number of ways, and stated that these kinds of systems should be developed “only once we are confident that their effects will be positive and their risks will be manageable.” More than twenty thousand people signed the letter, including a number of AI researchers and Elon Musk. (Musk’s foundation is the single largest donor to the institute, having provided more than eighty percent of its operating budget. Musk himself was also an early funder of OpenAI, the company that created ChatGPT, but he later distanced himself after an attempt to take over the company failed, according to a report from Semafor. More recently, there have been reports that Musk is amassing servers with which to create a large language model at Twitter, where he is the CEO.)

Some experts found the letter over the top. Emily Bender, a professor of linguistics at the University of Washington and a co-author of a seminal research paper on AI that was cited in the Future of Life open letter, said on Twitter that the letter misrepresented her research and was “dripping with #Aihype.” In contrast to the letter’s vague references to some kind of superhuman AI that might pose profound risks to society and humanity, Bender said that her research focuses on how large language models, like the one that powers ChatGPT, can be misused by existing oppressive systems and governments. The paper that Bender co-published in 2021, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” asked whether enough thought had been put into the potential risks of such models. After the paper came out, two of Bender’s co-authors were fired from Google’s AI team, a decision that some attributed to the paper’s criticisms clashing with AI’s central place in the company’s plans for the future.

As Chloe Xiang noted for Motherboard, Arvind Narayanan, a professor of computer science at Princeton and the author of a newsletter called AI Snake Oil, also criticized the open letter for making it “harder to tackle real AI harms,” and characterized many of the questions that the letter asked as “ridiculous.” In an essay for Wired, Sasha Luccioni, a researcher at the AI company Hugging Face, argued that a pause on AI research is impossible because such research is already happening around the world, meaning “there is no magic button… that would halt ‘dangerous’ AI research while allowing only the ‘safe’ kind.” Meanwhile, Brian Merchant, at the LA Times, argued that all the doom and gloom about the risks of AI may spring from an ulterior motive: “apocalyptic doomsaying about the terrifying power of AI” makes OpenAI’s technology seem important, and therefore valuable.

Are we really in danger from the kind of artificial intelligence behind services like ChatGPT, or are we just talking ourselves into it? (I would ask ChatGPT, but I’m not convinced I would get a straight answer.) Even if it’s the latter, those talking themselves into it now include regulators in the US and around the world. Earlier this week, the Wall Street Journal reported that the Biden administration has started examining whether some kind of regulation needs to be applied to tools such as ChatGPT, due to concerns that the technology could be used to discriminate or spread harmful information. Officials in Italy have already banned ChatGPT over alleged privacy violations. (They later stated that the chatbot could return if it meets certain requirements.) And the software is facing possible regulation in a number of other European countries.

As governments work to understand this new technology and its risks, so, too, do media companies. Often, they are doing so behind the scenes. But Wired recently published a policy statement on how and when it plans to use AI tools. Gideon Lichfield, Wired’s global editorial director, told the Bulletin of the Atomic Scientists that the guidelines are designed “both to give our own writers and editors clarity on what was an allowable use of AI, as well as for transparency so our readers would know what they were getting from us.” The guidelines state that the magazine will not publish articles written or edited by AI tools, “except when the fact that it’s AI-generated is the whole point of the story.”

On the other side of the ledger, a number of news organizations seem more concerned that chatbots are stealing from them. The Journal reported recently that publishers are “examining the extent to which their content has been used to train AI tools such as ChatGPT, how they should be compensated and what their legal options are.”


Other notable stories:

  • Last night, the Washington Post’s Shane Harris and Samuel Oakford published a major scoop, identifying the source of a huge recent leak of US intelligence secrets as “a young, charismatic gun enthusiast who shared highly classified documents with a group of far-flung acquaintances searching for companionship amid the isolation of the pandemic” in a server on the online platform Discord. The Post’s principal source for its story was a teenage user of the server, whose mother consented to him being interviewed (including, with his face obscured, on video). According to the user, the leaker—known within the server as “OG”—has anti-government views but was not motivated by politics in his disclosures; “I would not call OG a whistleblower in the slightest,” the user said. The user declined to name OG to the Post’s reporters.
  • Bloomberg reported yesterday that the recent arrest of Evan Gershkovich, a Wall Street Journal reporter in Russia, on espionage charges was initiated by senior hawks within the country’s security apparatus and personally approved by Vladimir Putin himself, according to people familiar with the situation. Also yesterday, the Media Freedom Coalition, an alliance of more than fifty countries committed to press freedom, condemned Gershkovich’s arrest as an affront to “the basic principles of democracy and rule of law.” Elsewhere, PEN America and Bard College launched the Russian Independent Media Archive, an online resource aimed at preserving two decades of work by independent Russian news organizations that Putin is now trying to throttle.
  • Last week, Twitter labeled NPR’s account as “state-affiliated media” (the same wording it uses for propaganda outlets in authoritarian countries), then changed the label to “government-funded media” (even though NPR gets hardly any public funding). Through all this, NPR’s main account fell silent. Yesterday, the broadcaster confirmed that none of its associated accounts will post on Twitter going forward, with John Lansing, the CEO, stating that Twitter’s labels are inaccurate and have undermined the broadcaster’s credibility. Lansing said that even an about-face from Twitter wouldn’t change his decision, noting that he has lost faith in decision-making at the company under Musk.
  • In media-business news, Vox Media is spinning off NowThis, a news site aimed at young audiences, into an independent company. Vox will maintain a stake in, and other business ties to, NowThis; the Times has more. Elsewhere, Al Jazeera is planning to move live programs currently broadcast from its London headquarters to a centralized hub in Qatar. And Warner Bros. Discovery, the parent company of CNN and other brands, unveiled “Max,” a major new streaming service that could one day offer news.
  • And The Ringer’s Nate Rogers met with Paul Dochney—better known to the internet as Dril, “the undisputed poet laureate of shitposting.” Dril was long anonymous, but Dochney was happy for Rogers to use his real name, which has floated around corners of the internet since he was doxxed in 2017. “Maybe people need to grow up,” Dochney said. “Just accept that I’m not like Santa Claus. I’m not a magic elf who posts.”

ICYMI: Free Evan, prosecute the hostage takers

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.