With advancements in AI tools being rolled out at a breakneck pace, journalists face the task of reporting developments with the appropriate nuance and context to audiences who may be encountering this kind of technology for the first time.
But sometimes this coverage has been alarmist. The linguist and social critic Noam Chomsky criticized “hyperbolic headlines” in a New York Times op-ed. And there have been a lot of them.
“Bing’s A.I. Chat: ‘I Want to Be Alive.’” “‘Godfather of AI’ says AI could kill humans and there might be no way to stop it.” “Could ChatGPT write my book – and feed my kids?” “Meet ChatGPT, the scarily intelligent robot who can do your job better than you.” “Microsoft’s new ChatGPT AI starts sending ‘unhinged’ messages to people.” “What is AI chatbot phenomenon ChatGPT and could it replace humans?”
To better understand how ChatGPT is being covered by newsrooms, we interviewed a variety of academics and journalists about how the media has been framing coverage of generative AI chatbots. We also pulled data on the volume of coverage in online news using the Media Cloud database, and on TV news using data from the Internet TV News Archive, which we acquired via The GDELT Project’s API, to get a sketch of the coverage so far.
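For readers curious about the method, a TV-news volume query of this sort can be sketched as follows. The endpoint and parameter names follow the publicly documented GDELT 2.0 TV API, but the query string, station filters, and date range here are illustrative assumptions, and the sketch only constructs the request URL rather than fetching data:

```python
from urllib.parse import urlencode

# Public GDELT 2.0 TV API endpoint (documented at gdeltproject.org).
BASE = "https://api.gdeltproject.org/api/v2/tv/tv"

def tv_volume_url(query, start, end):
    """Build a URL requesting a timeline of coverage volume for a query."""
    params = {
        "query": query,           # search phrase plus optional station filters
        "mode": "timelinevol",    # daily volume of matching coverage
        "format": "json",
        "startdatetime": start,   # YYYYMMDDHHMMSS
        "enddatetime": end,
    }
    return BASE + "?" + urlencode(params)

# Hypothetical query: ChatGPT mentions on two cable channels, from
# the chatbot's public launch through the following spring.
url = tv_volume_url('"chatgpt" (station:CNN OR station:FOXNEWS)',
                    "20221130000000", "20230601000000")
print(url)
```

Fetching that URL (with any HTTP client) would return a JSON time series that can be charted directly.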
News reporting of new technologies often takes the pattern of a hype cycle, said Felix M. Simon, a doctoral researcher at the Oxford Internet Institute and Tow Center fellow. First, “it starts with a new technology which leads to all kinds of expectations and promises.” ChatGPT’s initial press release promised a chatbot that “interacts in a conversational way.” Next, media coverage branches into two extremes: “We have people say it’s the nearing apocalypse for industry XYZ or democracy,” or, alternatively, “it promises all kinds of utopias which will be brought about by the technology,” Simon said. Finally, after a few months, comes a more nuanced period of coverage, away from catastrophe or utopia, that discusses real-world impacts. “That’s when the cycle starts to cool off again.”
But coverage of generative AI chatbots like ChatGPT seems unlikely to cool off anytime soon.
OpenAI launched ChatGPT to the public on the last day of November 2022, and within just a few days the site had over a million users. While the media did take notice early, it wasn’t until January and February of 2023 that online news coverage really started to pick up. That was around the time that BuzzFeed announced it would be using ChatGPT for content creation, Microsoft integrated a ChatGPT-powered chatbot into its Bing search engine, and Google announced its challenger to ChatGPT, Bard.
For Subramaniam Vincent, director of the Journalism and Media Ethics program at the Markkula Center for Applied Ethics at Santa Clara University, one recurring issue with media coverage of this technology is “that it tends to be led by what the companies say this technology is going to do.” That’s a structural problem not tied just to ChatGPT. Moreover, he added, “the CEOs of these companies go to Twitter and social media and start making their own claims to control the narrative about AI.”
Early 2023 was also roughly when television stations began to air nearly daily stories of the latest developments around chatbots from OpenAI, Google, and Microsoft, according to data we pulled from the Internet TV News Archive using The GDELT Project’s interface.
Business news channels have maintained a steady clip of stories about these companies’ activities around generative AI chatbots. CNBC is leading the pack in terms of volume of coverage.
Meanwhile, cable news channels have had less coverage of the chatbots relative to business news. Among the big three networks, CNN and Fox News have platformed ChatGPT more than MSNBC. Both CNN and Fox News have looked at the impact of generative AI on education, the workplace, and jobs. The latter has also raised concerns about political bias; in one case, a host decried ChatGPT as a “woke superweapon.” Fox’s coverage has also frequently mentioned Elon Musk, who, among other comments, has said that ChatGPT was in danger of becoming “woke” and later urged a six-month hiatus on developing AI tools.
According to a Fox News poll of voters conducted in April, about half say they are either not very or not at all familiar with AI programs like ChatGPT, making accurate news coverage all the more important. And some coverage across TV and online news has been nuanced, seeking to inform audiences about how to navigate the new technology, identify hallucinations, and double-check statements the AI produces. Reporters have also delved into issues of algorithmic bias, ethical considerations, the spread of misinformation, and possibilities for regulating misuse. Still other coverage continues to feel like science fiction, promising everything from the end of work to the destruction of humanity, and translating uncertainty into fear rather than understanding.
A hype cycle?
While it seems as if ChatGPT is ushering in a new era, there are also faint echoes of the coverage of Bitcoin and the promise of cryptocurrencies to change banking and commerce as we know them. Data from Media Cloud’s news database suggests that, just six months after launching, ChatGPT is already seeing airtime similar to that given to cryptocurrencies in 2021, when Bitcoin prices peaked, over a decade after its public release in 2009.
Some observers have felt dissatisfied with the media coverage. “Are we in a hype cycle? Absolutely. But is that entirely surprising? No,” said Paris Martineau, a tech reporter at The Information. The structural headwinds buffeting journalism (the collapse of advertising revenue, shrinking editorial budgets, smaller newsrooms, the demand for SEO traffic) help explain the “breathless” coverage and a broader sense of chasing content for web traffic. “The more you look at it, especially from a bird’s-eye view, the more it [high levels of low-quality coverage] is a symptom of the state of the modern publishing and news system that we currently live in,” Martineau said, referring to the sense that newsrooms need to be covering every angle, including sensationalist ones, to gain audience attention. In a perfect world, all reporters would have the time and resources to write ethically framed stories on AI that don’t read like science fiction. But they do not. “It is systemic,” she added.
It’s possible to get a sketch of how the coverage of ChatGPT compares with that of other new technologies. As the chart above shows, coverage of ChatGPT is already significantly outstripping a range of other hyped technologies like “virtual reality” and “deep fakes,” although coverage of “cryptocurrency” is much higher (particularly after the collapse of FTX).
Why might the volume of ChatGPT coverage have overtaken that of other new technologies like VR and “deep fakes”? “One thought I have is that because this new tool has direct implications for journalism, that could be one reason why there’s been such an overwhelmingly huge amount of attention in the media,” said Jenna Burrell, director of research at Data & Society. “I would guess that’s part of it.” Another is that ChatGPT and other generative AI tools have greater potential to upend the creative world as we know it.
Perhaps there is an argument that unlike cryptocurrencies, chatbots and large language models wield real potential to change society. Already we can see the ways in which ChatGPT is transforming education and being incorporated into the day-to-day workflows of knowledge workers in a large range of sectors.
But what’s concerning for Burrell has been the framing of much of this reporting. “I’ve taken a lot of [media] requests and have felt that there was a need for some clarity about how these technologies work, and a need to fight some of the really outrageous hype,” she said. There’s been an anthropomorphic tendency toward “attributing thinking, knowing, writing, and innovating to this non-human tool,” as in the New York Times story claiming Bing’s chatbot wanted to “be alive.”
One concern with this framing is that the public gets the science-fiction version of the AI story, like some of the follow-up coverage of AI pioneer Geoffrey Hinton’s interview on its dangers, and ends up being cut out of the important discussions around ethics, usage, and the future of work.
“It’s the Hollywood-ification of the public’s understanding of AI,” said Nick Diakopoulos, associate professor in communication studies and computer science at Northwestern University. We have an image of active robots from the movies. “You would hope that the news coverage wouldn’t simply just bolster that kind of entertaining view of the technology, that it would take a little bit more of a critical look.”
Towards better coverage
How could we imagine better media representations of generative AI going forward? For Burrell of Data & Society, who thinks we’re still in the hype phase of the cycle on generative AI chatbots, more sober coverage of the issues that matter is needed. One story that seems to have gotten lost is the “incredible consolidation of power and money in the very small set of people who invested in this tool, are building this tool, are set to make a ton of money off of it.” We need to move away from focusing on red herrings like AI’s potential “sentience” and toward covering how AI is further concentrating wealth and power.
More sober reporting about what these tools do and how they work is needed to cut through the fog of science fiction. Generative AI tools like ChatGPT, trained on immense amounts of data, are skilled at guessing the next word in a sequence but don’t “think” in the way humans do. “So it’s literally just walking down the line statistically, looking at the statistical distribution of words that have already been written in the text, and then adding one next word,” Diakopoulos said. More reporting should outline how these technologies actually work, and how they don’t.
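Diakopoulos’s point can be made concrete with a toy model. The sketch below is a simple bigram counter over a made-up corpus, not how ChatGPT is actually built (real models condition on far longer contexts with learned weights), but it illustrates that “prediction” here is statistics, not thought: the next word is just the most frequent continuation seen in the training text.

```python
from collections import Counter, defaultdict

# Tiny stand-in for the web-scale text a large language model trains on.
corpus = ("the cat sat on the mat . the cat ate the fish . "
          "the dog sat on the rug .").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most statistically likely next word; no 'thinking' involved."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # "cat": the most frequent word after "the" in this corpus
```

Walking a sentence forward word by word with `next_word` produces fluent-looking text purely from frequency counts, which is the intuition behind the “statistical distribution of words” Diakopoulos describes.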
That means questions of who gets to train these models, and what flaws and biases will potentially be baked in, are ones newsrooms need to be covering. Moreover, editors need to reevaluate whose comments on generative AI are considered newsworthy.
Sensationalized coverage of generative AI “leads us away from more pressing questions,” Simon of the Oxford Internet Institute said: for instance, the potential future dependence of newsrooms on big tech companies for news production, the governance decisions of these companies, the ethics and bias questions relating to models and training, the climate impact of these tools, and so on. “Ideally, we would want a broader public to be thinking about these things as well,” Simon said, not just the engineers building these tools or the “policy wonks” interested in this space.
Newsrooms should lay down ground rules, perhaps in their style guides, to work out coverage strategies moving forward, said Martineau of The Information. For example: no anthropomorphising chatbots. This parameter-setting “could help cool the fires of this hype cycle,” she said.