Op-Ed: AI can’t anthropomorphize its way to empathy

By attempting to imbue AI systems with human-like emotions, AI companies are being deceptive about the realities of the technology

May 28, 2024
Image: downing.amanda, CC BY 2.0, via Wikimedia Commons

Two weeks ago, OpenAI announced its latest generative AI model. Headlines extolled the chatbot’s ability to talk, laugh, and sing like a human. For many, this release harked back to the 2013 science fiction movie Her, where a human falls in love with his advanced AI voice assistant. In fact, after tech outlets reported that the new ChatGPT model’s voice was strikingly similar to that of Scarlett Johansson, who voiced the AI in the movie, Johansson herself hired a lawyer who asked OpenAI to cease using the “eerily similar” voice. While the company claims it didn’t copy Johansson, it complied and paused the ability to use the new voice.  

As a tech policy expert, millennial, and sci-fi buff, I was reminded by OpenAI’s release of another classic singing robot. Two days before OpenAI demonstrated its new AI, news broke that Chuck E. Cheese would be retiring its animatronic band. Then last week, after public outcry and an outpouring of nostalgia, the band’s retirement was postponed.

While Chuck E.’s band and GPT-4o may appear strikingly different, both are just computer programs, created by humans and designed to mimic human behavior. The stark differences between them, however, demonstrate how AI developers are increasingly pushing their systems beyond creepy humanlike impersonations and toward real human qualities like empathy. But as OpenAI unleashes yet another wave of advanced technology on society, it would be good to keep the memories of Chuck E. and friends alive.

Research published this month from Cornell Tech tested how various current generative AI models responded when prompted to display empathy. When prompted with sixty-five distinct human identities, the researchers found, the systems made “value judgments” about a person’s identity, such as being gay or Muslim. The researchers also found that the AI systems would encourage, and appear empathetic toward, ideologies like Nazism, without condemnation. They concluded that AI “can be prone to the same biases as the humans from which the information comes.” As I’ve written before, researchers have reached the same conclusion, that AI systems replicate the biases of their creators, time and time again.

Despite this reconfirmed finding, and while acknowledging the limits of the technology, the Cornell Tech researchers also stated that “it’s extremely unlikely that it (automated empathy) won’t happen.” They said that the impetus behind pushing AI systems to develop empathy is the technology’s tremendous potential in crucial and lucrative areas like healthcare and education. A quick Google search for “AI empathy company” reveals countless technology startups, and millions of dollars invested, aimed at cracking the code of training current generative AI models to develop human emotions. These efforts include a new partnership between industry behemoths LG and Google to develop an empathetic service robot.

I’m not so convinced. I think again about how Chuck E. Cheese and his friends were programmed to sing, dance, and act in a manner that mimicked human behavior. Yet, outside of the suspended reality of a child, no one was expected to believe that Chuck E. Cheese’s animatronic band were in fact real animals, with real talent, jamming in a real band. Neither were the machines ever expected to learn to become real musicians, lawyers, or teachers, nor to take on the responsibility of administering healthcare or customer service. I also dare to hope that if Chuck E. or his friends showed signs of Nazi sympathizing at work, they would be decommissioned and out of a job. 

Yet our expectations for generative AI are wildly different. I believe that by continuing to anthropomorphize AI systems as human, or as having humanlike emotions, AI companies are being deceptive about the realities of the technology. They are also eroding accountability for, and acknowledgment of, the flaws and limitations that human design has embedded in these systems.

Though I am critical, I do think that generative AI provides useful services. After initially refusing to use the tools, I decided to try them out recently when a friend recommended I seek their help while writing a proposal. I was more than a little amazed at the technology’s ability to effectively cut character counts and quickly develop a summarized title for a set of ideas. In fact, I found the tools so beneficial that I have begun recommending their use to friends and family for limited purposes. 

But now my recommendations will come with a new caveat: Would you be so willing to trust ChatGPT if, rather than flirting, it blinked at you, waiting with anticipation for your prompts like Chuck E. behind the microphone?

Anika Collier Navaroli is a senior fellow at the Tow Center for Digital Journalism at Columbia University and a Public Voices Fellow on Technology in the Public Interest with the OpEd Project. She previously held senior policy official positions at Twitter and Twitch. In 2022, she blew the whistle about her warnings to Twitter that went unheeded in the lead-up to the January 6 attack on the Capitol, and about the platform’s ultimate decision to suspend former president Donald Trump.

About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.
