Art by Daniel Zender

Mindless Reply

In the realm of political disinformation, AI-generated deepfakes are not such a big problem. Our susceptibility to gossip is.

June 10, 2024


Last spring, the press panicked. “Republicans slam Biden re-election bid in AI-generated ad,” Axios reported. “Republicans counter Biden announcement with dystopian, AI-aided video,” per the Washington Post. Recently, the AP offered an analysis: “President Joe Biden’s campaign and Democratic candidates are in a fevered race with Republicans over who can best exploit the potential of artificial intelligence, a technology that could transform American elections—and perhaps threaten democracy itself.” Elsewhere, more stories have piled up about the terrifying prospect of AI-generated deepfakes—including, notably, a piece in the Financial Times describing a video that circulated in Bangladesh, made using HeyGen, a tool that can produce news-style clips with AI-generated avatars for as little as twenty-four dollars a month. “Policymakers around the world are worrying over how AI-generated disinformation can be harnessed to try to mislead voters and inflame divisions,” the story went.

In much of the coverage, there’s an undercurrent of fear—implied or expressed outright—that AI-generated deepfakes and hoaxes are (or soon will be) incredibly realistic and utterly convincing. But that may be mere techno-panic: even if ubiquitous and inexpensive AI tools have made it easier than ever to create misinformation, it’s unclear whether AI is making much of a difference in politics. (In Bangladesh, Prime Minister Sheikh Hasina and her party were reelected by an overwhelming majority.) For the press, focusing on the technical power of AI may be a mistake.

Carl Miller—the research director at the Centre for the Analysis of Social Media at Demos, a political think tank based in the United Kingdom—told me that, for the most part, there hasn’t been an explosion of AI fakes trying to change people’s views. Many of us have “a fairly naive idea about how influence operations actually work,” he said. People may imagine that bad actors will spread “convincing yet untrue images about the world to get them to change their minds.” In reality, influence operations are designed to “agree with people’s worldviews, flatter them, confirm them, and then try to harness that.”

That’s why, according to Renée DiResta of the Stanford Internet Observatory, the most common type of AI-generated “chatbot” or fake account on X is what is known as a “reply guy”—a persona with no real thoughts or opinions of its own that simply shows up to echo a post. AI chatbots can sometimes create a “majority illusion,” DiResta explained, giving the impression that a certain view is more common than it really is. Through what she calls the “mechanics of influence,” modern social media becomes a blend of the old broadcast model and personal gossip networks, combining the reach of the former with the interpersonal connection of the latter.

That means that how realistic a deepfake might be—the presumed value proposition of AI—isn’t critical to what makes it convincing. (In fact, how sleek a deepfake looks may undermine its credibility; as Henry Ajder, an expert on synthetic media and AI, told The Atlantic, it’s “far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors.”) More important is who disinformation comes from, how it makes people feel, and whether that plays into their existing beliefs. Influence of this kind, Miller said, is not about truth or facts but about kinship. As he told me, “It’s going to talk to them about meaning in their lives, where they fit in the world. It’s going to confirm the grievances they have. It’s all to do with identity and emotion and social links.”

Meta has taken action in the past against large networks of fake accounts, at least some of which appeared to come from China and Russia. AI could make it faster and easier to create chatbot networks. But a more powerful form of influence, Miller believes, will be largely unseen—and therefore difficult to moderate—because it will take place in private groups and one-on-one conversations. “Maybe you’d recruit friendships on Facebook groups, but you could easily move that to direct chat, whether on WhatsApp or Instagram or Signal,” he said. 

Some argue that AI may actually help in the fight against disinformation: Yann LeCun, a leading thinker in AI and Meta’s chief AI scientist, made that argument in Wired recently; five years ago, he said, about a quarter of all hate speech and disinformation that Facebook removed was identified by AI, and last year it was closer to 95 percent. Miller is not as confident, however, that “any kind of automated model we can deploy would reliably spot either generated imagery or text.” For now, platforms have instituted AI-transparency rules: Meta recently introduced “Made with AI” tags; YouTube requires that creators disclose when posting “realistic” material made with altered or synthetic media. Given the sheer quantity of material that gets uploaded, though, those policies could be difficult to enforce—and seemingly impossible to apply to one-on-one influence operations.

Perhaps the greatest risk—more than convincing AI-created deepfakes, or AI “friends”—is what some call “the liar’s dividend,” through which politicians (or any bad actors) benefit by claiming that something is a deepfake even if they know it isn’t, gambling on the public’s general mistrust of online content. The fear that “everyone’s going to spend the next year having their worldview warped and destroyed by this avalanche of fake imagery,” Miller said, isn’t necessarily merited. More fundamental than the technological problem is a human one: that our minds will collapse into our beliefs; that we’ll share messages more because they’re funny or rage-inducing than because they’re true; that our trust in almost any source of information will disappear.


Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection of media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.