Last spring, the press panicked. "Republicans slam Biden re-election bid in AI-generated ad," Axios reported. "Republicans counter Biden announcement with dystopian, AI-aided video," per the Washington Post. Recently, the AP offered an analysis: "President Joe Biden's campaign and Democratic candidates are in a fevered race with Republicans over who can best exploit the potential of artificial intelligence, a technology that could transform American elections, and perhaps threaten democracy itself." Elsewhere, more stories have piled up about the terrifying prospect of AI-generated deepfakes, including, notably, a piece in the Financial Times describing a video that circulated in Bangladesh, made using HeyGen, a tool that can produce news-style clips with AI-generated avatars for as little as twenty-four dollars a month. "Policymakers around the world are worrying over how AI-generated disinformation can be harnessed to try to mislead voters and inflame divisions," the story went.
In much of the coverage, there's an undercurrent of fear, implied or expressed outright, that AI-generated deepfakes and hoaxes are (or soon will be) incredibly realistic and utterly convincing. But that may be mere techno-panic: even if ubiquitous and inexpensive AI tools have made it easier than ever to create misinformation, it's unclear whether AI is making much of a difference in politics. (In Bangladesh, Prime Minister Sheikh Hasina and her party were reelected by an overwhelming majority.) For the press, focusing on the technical power of AI may be a mistake.
Carl Miller, the research director at the Centre for the Analysis of Social Media at Demos, a political think tank based in the United Kingdom, told me that, for the most part, there hasn't been an explosion of AI fakes trying to change people's views. Many of us have "a fairly naive idea about how influence operations actually work," he said. People may imagine that bad actors will spread "convincing yet untrue images about the world to get them to change their minds." In reality, influence operations are designed to "agree with people's worldviews, flatter them, confirm them, and then try to harness that."
That's why, according to Renée DiResta of the Stanford Internet Observatory, the most common type of AI-generated "chatbot" or fake account on X is what is known as a "reply guy": a persona with no real thoughts or opinions of its own that simply shows up to echo a post. AI chatbots can sometimes create a "majority illusion," DiResta explained, giving the impression that a certain view is more common than it really is. Through what she calls the "mechanics of influence," modern social media becomes a blend of the old broadcast model and personal gossip networks, combining the reach of the former with the interpersonal connection of the latter.
That means that how realistic a deepfake might be (the presumed value proposition of AI) isn't critical to what makes it convincing. (In fact, how sleek a deepfake looks may undermine its credibility; as Henry Ajder, an expert on synthetic media and AI, told The Atlantic, it's "far more effective to use a cruder form of media manipulation, which can be done quickly and by less sophisticated actors.") What matters more is who disinformation comes from, how it makes people feel, and whether it plays into their existing beliefs. Influence of this kind, Miller said, is not about truth or facts but kinship. As he told me, "It's going to talk to them about meaning in their lives, where they fit in the world. It's going to confirm the grievances they have. It's all to do with identity and emotion and social links."
Meta has taken action in the past against large networks of fake accounts, at least some of which appeared to come from China and Russia. AI could make it faster and easier to create chatbot networks. But a more powerful form of influence, Miller believes, will be largely unseen, and therefore difficult to moderate, because it will take place in private groups and one-on-one conversations. "Maybe you'd recruit friendships on Facebook groups, but you could easily move that to direct chat, whether on WhatsApp or Instagram or Signal," he said.
Some argue that AI may actually help in the fight against disinformation. Yann LeCun, Meta's chief AI scientist and a leading thinker in the field, made that argument in Wired recently: five years ago, he said, about a quarter of all hate speech and disinformation that Facebook removed was identified by AI; last year it was closer to 95 percent. Miller is not as confident, however, that "any kind of automated model we can deploy would reliably spot either generated imagery or text." For now, platforms have instituted AI-transparency rules: Meta recently introduced "Made with AI" tags, and YouTube requires that creators disclose when posting "realistic" material made with altered or synthetic media. Given the sheer quantity of material that gets uploaded, though, those policies could be difficult to enforce, and seemingly impossible to apply to one-on-one influence operations.
Perhaps the greatest risk, more than convincing AI-created deepfakes or AI "friends," is what some call "the liar's dividend," whereby politicians (or any bad actors) benefit by claiming that something is a deepfake even when they know it isn't, gambling on the public's general mistrust of online content. The fear that "everyone's going to spend the next year having their worldview warped and destroyed by this avalanche of fake imagery," Miller said, isn't necessarily merited. More fundamental than the technological problem is a human one: that our minds will collapse into our beliefs; that we'll share messages more because they're funny or rage-inducing than because they're true; that our trust in almost any source of information will disappear.