The Media Today

Brutta Figura

On Trump, the pope, and AI.

May 12, 2025
Pope Leo XIV delivers his first official address to the College of Cardinals following his election at the Vatican on May 10, 2025. Photo by Vatican Media/Abaca/Sipa USA (Sipa via AP Images)


All this week, CJR is running a series of pieces, on our website and in this newsletter, about how AI is transforming the news media ecosystem. First up this morning, Mike Ananny and Matt Pearce speak with leaders across the industry—including Semafor’s Gina Chua, The Atlantic’s Nicholas Thompson, and the New York Times’s Zach Seward—to learn how they’re using the technology, and where they draw the line. You can read the report here.

Habemus Trumpam. Last weekend, with the world’s eyes fixed on the Vatican following the death of Pope Francis and ahead of the selection of a successor, the president of the United States inserted himself, inevitably, into the conversation by posting an apparently AI-generated image to Truth Social depicting him in papal robes, one finger solemnly raised. I say inserted himself, but it seems unlikely that Trump actually posted the image himself. (I must note, again, that this is a man who has been known to consume social media posts in printed form.) Asked at the White House about the controversy that the episode generated, Trump disclaimed any involvement, but defended the image anyway; “the Catholics loved it,” he said. (Many Catholics did not love it; Timothy Dolan, an American cardinal Trump has said would make a good pope, responded with the Italian phrase “brutta figura,” suggesting that the post left a bad impression.) One Catholic who did love the image, Trump said, was his wife, Melania, who found it “cute”: “‘Haha,’ she said, ‘isn’t that nice.’ I would not be able to be married, though. That would be a lot. To the best of my knowledge, popes aren’t big on getting married, are they? Not that we know of, no.” He then suggested that the whole controversy had been ginned up by “the fake news media,” adding, “they’re fakers.”

The controversy echoed Trumpian outrages of years past: there was the (factually plausible, if morally beside the point) denial of any involvement with a meme posted in his name for the consumption of the terminally online, and the projection of calling the reality-based media “fake” when he was the one perpetrating literal fakery; the White House account on X shared the image, using an official tool of government to deify Trump and own the libs in a manner that has gotten attention recently but was already in evidence in Trump’s first term, as I noted at the time. (The reporter who asked Trump about the pope image last week also put it to him that its official amplification might “diminish the substance” of the White House account. “Give me a break,” Trump scoffed. “You have to have a little fun, don’t you?”) The surreality of the episode also blurred the ever-thinning lines between what is a joke and what is serious, itself a long-standing Trump-era dynamic. (Trump told reporters, in the flesh, that he’d be his own favored candidate to succeed Francis, leading the Republican senator Lindsey Graham to urge the conclave to keep an open mind about the idea. “This is probably meant to be a joke,” Esquire’s Dave Holmes concluded, but “you can’t be tongue-in-cheek when you are actively licking the boot. There is just not enough tongue for both jobs.”)

The tools available for this sort of performance, however, have evolved since Trump last held office; in the interim, of course, generative AI has moved forward in leaps and bounds. Trump began amplifying apparently AI-generated memes during that period, showing him, for example, kneeling in prayer around the time that he was indicted in the New York hush-money case, or touting a purported endorsement from Taylor Swift. (Swift later endorsed Kamala Harris and cited the fake endorsement as part of the reason why. “It really conjured up my fears around AI, and the dangers of spreading misinformation,” she wrote, and “brought me to the conclusion that I need to be very transparent about my actual plans.”) Since Trump’s return to office, this content strategy has taken an even darker turn: he posted a repulsive AI video to Truth Social showing Gaza being turned into a beach resort (“No more tunnels, no more fear,” an accompanying soundtrack proclaimed, “‘TRUMP GAZA’ is finally here”), while the White House X account, capitalizing on a viral trend, posted a Studio Ghibli-style image of a crying migrant who had just been detained and had previously been convicted of fentanyl trafficking. The pope meme wasn’t even Trump’s only foray into AI last week: to mark Star Wars Day on May 4 (as in, May the Fourth be with you), the White House account posted an extraordinarily tacky image depicting Trump with bulging muscles and a lightsaber. (The saber was red, suggesting, as Luke Skywalker and other pundits pointed out, that Trump is one of the bad guys; my sense was that, as ever, the troll was the point.) Some observers have expressed concern about this trend. “Throughout his political career, Trump has embraced bold visuals,” Reuters reported, citing experts, but “unlike those rooted in reality, AI images blur fact and fiction in ways that can mislead.”

This is a fair concern, and Trump’s embrace of AI images could easily escalate; as one expert told Reuters, you can imagine what might happen were Trump to start posting more “photorealistic” images placing him in historical scenes that didn’t actually happen. Already, advances in generative AI can undermine our faith in information more generally, by making us question it even when it’s authentic. (I noted last year that one particular photo of Trump campaigning at a McDonald’s drive-through was “perhaps the most AI-generated a real image has ever looked.”) But Trump has never needed AI to blur fact and fiction in misleading ways. (Indeed, the broader McDonald’s stunt might have been the apogee of this dynamic—an event that was in every meaningful sense fake, even if it physically did happen.) And at least for now, the AI content that he is amplifying hasn’t really been misleading; it has, mostly, been obviously, cartoonishly fake. Following the pope and Star Wars posts last week, 404 Media’s Matthew Gault observed, in an astute piece, that “grotesque AI slop” is the perfect aesthetic for Trump’s second presidency. “All political movements are accompanied by artists who translate the politics into pictures, writing, and music,” Gault wrote. Trump and his allies’ embrace of crude AI “is not concerned with convincing anyone or using art to inform people about its movement. It seeks only to upset people who aren’t on board and excite the faithful because it upsets people.”

This is not to say that such material is not still dangerous and consequential: as Gault noted, it’s such a fitting aesthetic for Trump because it mirrors his “brute force attack on American democracy”; the Ghibli migrant meme, in particular, was an expression of absolute state power over a real person. Gault listed other pieces of AI-generated or -modified content that administration accounts put up last weekend, including Obama-style “HOPE” posters depicting migrants, among them the wrongfully deported Kilmar Ábrego García; these were probably more consequential than the pope and Star Wars memes, even if they got far less attention. Certainly deserving of more attention is how the new administration is using its concrete power to shape the development and use of AI as a transformational technology. Certain stories of this nature—not least Elon Musk and DOGE’s reported use of AI to cut and, allegedly, surveil the federal workforce—have gotten a lot of attention. Others, perhaps, less so, including how Big Tech’s fawning approach to the new administration is changing who has the power to generate what using AI tools, and the fact that DOGE, for all its high-tech pretensions, may actually have gutted the government’s AI expertise through its slapdash approach to mass layoffs. Over the weekend, Trump took the unusual step of firing the director of the US Copyright Office. The details are murky, but Democrats and some staffers have suggested that her ouster may be linked to a recent report that appeared to question AI engines’ need to be trained on reams of copyrighted material, and Musk’s contrary interests in this area. (He seemed recently to endorse doing away with all intellectual-property laws.)

Of course, AI poses global questions that are much bigger than Trump. (The implications of AI engines training themselves on copyrighted material and what, if anything, the law should do about this are being felt by news organizations, among other institutions and industries, on multiple continents.) Other world leaders are seeking to shape the wider conversation about how AI will be used—not least at the Vatican, which has warned about the dangers of worshipping the technology; the Rome-based tech journalist Isobel Cockerell recently described its stance as “the old religion coming out to do battle with the new one,” as I noted in this newsletter. Over the weekend, Robert Francis Prevost—the American who ended up being picked as Francis’s successor as pope last week—mentioned AI’s challenges to “the defense of human dignity, justice, and labor” in a debut address. He even suggested that AI guided his choice of papal name, Leo XIV, citing the example of his predecessor Leo XIII, who spoke out about the Industrial Revolution around the turn of the twentieth century.


Cockerell also noted that the Catholic Church “has always known how to harness technology and spectacle to inspire faith in a higher power”; the papacy, in particular, is a highly visual symbol. It’s thus no surprise that the pope has proven to be catnip for AI-generated memes; the one shared by Trump may have been more high-profile than most given who he is, but as content goes, it was hardly unique, or even that interesting. It might not even have been the most high-profile example: in 2023, an AI-generated image of Pope Francis in a Balenciaga puffer coat went mega-viral, and tricked many people (definitely not including me, nope) into thinking it was real. The tech writer Ryan Broderick argued at the time that the image might have been “the first real mass-level AI misinformation case,” and theorized that the reason “it’s fooling so many (myself initially included) is that the pope aesthetically exists in the same uncanny valley as most AI art.” Now “that everyone has been duped by the pope in a coat it’s a good time to acknowledge that there’s no way our government can regulate AI fast enough for it to matter,” he added. “Welcome to the first day of the rest of your life. Images won’t ever feel real again!”

Two years on, though, I’m not sure images feel less real to me than they did back then; Trump’s recent posting certainly hasn’t done much to throw me into a state of existential doubt. And the creation of unreal images needn’t be a harmful thing; it can be fun, too, much like Francis in Balenciaga was. Last week, the conclave that gathered to select the new pope inspired a flood of memes, AI-generated and not, because it was a huge shared global news event and, perhaps, because it unfolded in secret, and we collectively needed content to fill the void. “Trump’s AI image of himself as the pope was condemned by leading Catholics,” Vogue’s Raven Smith noted, “but the idea of a gaggle of secretive, red-robed men deliberating and scheming in an escape room is catnip for our imaginations.” Meanwhile, researchers put AI to work trying to divine who the conclave might pick. Science wrote about one such model, which ended up totally missing Prevost for lack of adequate real-world data—on which, of course, AI ultimately depends. “To us it has been a fun and stimulating exercise,” the researchers involved in the effort wrote. “We share hoping that fellow nerds will find it interesting.”


Other notable stories:

  • Last week, administrators at Columbia University and Barnard College, an affiliated institution, suspended four student journalists who covered a pro-Palestinian protest in Columbia’s main library. The suspensions—and associated threats including the loss of college housing—were subsequently lifted; Isha Banerjee has more for the Columbia Daily Spectator. Elsewhere, a federal judge ordered the Trump administration to release Rümeysa Öztürk, a Turkish student at Tufts University who was snatched off the street by immigration agents earlier this year, seemingly over an op-ed criticizing Israel that she coauthored in a student paper. (“There is no evidence here…absent consideration of the op-ed,” the judge ruled.) On Saturday, Öztürk addressed reporters and professed faith in the US justice system. (She could still be deported.)
  • An employment tribunal in the UK ruled that Saima Mohsin, a former international correspondent for CNN, can sue the network over allegations of disability discrimination and unfair dismissal. Mohsin, who now works as an anchor for the British channel Sky News, was injured on assignment in Israel in 2014 after a cameraman ran over her foot, leading her to suffer chronic physical pain and mental-health issues. She alleges that CNN fired her in 2017 after she asked for support; the network claimed that the UK lacked jurisdiction over her terms of employment, but the tribunal rejected that argument. Deadline has more details.
  • A pair of media-business updates from Semafor’s weekly media newsletter: G. Elliott Morris, who led the polling and data site FiveThirtyEight before its owner, ABC, shuttered it recently, is launching Strength in Numbers, a data-focused political Substack. (Morris was critical of ABC in an interview with Semafor, suggesting that its devotion “to not pissing people off” hamstrung his work.) Meanwhile, two other high-profile journalists—Craig Silverman and Alexios Mantzarlis—are launching Indicator, a publication on Beehiiv devoted to covering digital deception.    
  • And The Bulwark’s Will Sommer reports on how a journalist at the right-wing Daily Caller News Foundation came to be fired (just days after capturing Democratic representative Ilhan Omar telling him to “fuck off” on camera), apparently for his role in organizing a failed party to mark Trump’s first hundred days back in office. “Instead of hosting GOP royalty, the event generated two police reports from ticket buyers,” Sommer reports. “One man who was asked to help set up the event compared it to the disastrous Fyre Festival, and at least one embarrassing video of a man breakdancing with a puppet ricocheted across the Internet.” (Organizers denied it was that bad.)

Check out more coverage from our AI issue and our campaign in collaboration with TBWA\Chiat\Day here.


Jon Allsop is a freelance journalist whose work has appeared in the New York Review of Books, The New Yorker, and The Atlantic, among other outlets. He writes CJR’s newsletter The Media Today. Find him on Twitter @Jon_Allsop.