The Media Today

Taylor Swift deepfakes could be the tip of an AI-generated iceberg

February 1, 2024

Last week, fake pornographic images of singer Taylor Swift started spreading across X (formerly known as Twitter). Swift fans quickly swarmed the platform, calling out the images as AI-generated fakes and demanding that X remove them and block the accounts sharing them. According to a number of reports, the platform removed some of the images and their associated accounts, but not before certain photos had been viewed by millions of users. (The Verge reported that some had been seen more than forty-five million times.) When the images continued to circulate across the service despite the bans, X blocked the term “Taylor Swift” from its search engine, so that searches produced an error saying that “something went wrong.” Even so, reporters for The Verge showed that it was still relatively easy to find the fake images by slightly misspelling Swift’s name or putting it in quotes.

X’s inability to stop the proliferation of Swift porn may have been caused in part by Elon Musk’s dismantling of the company’s trust and safety team, most of whom were fired after he acquired Twitter, in 2022. In the wake of the Swift controversy, Joe Benarroch, head of business operations at X, told Bloomberg that the company was planning a new “trust and safety center of excellence” in Texas to help enforce its content moderation rules, in particular a ban on child sexual content (which is also a popular form of AI-generated imagery), and that X intends to hire a hundred new full-time moderators. Bloomberg noted that the announcement came just days before executives from X and the other major social platforms and services were set to appear before the Senate Judiciary Committee for a hearing on child safety online, which took place Wednesday.

On Monday, X restored the ability to search for Taylor Swift but said in a statement that it would “continue to be vigilant” in removing similar AI-generated nonconsensual images. The White House even weighed in on the controversy: Karine Jean-Pierre, the White House press secretary, told ABC News that the Biden administration was “alarmed by the reports,” and that while social media companies are entitled to make their own content decisions, the White House believes it has a role in preventing “the spread of misinformation, and nonconsensual, intimate imagery of real people.”

According to 404 Media, the Taylor Swift images were generated by Designer, an AI text-to-image tool that is owned by Microsoft, and were then traded on 4chan, an online community that takes pride in its failure to follow normal rules of behavior, as well as through a private channel on Telegram, an encrypted-chat app based in Dubai. On Monday, Microsoft announced that it has “introduced more protections” in its software to make generating such images more difficult. However, 404 Media noted that the Telegram channel where the images appeared is still sharing AI-generated images of real people produced with other tools, and that it is quite easy to download an AI software model from a site called Civitai and run it on a home PC to generate pornographic imagery of celebrities.

404 Media explained that, to get Designer and other tools to generate such photos, all users have to do is describe sexual acts without using explicitly sexual terms, instead referring to positions, objects, and composition. Other AI-powered engines, many of which are available online for free, offer to take publicly available photos of celebrities (or anyone, for that matter) and generate nudes by digitally removing their clothing. 404 Media also noted that since it started writing about deepfakes, in 2017, when a fake sex video of actress Gal Gadot circulated on social media, Taylor Swift has been a prime target for people using the technology to generate nonconsensual pornography; she was one of the first celebrities targeted by DeepNude, an app that generated nudes of women but was later taken down.

In her briefing about the Swift images, Jean-Pierre said the White House believes that Congress “should take legislative action” to prevent nonconsensual pornography created by AI. Joe Morelle, a Democratic New York congressman, is trying to do just that: he used the Swift controversy to promote a bill called the Preventing Deepfakes of Intimate Images Act, which would criminalize the nonconsensual sharing of digitally altered intimate images. Under the bill, anyone sharing deepfake pornography without consent would face damages of up to one hundred and fifty thousand dollars and up to ten years in prison. Morelle first introduced the bill in December 2022, but it failed to pass that year or in 2023; he reintroduced it after gaining some support during a House subcommittee hearing on deepfakes last November.


Nora Benavidez, senior counsel at Free Press, noted on X that some of these proposed anti-deepfake laws would likely fail a First Amendment challenge because they could penalize “a wide array of legitimate speech,” including political commentary and satire, and in some cases would infringe on the platforms’ own First Amendment right to moderate content. A report from the Center for News Technology and Innovation found that laws targeting the broad category of “fake news” have increased significantly over the past few years, particularly in the wake of COVID-19, and that while most are nominally aimed at curbing disinformation, the majority would have the effect of weakening protections for an independent press and reducing public access to information.

Pornographic deepfakes of celebrities may be the most popular category of AI-generated content, but political content is not far behind. Some voters in New Hampshire recently received an AI-generated robocall imitating President Joe Biden, telling them not to vote in the state’s primary election. According to Wired, it’s not clear who created the robo-fake, but two separate teams of audio experts told the magazine that it was likely made with technology from ElevenLabs, a startup that offers voice-cloning tools. The company markets its tools to video game and audiobook creators, and Wired reports that it is valued at more than a billion dollars. The company’s safety policy says cloning someone’s voice without permission is acceptable when it’s for “political speech contributing to public debates.”

In some cases, AI-generated imitations of politicians come from their own supporters: OpenAI, the maker of ChatGPT, the popular AI text engine, recently banned a developer who had built a “bot” that mimicked the conversational style of Dean Phillips, a Democratic presidential candidate. The bot was created by an AI startup called Delphi at the request of a couple of Silicon Valley entrepreneurs who supported Phillips’s run for president. Although the bot came with a disclaimer saying it was powered by AI, and users had to agree to use it under those terms, OpenAI’s terms of service ban the use of ChatGPT in connection with a political campaign.

Brandy Zadrozny, a reporter for NBC News, wrote recently that disinformation poses an unprecedented threat in 2024, and that the US is “less ready than ever.” Claire Wardle, codirector of Brown University’s Information Futures Lab, which studies misinformation and elections, said that although this election resembles 2020 in its candidates and parties, the current situation feels very different because of a combination of the pandemic, the January 6 attack on Congress, and what she called “a hardening of belief” that the 2020 election was stolen. Zadrozny argues that while research shows that disinformation has little immediate effect on voting choices, it can affect how people make up their minds about issues and “provide false evidence for claims with conclusions that threaten democracy.” A World Economic Forum survey named misinformation and disinformation from AI as the top global risk over the next two years, ahead of climate change and war.

Beyond the technology itself, researchers and other experts are concerned about a loss of transparency and cooperation among the academics who study such issues, the result of a campaign by certain members of Congress accusing the government, tech platforms, and researchers of colluding to censor right-wing content under the guise of fighting disinformation (something I wrote about for CJR last year). According to Zadrozny, some researchers say these campaigns, which have included threats of lawsuits and other actions, have had a “chilling effect” on new research. And that’s on top of the cutbacks that many platforms have made to their disinformation teams. So there may be more AI-powered disinformation on the horizon, but those fighting it may be even less prepared.


Other notable stories:

  • Swift has also been in the news this week as the subject of feverish conspiracy theories emanating from the political right and its media: that she is a “Pentagon asset” and an electoral “psyop” who is somehow being manipulated by liberals to fix the upcoming presidential election, in league with her boyfriend, Travis Kelce, the Kansas City Chiefs football player whose team made the Super Bowl this past weekend. Kyle Chayka, who writes about internet culture for The New Yorker, explored why all online roads seem to lead to Swift at the moment. “Algorithmic feeds act like enormous funnels, siphoning users toward an increasingly narrow set of subjects,” Chayka writes. “The combination of Taylor Swift plus Travis Kelce, feminine pop music plus male athletic contests, creates an all-consuming content vortex, a four-quadrant supernova of fame.”
  • Today, staffers at seven newspapers owned by Alden Global Capital, a financial firm notorious for cuts at its titles, will go on strike in protest of the company’s remuneration policies; the Chicago Tribune will be among the papers affected, as will the Orlando Sentinel and the Virginian-Pilot. The walkouts are the latest in a string of strikes at major news organizations this year amid a brutal period of newsroom layoffs. (Staffers at the New York Daily News, another Alden property, already walked off the job last week.) Unionized staffers at various titles owned by G/O Media, including The Onion and Deadspin, were also set to walk out today after voting to authorize a strike—though the action was averted after the union struck a tentative contract deal with management.
  • For CJR, Bill Shaner—a journalist based in Worcester, Massachusetts, who writes the newsletter Worcester Sucks & I Love It—makes the case for the importance of local columnists, many of whom have lost their jobs amid the broader recent cuts to local news across the US. “When we think of national columnists, we think of breathless and endless takes—a bloated and exhausting corps of self-declared experts perpetuating tribal groupthink,” he writes. “But the opposite is true for local news. Local columnists usually started as reporters. They tend to know everyone, and have seen a large swath of local history firsthand. Maybe even made some. Their voices are informed, earned.”
  • Last weekend, the Boston Globe published a front-page story about Lynda Bluestein, a woman with terminal cancer who, unable to legally access assisted suicide in Connecticut, her home state, successfully sued for the right to do so in Vermont. (She died earlier this month.) The Globe also published an editor’s note revealing that Kevin Cullen, the columnist who wrote the story, had signed a form that Bluestein needed under Vermont law—and conceding that, in doing so, he had violated the paper’s standards. Poynter’s Tom Jones weighed the ethical considerations involved.
  • And the government of Mexico said that a hacker stole personal data—including copies of identity documents and home addresses—belonging to more than two hundred and fifty journalists who had supplied the information to the president’s office as part of a vetting procedure for press conferences. The leak “exposes the journalists to potential identity theft and could compromise their physical security,” Reuters reports, in a country that is already among the most dangerous in the world for media professionals.

ICYMI: Shoot The Messenger

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.