Andrew Cuomo’s first campaign ad for the New York general election depicts him trading on the New York Stock Exchange, operating a subway, and washing windows from dizzying heights. The unreal scenes and glossy patina of the images are immediately recognizable as AI to anyone who spends time online today, opening the door for an easy retort from rival Zohran Mamdani: “Maybe a fake Cuomo is better than the real one?” A year ago, a political ad made with the help of AI would likely have been seen as a controversial violation of content standards on many online platforms. But Cuomo’s ad wasn’t even the buzziest example of AI-generated video in politics that week. President Trump mocked Democratic leaders in a video derided as racist, while California governor Gavin Newsom shot back with videos of a cartoonishly oafish JD Vance.
Last year, anxiety about deepfakes spreading political misinformation was still alive and well, with experts fearing the 2024 election would be America’s first “AI election.” Google and OpenAI did not allow users to generate images or videos of public figures. Now AI companies are quietly abandoning the guardrails around image and video generation. New TikTok-style apps from OpenAI and Meta make it simple for anyone to create videos of real people in invented situations. And politicians like Cuomo who openly embrace AI-generated video of themselves make it all the easier for tech companies to justify rolling back protections.
Newsom’s video is one of several AI-generated depictions of political opponents, or of himself, that he has posted as part of his ongoing parody of Trump’s social media strategy. In doing so, he seems to contradict the spirit of a bill he signed last year requiring platforms to remove deceptive and digitally altered content during elections. He stated on X in September 2024: “You can no longer knowingly distribute an ad or other election communications that contain materially deceptive content—including deepfakes.” While his videos are not ads, they are political messaging intended for the public.
The major AI platforms have a patchwork of restrictions around the creation of images depicting public figures and sensitive political situations. To test these restrictions, the Tow Center prompted six AI image and video generators to depict fifteen politicians, tech leaders, and other well-known public figures at the scene of a protest.

We found that the platforms ranged from anything goes, in the case of Grok, to highly restricted, in the case of Sora, OpenAI’s new video app. In the middle was ChatGPT’s 4o image generator, which allowed about half of the figures we tested to be shown at a protest. It generated images of Vance and Kamala Harris, along with all New York mayoral candidates. OpenAI loosened the restrictions on image generation earlier this year when it launched its new image model. Joanne Jang, former head of model behavior at OpenAI, wrote in a blog post that “AI lab employees should not be the arbiters of what people should and shouldn’t be allowed to create.” Public figures who don’t want to appear in the image generator must fill out a form to opt out.
Sora is, for now, fairly locked down for public figures, with restrictions tightening in recent days. The app encourages users to create “cameos” of themselves to allow anyone to use their likeness, essentially opting in to the service. Tow’s analysis found the Sora app did not produce videos of a person unless their cameo was tagged, as with Sam Altman’s AI character. Using more sophisticated methods, the AI detection company Reality Defender was able to create deepfakes on Sora within a day of the app’s release.
In a comment to CJR, OpenAI referenced its own description of the technology to explain the policy differences between ChatGPT and Sora: “Hyperrealistic video and audio raises important concerns around likeness, misuse, and deception.” Videos on Sora include metadata and watermarks. These markers are important for verification, but often they are not sufficient to confirm whether something has been manipulated or created by artificial intelligence, according to a spokesperson from Reality Defender.
Google’s Gemini chatbot was until recently one of the most restrictive models in Tow’s testing. But the company seems to have shifted its policy on public figures with its October image generation release. Gemini now allows realistic image creation of Joe Biden, Trump, Elon Musk, Taylor Swift, and Cuomo.
Meta AI and Grok, xAI’s chatbot, were the least restrictive platforms in Tow’s testing. Meta declined to create an image or video of just one figure: Swift. Both Grok and Meta attempted to create an image of Mamdani, but failed to depict him accurately.

Unsurprisingly, Grok’s image generator allowed every public figure Tow tested to be shown at a protest. Since Musk’s takeover of X, the platform has removed guardrails and curtailed moderation. Grok allows users to create nearly any image they want, including sexually explicit content.
Although deepfakes didn’t end up playing a large role in the 2024 election, AI visuals perpetuate what a European report called the “pollution” of information. In the past two years, the report showed, AI infiltrated elections across eleven countries. Cuomo’s campaign video was not meant to deceive voters into thinking it was real, but it runs the risk of muddling the truth in an already precarious media environment. “It’s normalizing something that’s going to be really bad when used in malicious ways,” said Anika Collier Navaroli, a Columbia Journalism School professor and Tow Center fellow who has previously worked in content moderation. “There’s a hypocrisy to it.”