
Legislation aimed at stopping deepfakes is a bad idea

July 1, 2019

The latest disinformation buzzword on everyone’s lips is “deepfake,” a term for videos that have been manipulated using computer imaging. (The word is a combination of “deep learning” and “fake.”) Using relatively inexpensive software, almost anyone can create a video whose subject appears to say or do something they never said or did. In one of the most recent examples, a Slovakian video artist known as Ctrl Shift Face modified a clip of comedian Bill Hader imitating Al Pacino and Arnold Schwarzenegger so that Hader’s face morphs into those of the actors as he imitates them. In another, a pair of artists created a deepfake of Facebook co-founder and CEO Mark Zuckerberg making sinister comments about his plans for the social network.

Technologists have been warning about the potential dangers of deepfakes for some time now; Nick Diakopoulos, an assistant professor at Northwestern University, wrote a report on the phenomenon last year called “Reporting in a Machine Reality.” As the US inches closer to the 2020 election campaign, those concerns have grown. The recent release of a doctored video of House Speaker Nancy Pelosi, slowed down to make her appear drunk, fueled them further, although the Pelosi video was what some have called a “cheapfake” or “shallowfake,” since it was obviously manipulated. At a conference in Aspen this week, Mark Zuckerberg defended the social network’s decision not to remove the Pelosi video, although he admitted it should not have taken so long to add a disclaimer and “down rank” the video so the News Feed algorithm wouldn’t promote it.

US legislators, riding a wave of concern about this phenomenon, say they want to stop deepfakes at the source. So they have introduced something called the DEEPFAKES Accountability Act. (In a classic Congressional move, the word “deepfakes” is capitalized because it is an acronym; the full name of the act is the Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act.) The law would make it a crime to create and distribute media that depicts someone saying or doing something they never said or did unless it includes a digital watermark and a text description stating that it has been modified. The act also gives victims of “synthetic media” the right to sue the creators and “vindicate their reputations.”


Mutale Nkonde, a fellow with the Berkman Klein Center at Harvard and an expert in artificial intelligence policy, advised Congress on the DEEPFAKES Accountability Act and wrote in a post on Medium that the technology “could usher in a time where the most private parts of our lives could be outed through the release of manipulated online content — or even worse, as was the case with Speaker Pelosi, could be invented [out of] whole cloth.” In describing how the law came to be, Nkonde says that since repealing Section 230 of the Communications Decency Act (which protects platforms from liability for third-party content) would be difficult, legislators chose instead to amend the law related to preventing identity theft, “putting the distribution of deepfake content alongside misappropriation of information such as names, addresses, or social security numbers.”

Not everyone is enamored of this idea. While the artists who created the Zuckerberg and Hader videos might be willing to add digital watermarks and textual descriptions identifying their creations as fakes, the really bad actors, those trying to manipulate public opinion and swing elections, aren’t likely to volunteer to do so. And it’s not clear how the new law would force them, or make it easier to find them so they could be prosecuted. The Zuckerberg and Hader videos were also clearly created for entertainment purposes. Should every form of entertainment that takes liberties with the truth (in other words, all of them) have to carry a watermark, with creators facing a potential criminal penalty if they don’t? According to the Electronic Frontier Foundation, the bill also has potential First Amendment problems.


Some believe this type of law attacks a symptom rather than the cause: the overall disinformation environment on Facebook and other platforms. “While I understand everyone’s desire to protect themselves and one another from deepfakes, it seems to me that writing legislation on these videos without touching the larger issues of disinformation, propaganda, and the social media algorithms that spread them misses the forest for the trees,” says Brooke Binkowski, the former managing editor of the fact-checking site Snopes.com, who now works for a similar site called Truth or Fiction. What’s needed, she says, is legislation aimed at all elements of the disinformation ecosystem. “Without that, the tech will continue to grow and evolve and it will be a never-ending game of legislative catch-up.”


A number of experts, including disinformation researcher Joan Donovan of Harvard’s Shorenstein Center (who gave a recent interview on CJR’s Galley discussion platform), have pointed out that you don’t need sophisticated technology to fool large numbers of people into believing things that aren’t true. The conspiracy theorists who peddle the rampant idiocy known as QAnon on Reddit and 4chan, or who create hoaxes such as the Pizzagate conspiracy theory, haven’t needed any specialized technology beyond storytelling skill. Neither did those who promoted the idea that Barack Obama was born in Kenya. Even the Russian troll armies that spread disinformation to hundreds of millions of Facebook users during the 2016 election needed only a few fake images and some plausible-sounding names.

There are those, including Joshua Benton, director of the Nieman Lab at Harvard, who don’t believe deepfakes are even that big a problem. “Media is wildly overreacting to deepfakes, which will have almost no impact on the 2020 election,” Benton said on Twitter after the Pelosi video sparked fears of voters being swamped with disinformation. Others, including the EFF, argue that existing laws are more than enough to handle deepfakes. In any case, rushing forward with legislation before the scope of the problem is even clear, especially when that legislation has obvious First Amendment issues, doesn’t seem wise.

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.