Watchdog

After DeepNude, ideas for more conscientious coverage of synthetic media

September 11, 2019
 


In late June, Samantha Cole, a reporter for Motherboard, the Vice technology site, caught word that an anonymous programmer was planning to sell a contentious synthetic media tool. In an article headlined “This Horrifying App Undresses a Photo of Any Woman With a Single Click,” she described how the app, DeepNude, uses machine learning to transform photos of fully clothed women into realistic nudes. This wasn’t a new beat for Cole; her reporting in late 2017 brought mainstream attention to deepfakes, helping spur social media websites such as Twitter and Reddit to deplatform non-consensual pornography and to clarify their community guidelines on exploitative sexual content. Cole was confident that her coverage would prompt platforms to take similar action against DeepNude. They did. A day later, the programmer pulled the app, declaring “the world is not yet ready for DeepNude.” GitHub and Discord, platforms that previously hosted copies of DeepNude and its spin-offs, moved to ban the app from their services.

But while many hailed this as a success story, some concerned readers took issue with Motherboard’s coverage. A few, such as Kelly Ellis, a software engineer, suggested that Cole had taken an obscure, harmful app and made it go viral. (Indeed, after the Motherboard story went live, DeepNude’s homepage was flooded with traffic, temporarily bringing it down.) To show how DeepNude worked, Motherboard published partially censored DeepNudes of celebrities including Kim Kardashian, Natalie Portman, Gal Gadot, and Tyra Banks. Readers questioned why Motherboard chose to violate these celebrities’ sexual privacy; editors rushed to remove the images and issued an apology. The editor’s note captures a dilemma faced by all tech journalists covering synthetic media: “We think it’s important to show the real consequences that new technologies unleashed on the world without warning have on people, but we also have to make sure that our reporting minimizes harm.”


Now media and technology observers are debating the threat deepfakes pose in the leadup to the 2020 US elections. As malicious actors use publicly available AI tools to harass others, particularly women and minorities, journalists are grappling with difficult editorial choices. Not all synthetic media tools are as prone to abuse as DeepNude was. Still, moving forward, journalists need more than training on how machine learning works; they also need a network of experts who can offer evidence-based insights and help create standards for covering synthetic media tools. Our takeaways mirror, and draw from, the larger push for journalists to change how they report on white supremacy, misinformation, and online harassment campaigns. As with those issues, sexual exploitation is fundamentally a societal problem given new form by a new technology, and journalists play a powerful role in warning against and preventing synthetic media abuses.

 

Consider image choice carefully

To show how far synthetic media has advanced, journalists commonly borrow the images of celebrities. In doing so, reporters risk perpetuating a cycle of exploiting public figures. A fake video of a celebrity will get clicks, sure, but readers may miss the bigger picture: that synthetic media could harm them, too.


“Since the nineteenth century, media outlets have depended on coverage of celebrities to attract readers and revenue,” says Sharon Marcus, a professor at Columbia University and the author of a new book, The Drama of Celebrity. Many celebrities are accustomed to having their images exploited; that is the business model of the entertainment industry. But in the twenty-first century, celebrities are up against a mainstream media willing to make and share celebrity deepnudes. That shift in norms pushes us closer to a digital environment anticipated by Danielle Citron, a law professor at Boston University, and Robert Chesney, of the University of Texas School of Law: one in which celebrities will be drawn to digital “comprehensive life logs” to establish alibis for every moment.

Readers can speak up against this type of exploitation; in the case of Motherboard, they did. There’s still much more that can be done. Editors can set standards to limit the harm done to celebrities in the coverage of risky apps such as DeepNude. Journalists can reframe similar stories by choosing to write about the potential impact of legislation, such as California Senate Bill 564, which aims to ban the dissemination of nonconsensual, digitally created sex scenes and nude performances. At Slate, Shannon Palus argued that newsrooms might refrain from sharing fake nudes much as they might refrain from sharing graphic images of dead bodies.

Journalists already know how to tell stories about synthetic media tools without including product samples. Javier Zarracina, the graphics editor at Vox, describes the DeepNude images as “a little bit disturbing” and encourages illustrators to be “as respectful and as measured as journalists are with their language.” (His own illustration for a Vox story about the app includes no direct mention of its logo or creator.) Sarah Hashemi, a motion graphics artist for The Washington Post, reflected that thinking when she intentionally chose imagery for a story by Drew Harwell that would demonstrate how DeepNude affects all women. “My job as the designer and animator was to make sure that I could visually convey what Drew reported,” Hashemi says, “to express the emotion and tone of his story and not to add any additional harm to the subject of his story and others who share her story.”

 

Don’t privilege the programmer

Journalists covering synthetic media risk amplifying the developer’s perspective, in articles that sound more like advertisements for a new consumer product than accurate explanations of an emerging technology. One way to break that trend is to focus reporting on the harmful impact of synthetic media on individuals and society at large.

Another approach is to center human interest stories; for instance, a Singaporean news outlet led with the experience of a woman who discovered an online forum was using DeepNude on her social media photos. Karen Hao at MIT Technology Review says she relies on a network of human rights activists and technologists, including experts at Witness and the Data & Society Research Institute, to inform many of her AI-related stories. The DeepNude case, Hao says, “very clearly illustrates some of the things that researchers have been warning for a while.”

 

The Internet makes distribution easier, which makes responsible disclosure harder

The pressure to break news and attract clicks can get in the way of responsible journalism. Ethical journalists may instead need to practice strategic silence: delaying reporting, linking sparingly. In a recent paper, Aviv Ovadya, founder of the Thoughtful Technology Project, and Jess Whittlestone, of the Leverhulme Centre for the Future of Intelligence, suggest that the machine-learning community look to fields such as biosafety and computer security for lessons on how to publish findings in a way that mitigates negative impacts. The same advice might apply to journalists, who could be more judicious about when they take certain pieces to press.

“You know when you do reporting on a data breach and you say, ‘Well, credit card numbers have been found in some scummy forum somewhere’? You don’t link to the forum,” Kendra Albert, a clinical instructor at Harvard Law School’s Cyberlaw Clinic, says. Similarly, journalists covering data breaches and revenge porn rarely point back to their sources. But the same care has yet to be taken with potentially harmful software. Worse, some outlets continue to report on new ways for users to download DeepNude-like software even after DeepNude’s programmer stopped sales.

Where code is hosted matters, too. “I have more concern linking to GitHub than the deepfakes’ websites,” says Albert. Because GitHub allows users to freely copy code, an app may be distributed more readily than it might have been via a paywalled website. Before publishing a follow-up piece for Motherboard on GitHub’s removal of DeepNude spinoffs, Joseph Cox waited to confirm that GitHub had removed the offending repositories, in case readers sought out the code themselves. Responsible reporting should, at minimum, curb casual adoption of malicious technology.

Another concern in synthetic media reporting is how social media websites prioritize, deprioritize, and strip context from well-intentioned stories. An AI-manipulated image may appear without its caption in a Google search and elsewhere around the Internet. “It’s irresponsible to put up an image that does not provide its own context,” Albert says. Placing descriptive, immutable watermarks on AI-manipulated media is a simple, important step journalists can take.
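As a rough illustration only: assuming a newsroom’s image pipeline uses Python with the Pillow library, a labeling step might look something like the sketch below. The function name, filenames, and label text are placeholders, not any outlet’s actual tooling.

from PIL import Image, ImageDraw

def label_manipulated_image(path_in, path_out, label="AI-MANIPULATED IMAGE"):
    # Stamp a visible, descriptive banner onto the image so the context
    # travels with the file even when it circulates without its caption.
    img = Image.open(path_in).convert("RGB")
    draw = ImageDraw.Draw(img)
    banner_height = max(24, img.height // 20)
    # Solid bar along the bottom edge, with the label text drawn on top of it.
    draw.rectangle([(0, img.height - banner_height), (img.width, img.height)], fill="black")
    draw.text((10, img.height - banner_height + 5), label, fill="white")
    img.save(path_out)

# Hypothetical usage:
# label_manipulated_image("manipulated_still.jpg", "manipulated_still_labeled.jpg")

A visible banner of this kind survives cropping less well than an invisible or cryptographic watermark, but it keeps the disclosure attached to the image in the most common case: a screenshot or re-share stripped of its original caption.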

 

These stories are worth telling, but not sensationalizing

Ovadya, who is credited with coining the term “Information Apocalypse,” appreciates the need for some alarmism. “In 2016, it was important to draw attention to something previously ignored,” he says. “That was crucial, as some mitigations will take years to put in place. But now attention needs to focus on the solutions if we want to avoid the perils of reality apathy.”

The fear of inadvertently popularizing a malicious app should not stop tech journalists from telling stories about synthetic media. Coverage of DeepNude, for instance, may be instrumental in changing the legal landscape. “I haven’t really yet seen this elevated in the public discourse and it really needs to be so people in decision-making power and technology-development power can realize the consequences of their actions,” says Hao.

To that end, Hao focuses on explaining the technical process behind synthetic media systems so that people can start making sense of AI news when they see it. “My biggest mission is to convince people that [AI] is not magic,” Hao says. “I think the public perception is that people are scared of it because it doesn’t make sense to them and then it gets really overwhelming.” 

There’s no single approach that will ensure conscientious coverage of synthetic media. “Deciding what is and isn’t worth covering—what’s actually newsworthy, what’s potentially attention seeking from bad actors, and what’s simply too small to be covered or has been overly saturated with coverage—is something we do all day, every day,” says Motherboard’s Cole, in an email. “It is our core job as journalists.” As newsrooms grapple with synthetic media coverage, adopting these practices may help journalists investigate, not sensationalize; help, not harm; and inform, not exploit.


Dan Bateyko and Muira McCammon co-authored this story. Bateyko is an independent researcher and writer, and previously worked at the Berkman Klein Center for Internet & Society. You can follow him on Twitter @dbateyko. McCammon, a former journalist, is a doctoral student at the University of Pennsylvania's Annenberg School for Communication. She was previously a fellow at the Harvard Law Library Innovation Lab.