One of the most challenging problems of the digital information age is how to report on disinformation without pouring gasoline on the fire in the process. While working with the New York-based group Data & Society, media analyst Whitney Phillips (now an assistant professor of communications at Syracuse University) wrote a comprehensive report on this challenge, entitled The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators. We recently asked Phillips to join us for an interview on our Galley discussion platform; our conversation unfolded over several days.
The idea that journalists can exacerbate problems merely by doing their jobs is somewhat more widely accepted now, thanks in part to the work of Phillips and Joan Donovan, who runs the Technology and Social Change Research Project at Harvard’s Shorenstein Center (and also did an interview with CJR on Galley recently). After the mass shooting in Christchurch, New Zealand, a number of media outlets chose not to focus on the shooter, and didn’t publish or link to his “manifesto.” In some cases, news outlets didn’t even use his name, which is a big change from just a few years ago. But Phillips says there is more to be done.
“I’ve been considering these questions for the better part of a decade and I still find them vexing,” she says. There are some basic guidelines that are comparatively clear, including efforts to avoid publicizing anything that hasn’t yet met its tipping point and moved from a discrete online community to the center of broader discussions. A mass shooting will cross that point immediately, but that doesn’t mean reporters should report everything about the incident. Of particular concern, says Phillips, are ways of framing the story that “aggrandize the shooter/antagonist, or otherwise incentivize future shooters/antagonists.”
Some news outlets have argued that they need to report on the personal details and background of extremists such as the Christchurch shooter because we need to understand how they were radicalized. But while this kind of understanding might help in some cases, Phillips says it is going to fail in others, because “radicalization is a choice, and changing people’s minds about the things they actively choose is a long-term, up-close-and-personal, complicated ground game, not something you can solve by waving a newspaper article at someone.” Writing in detail about how extremists were radicalized might be seen by like-minded extremists as a reward rather than a punishment.
We know that, in some cases, “sunlight disinfects”; exposing wrong-doers can sometimes cause them to lose their power. But in other cases, Phillips notes, it can function as a hydroponic grow light, “and it’s simply not possible to know what the long term effect of reporting will be. By then, it might be too late to intervene, because what ended up growing turned out to be poison.” Currently, when reporting on online extremism, journalists tend to focus almost exclusively on white supremacists and violent manipulators. But why? “At what point did we internalize the idea that attackers and liars and racists are the most interesting and important parts of a story?” she asks.
If the goal is to undermine a violent ideology like white supremacy, Phillips says, you don’t do that by only talking about white supremacists. “That keeps them right where they want to be, which is central to the narrative.” What we should be doing is showing the effects of white supremacy. Many people only know about racism as an abstraction, says Phillips. “But it’s not an abstraction. It’s bleeding bodies. It’s screaming babies. It’s synagogues and mosques on lockdown. Those stories need telling.” Better to spend more time reporting those kinds of details, rather than another profile that amplifies the messaging “of some violent asshole whose actions tell us everything we need to know.”
Part of the challenge of fighting misinformation is that we all believe things that turn out to be wrong, and often when we are challenged about those beliefs, we cling to them even more firmly. “Well intentioned interventions, outfitted with true and important facts, often go unheeded, and can actually compel a person to double down and feel even more convinced that they’re right and everybody else is wrong,” Phillips says. That’s why fact-checking efforts can have a boomerang effect and actually entrench a false belief in some cases. On top of that, studies have shown that repeating a message, even while debunking it, can reinforce the message and paradoxically make it seem more believable.
“Efforts to fact check hoaxes and other polluted information operate under the assumption that objective truth is a magic bullet [which] goes right into readers’ brains, without any filter, without any resistance, and fills in the holes that bad information leaves behind,” Phillips says. According to this theory, the problem of disinformation can be solved by handing out facts. But that’s not how human nature works. “When something ugly emerges from the depths, you simply cannot throw facts at it and expect anything transformative to happen—most basically because there is, across and between groups, no agreement about what the facts even are.”
There are even more complicating factors, Phillips says. According to one study of “fake news,” almost 15 percent of social-media users shared false or misleading stories even though they knew they were untrue. And in many cases people do this because they want to send a message about who they are or what they believe, in order to show that they are part of a specific group. “Media literacy discussions within journalism and academia tend to presume good faith in these kinds of cases, and proceed from there,” she says. “But people don’t always operate under good faith. In my line of work in particular, bad faith arguments and actions are everywhere.”