The Toxins We Carry

Disinformation is polluting our media environment. Facts won’t save us.

December 2, 2019

“We need to know why this is happening.”
“The best defense against lies is the truth.”
“Light disinfects.”

 

Statements such as these frequently emerge in the wake of the falsehoods, hoaxes, conspiracy theories, and racist attacks that roar across social media. We reach for facts as our antidote to misinformation, false and misleading stories that are inadvertently spread; or disinformation, false and misleading stories that are deliberately spread; or malinformation, true stories that are spread in order to slander and harm. But the problem plaguing digital media is larger than any of these individually; it’s that the categories overlap, obscuring who shares false information knowingly and who shares it thinking it’s true. With so much whizzing by so quickly, good and bad actors—and good and bad information—become hopelessly jumbled. Claire Wardle, cofounder of the nonprofit research group First Draft, describes the mess as “information pollution.” Democracy cannot function when what’s true can’t be distinguished from what’s trash, Wardle argues. Journalists know this better than anyone. And so they scramble to help their readers understand the facts of a story, as often and as loudly as they can.

As well-intentioned as these efforts might be, the impulse to respond to falsehood with facts is not a cure-all. The underlying assumptions—that people won’t share stories they know are false and that the truth is a natural corrective to lies—might seem unassailable in theory. But in practice our relationship to truth is more complicated. People don’t believe things, or share things, only because of facts. Facts are therefore ill-equipped to solve the problem on their own—and shining a light on what’s false can even, counterintuitively, make things worse by spreading falsehoods to more people, making those falsehoods seem more plausible to certain audiences, and generally ensuring that the story is more potent after the debunk than before. 

Those of us pointing out the relative powerlessness of facts aren’t taking some brave stance against the truth. Nor does this position suggest that the best response to violent and dehumanizing speech is to turn away, say nothing, and hope bigotry goes away on its own. The argument, instead, is that journalists—indeed that all citizens—must reexamine our most fundamental assumptions about how false and misleading information spreads and about the role we all play in spreading it. What we’ve been doing since 2016 hasn’t made things any better heading into 2020. And so we must adjust how we think about the problem. 

My proposal is that we begin thinking ecologically, an approach I explore with Ryan Milner, a communication scholar, in our forthcoming book You Are Here: A Field Guide for Navigating Polluted Information. From an ecological perspective, Wardle’s term “information pollution” makes perfect sense. Building on Wardle’s definition, we use the inverted form “polluted information” to emphasize the state of being polluted and to underscore connections between online and offline toxicity. One of the most important of these connections is just how little motives matter to outcomes. Online and off, pollution still spreads, and still has consequences downstream, whether it’s introduced to the environment willfully, carelessly, or as the result of sincere efforts to help. The impact of industrial-scale polluters online—the bigots, abusers, and chaos agents, along with the social platforms that enable them—should not be minimized. But less obvious suspects can do just as much damage. The truth is one of them.

 


To understand just how unreliable an ally the truth can be when combating polluted information, we begin, naturally, with Satan. References to Satan and Satanism are woven throughout the reactionary right’s most prominent conspiracy theories. There’s the “Deep State” theory, which supposes that an evil, occultist, child-molesting government-within-a-government is attempting to destroy Donald Trump’s presidency from within. Then there’s “QAnon,” which chronicles Trump’s efforts to combat the Deep State. And, famously, there’s “Pizzagate,” which claimed that Hillary Clinton was running a Satanic child sex ring out of the back of a pizza shop in Washington, DC. Jeffrey Epstein, the financier and convicted sex offender, who was charged with sex trafficking of minors before he died by suicide in federal custody, was easily folded into this narrative.

Satanic conspiracy theories go much deeper than the dark corners of the internet, of course. They date back to the eleventh century, drawing from a subversion myth that warns of an evil internal enemy—nebulously described as “them”—hell-bent on destroying “us,” an in-group that almost always refers to white Christians. Over the centuries, these conspiracy theories—which are often explicitly anti-Semitic—have recurred regularly, notably in the Satanic Panics of the 1980s and ’90s. And for just as long, they have been stubbornly impervious to debunking. Throw a fact check at a subversion myth, and it will transform into proof for believers. After all, trying to disprove the existence of a Satanic plot is exactly what a Satanist would do.   

Social-psychology research helps explain why facts are so ineffective at dislodging false beliefs. In a 2012 analysis, a team led by Stephan Lewandowsky, a psychologist, argued that when people are presented with new information, their brains are focused less on granular empiricism and more on running a kind of coherence check. Information that is “knowledge consistent”—meaning it aligns with the things a person already believes—tends to be accepted as true. For example, a person is likely to accept the claim that “no evidence has ever been found to support the existence of Satanic ritual abuse by top Democrats” only if that claim aligns with their existing assumptions about Satan, Satanism, and Democrats.


Conversely, a person who already believes in Satanic conspiracies (or who wouldn’t put it past the Democrats) will be inclined to reject the same claim. Even more alarming, those in the Satanic-conspiracy camp are primed for what danah boyd, a social scientist, calls the “boomerang effect,” which occurs when a person who already believes that an information source is biased becomes more convinced of a claim after that source debunks it. So, if a person thinks the New York Times is Deep State–adjacent fake news, and the Times publishes an article discrediting QAnon or Satanic ritual abuse by the Clintons, that person is likely to come away feeling that there must be some truth to the accusations—because, from their vantage point, journalists are liars (and may even be in on the conspiracy themselves). This mistrust, in turn, often prompts readers already skeptical of mainstream journalists to conduct their own investigations, sending them down algorithmic rabbit holes serving up increasingly extremist, factually unmoored content. 

Another powerful psychological response that undermines fact-checking is the illusory-truth effect. This principle was first identified by a team of psychologists led by Lynn Hasher in 1977. It shows that when people are repeatedly exposed to false statements, those statements start to feel true, even when they are countered with evidence. In short, a fact check is no match for a repeated lie. Additional research has shown that the illusory-truth effect can take hold even when people know a statement is false from the outset. Worse, the effect can happen not in spite of debunking but as a result of it, when a person’s brain fails to store the “…is not true” part of the conversation in long-term memory. The title of a 2005 article in the Journal of Consumer Research pretty well sums up the phenomenon: “How Warnings About False Claims Become Recommendations.”     

The varied motivations of those who share polluted information complicate the problem even further. In the weeks following the 2016 election, a Pew Research Center survey revealed that 14 percent of adults admitted to sharing a false political news story even though they knew it was false at the time they shared it. This statistic aligns with the deeply questionable motives of many QAnon proponents who post to 4chan and 8chan, both Superfund sites of information pollution. The problem certainly isn’t restricted to these forums. As boyd notes, “If you talk with someone who has posted clear, unquestionable misinformation, more often than not, they know it’s bullshit. Or they don’t care whether or not it’s true. Why do they post it, then? Because they’re making a statement.” 

In these cases, fact-checking isn’t going to do a single thing to correct the falsehoods. Facts have, quite literally, nothing to do with it. 

 

Social networking platforms further exacerbate problems of truth and belief for a basic reason: they are, as Wardle explains, designed to maximize the spread of information regardless of its truth-value. Encouraging information to career as quickly as possible between as many audiences as possible is the entire point—and overall business strategy—of social media companies.  

One of the consequences of all this frictionless sharing is “context collapse,” a term describing how, at any given moment, the specific individuals who make up an online audience are obscured, and how present and future audiences can unpredictably commingle. In other words, an audience is never a singular whole; it consists of an unknown number of smaller, sometimes conflicting, audiences. Another consequence is Poe’s Law, an internet axiom dating back to 2005, when a creationism forum user going by the name of Nathan Poe observed how difficult it is to determine whether someone is being serious or satirical absent the contextual clues of communicating in person.

Poe’s Law and context collapse ensure that little can be known about online audiences. This muddles what information needs fact-checking in the first place. Are posters making serious claims? Are they having a bit of stupid fun? Are they attempting to drum up followers? Are they trying to trick journalists? Some combination? Something else? Different answers would, ideally, trigger different journalistic responses, but that would require reporters to be able to accurately assess the meaning of a message—to say nothing of an entire crush of messages that could be doing a hundred things at once, depending on who’s posting and who’s watching.  

Without knowing exactly what merits fact-checking, journalists are also unable to anticipate what impact their fact checks will have on different audiences. A fact check can simultaneously trigger the boomerang effect, incentivize worse falsehoods, be absolutely useless, and be helpful, all depending on which audiences receive the fact check and in what ways. As a result, media manipulators relish fact checks by reporters at mainstream papers. Not only do fact checks ensure national exposure for false claims, they create a great deal of confusion about who believes what, how many believers there are, and how serious those believers are being. This tension was on full display last year when, as reported by the Washington Post, a popular QAnon YouTuber giddily rattled off the names of the news outlets covering (and fastidiously debunking) the QAnon story. “I haven’t been this happy in a very long time. CNN, NBC News, MSNBC, PBS NewsHour, Washington Post…those are our new QAnon reporters!” He paused, then burst out laughing.

Reporters seek to debunk polluted narratives like QAnon because they believe that shining a light on falsehood will disinfect it. This works for some audiences. But for others, shining a light does not disinfect. It illuminates pollution so that it is seen by more people and, in the process, risks catalyzing even worse pollution flows.

 

Just as throwing good money after bad isn’t likely to save a failing business, throwing good information after polluted information isn’t likely to mitigate its toxicity. When journalists center their stories on apostles of disinformation rather than the downstream effects of their lies; when they focus on individual toxic dump sites rather than the socio-technological conditions that allow the pollution to fester; when they claim an objective view from nowhere rather than consider the effects of their own amplification, they—along with any citizen on social media—can be every bit as damaging as people who actively seek to clog the landscape with filth.

An ecological approach to polluted information avoids these pitfalls. Individual stories, bad actors, and technologies are part of the conversation, but they are not the most important part. The most important part is how our systems, our actions, and our institutions intertwine in ways that create perfect conduits for pollution to flow unchecked. If we fail to identify the true problem, any solutions we implement will fail in turn. The call, in short, isn’t “don’t report the news.” It’s to use a wider lens in order to tell bigger truths. 

When I asked experts in media manipulation to describe their greatest worries about the spread of polluted information, they cited the tendency within journalism (as well as the technology sector) to respond to problems as self-contained rather than structural. In other words, to see trees, not forests. Becca Lewis, a researcher at Stanford University who has written extensively about YouTube’s reactionary far right, told me that disinformation and radicalization “actually emerge as a result of cultural, technical, and economic forces working together.” In the case of YouTube’s recommendation algorithm, she explained, the algorithm doesn’t flip the radicalization switch on its own. Radicalization is as much a function of neoliberal logics incentivizing creators to make increasingly extremist content, of poorly defined and enforced moderation policies, and of mutually radicalizing relationships between YouTubers and their audiences as it is of any algorithm. Stories that drill down into algorithms and the extremist content they surface might help readers understand the videos in question. But focusing on the symptoms doesn’t get us any closer to the structural causes.

Alice Marwick, a professor of communication at the University of North Carolina at Chapel Hill who has also emphasized the socio-technological forces propelling polluted information, talked to me about the dangers of making false distinctions between seemingly inconsequential and serious disinformation. Reporters tend to treat stories about UFOs, or outlandish accusations against politicians, or viral hoaxes, as silly absurdities or as isolated events, wholly disconnected from obviously damaging falsehoods about subjects like the climate crisis or vaccinations. But that’s a mistake, Marwick wrote in an email: “Even something that fuels the less-crazy end of partisanship can contribute to spreading disinformation that has real consequences.” Not only do these seemingly less serious stories provide a conceptual blueprint for other manipulators to follow, they contribute to the perception that truth is up for grabs. As Marwick and Lewis have both previously argued, disinformation of any kind clogs the ecosystem with falsehoods, making it less likely that the public will trust the news when critical truths are reported.  


Danielle Keats Citron, a law professor at Boston University, highlighted another consequence of environmental myopia. Citron predicts that deepfakes, imperceptibly doctored videos, will become increasingly dangerous heading into the 2020 election. The obvious concern is the misperceptions and lies that the deepfakes will generate. The less obvious but just as critical concern is how a focus on the deepfakes themselves will obscure how they spread through feedback loops between journalism and social media. While large news outlets have reporters assigned to the disinformation beat who will likely be prepared to recognize fake videos, reporters at smaller outlets are unlikely to have the same kind of training, Citron told me. The risk is that manipulators will seed deepfakes with these “softer targets,” in the hopes of triggering dissemination across social media when the videos are reported—and in turn triggering coverage by larger outlets when the deepfake goes viral. So long as journalists are focused solely on the content of the deepfake rather than the overlaps between social media and journalism, as well as the overlaps between different kinds of journalism, manipulators will always have the upper hand.

 

The negative effects of all this toxicity are not equally distributed. For decades, the environmental justice movement has demonstrated that communities of color and other historically marginalized groups are more likely to drink contaminated water, breathe polluted air, and be saddled with hazardous waste than their richer, whiter neighbors. So it goes online.

Shireen Mitchell, a technology analyst and the founder of Stop Online Violence Against Women, has spent years studying online disinformation. She told me that campaigns that target people of color are more vicious, more sustained, and more systematic. But that’s not how they tend to be treated by reporters, compounding the dangers faced by these communities. 

Mutale Nkonde, an AI policy analyst and a fellow at the Berkman Klein Center at Harvard University, described in an email one of the primary reasons communities of color face such unique threats. “All online disinformation,” she said, “starts with the weakest people in society to test its capabilities.” Traditionally underrepresented communities are especially vulnerable; they are least likely to have the resources to fight back, and least likely to generate widespread public sympathy when they are attacked. Black women in particular, Nkonde explained, are the most frequent targets of these campaigns—because, as Nkonde flatly stated, “no one listens to them.” 

The Donglegate campaign in 2013, followed by the Gamergate campaign in 2014, provides a striking example. Not only was coordinated hate and harassment against Black women ignored by the social platforms that facilitated the attacks, these campaigns served as proof of concept for Russian disinformation efforts during the 2016 election. Had social platforms listened to Black women in 2013, says Mitchell, the dynamics of 2016 could have been very different.

The disproportionate damage inflicted on communities of color is reason enough to take these attacks seriously. Ecological thinking heightens these stakes by emphasizing a simple truth: all our fates are connected. The waves of pollution that inundate marginalized groups first will eventually come for everyone else—and when they do, they’ll have extensive research and development on their side.

By emphasizing the downstream consequences of polluted information online, ecological thinking foregrounds how identity-based attacks, half-truths, and outright lies affect bodies offline. But the people affected by all that pollution aren’t the only bodies worth considering—equally important are the bodies reporting on it. In the fall of 2017, I interviewed reporters for my Oxygen of Amplification project, which presents a set of better practices for reporting on extremists, manipulators, and abusers online. Many of these reporters told me that some of their colleagues were unable, and at times outright unwilling, to appreciate the gravity of online harassment and disinformation campaigns, given their limited personal experiences with bigoted harassment. Reporters who are white, they told me (many of the reporters leveling this critique were white themselves), and particularly reporters who are white, male, cisgender, and straight, have a tendency to approach these problems with a cavalier or patronizing attitude—as something “interesting,” a riddle to solve, as opposed to a threat to public health. In some cases, they simply do not understand the stakes for bodies under threat; or they understand, but consider their personal responsibility to start and end with the articles they publish. However well-intentioned the reporter might be, this kind of arm’s-length reporting amplifies abuse against vulnerable communities, normalizes identity-based attacks, and ensures that more harms are just around the corner.


Speaking to the impact on trans people in particular, Gillian Branstetter, a media relations manager for the National Center for Transgender Equality, offered an instructive image: “If you’re a cis person writing about transphobia, it’s like you’re a visitor to an aquarium looking at the shark through the nice, solid glass. When you’re a trans person writing about transphobia, it’s more like you’re in the ocean swimming with the shark. While you are more conscious of the threat it poses, you’re also able to take a closer, more nuanced look at it in its natural habitat.” 

For those swimming in the ocean, this shark isn’t merely interesting, and it certainly isn’t a riddle to unpack. It’s an embodied threat. In some cases, it’s a matter of life and death. And it’s not an isolated concern. The more pollution there is clouding the water, the more threats are able to lurk undetected—until it’s too late.

This is the final takeaway of ecological thinking. The biggest mistake any reporter, or any citizen, can make is to assume that we’re standing outside our environment. We are all, always, right in the middle of it. To have any hope for a different future, we must survey the landscape, consider where our own bodies stand, and ask: How might what I do here affect what happens over there?

Whitney Phillips is an assistant professor of communication and rhetorical studies at Syracuse University. She is the author of This Is Why We Can’t Have Nice Things: Mapping the Relationship Between Online Trolling and Mainstream Culture (2015) and the coauthor of The Ambivalent Internet: Mischief, Oddity, and Antagonism Online (2017).