The Media Today

A new study reignites the debate over Twitter, bots, and 2016

January 19, 2023

Donald Trump’s victory in the 2016 presidential election saw the emergence of a virtual cottage industry—or perhaps even a real, full-size industry—bent on distributing blame for his win. Social media was one of its primary targets. The argument—in congressional hearings and academic treatises alike, not to mention on social media—was that “fake news” spread by Russian trolls helped get Trump elected. More recently, however, various researchers have poked holes in this argument. The latest to do so published a study in Nature last week titled “Exposure to the Russian Internet Research Agency foreign influence campaign on Twitter in the 2016 US election and its relationship to attitudes and voting behavior.”

Six researchers, from universities in the US, Ireland, Denmark, and Germany, coauthored the study, which correlated survey data from some fourteen hundred respondents with Twitter data showing those users’ exposure to posts from Russian foreign influence accounts. The study reached a number of striking conclusions. Exposure to Russian disinformation, it found, was heavily concentrated: 1 percent of Twitter users accounted for 70 percent of exposure, and that exposure skewed heavily toward users who strongly identified as Republicans. The researchers also found that exposure to Russia’s influence campaign was “eclipsed by content from domestic news media and politicians” inside the US. In sum, the study said, “we find no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior.”

For some observers, the study served as vindication of the belief that anguish over foreign disinformation was misplaced, or even fraudulent, from the beginning, and was weaponized as an excuse to force social media to censor information. Glenn Greenwald, the noted Twitter gadfly, said (on Twitter, of course) that “Russiagate was—and is—one of the most deranged and unhinged conspiracy theories in modern times. It wasn’t spread by QAnon or 4Chan users but the vast majority of media corporations, ‘scholars,’ think tank frauds, and NYT/NBC’s ‘disinformation units.’” (Elon Musk, Twitter’s owner and CEO, responded: “True.”) Other observers argued, to the contrary, that the methodology of the Nature study might not support such a sweeping conclusion about foreign influence in 2016: “Why would you build the study around Twitter when Facebook was the focus for these influence campaigns in 2016?” Jack Holmes, a writer at Esquire, asked. “This is like studying a minor stream to learn about the Mississippi River,” Holmes added.

The Nature study suggests that our skepticism shouldn’t stop with the Russians and Twitter: its authors argue that election campaigns in general have a poor record of influencing political behavior. “The large body of political science research that examines the effects of traditional election campaigns on voting behavior finds little evidence of anything but minimal effects,” they say. The researchers do acknowledge, however, that the Russian bot activity could have created second-order effects. Debate over whether the 2016 election was rigged has “engendered mistrust in the electoral system,” they argue. “In a word, Russia’s foreign influence campaign on social media may have had its largest effects by convincing Americans that its campaign was successful.”

Others have argued that the influence of social media, if it exists, is just a small part of a much broader problem with the political ecosystem. In 2018, a group of academics, including Yochai Benkler of Harvard’s Berkman Klein Center (who later wrote for CJR about mainstream-media narratives around mail-in voting ahead of the 2020 election), published a book titled Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. The authors argued that it’s misleading to try to pinpoint a specific actor or vector, such as Russian agents or Twitter or Facebook, as the cause of the election outcome in 2016. Mike Masnick, of Techdirt, summarized the book in a review: “It’s not that the social media platforms are wholly innocent. But the seeds of the unexpected outcomes in the 2016 US elections were planted decades earlier, with the rise of a right-wing media ecosystem that valued loyalty and confirmation of conservative values and narratives over truth.”

In 2019, I wrote about a study by Brendan Nyhan of the University of Michigan (who has also written for CJR in the past) looking at the reach of misinformation. Nyhan said that, according to his data, so-called “fake news” reached only a tiny proportion of the population before and during the 2016 election. Nyhan concluded, much as the authors of the recent Nature study did, that “no credible evidence exists that exposure to fake news changed the outcome of the 2016 election.” In September 2020, Joshua Yaffa, the Moscow correspondent at The New Yorker, also explored whether Russian disinformation was really as threatening as many seemed to think, arguing that the online trolling tactics of the Internet Research Agency seemed aimed at “scoring points with bosses and paymasters in Russia as much as influencing actual votes” in the US. (My CJR colleague Jon Allsop wrote about Yaffa’s article at the time.)

Whatever we might think of its conclusions, the Nature study, like other discourse around the 2016 election, forms just a small part of a much broader, still very relevant discussion around how (or whether) social media content of any kind, including disinformation, affects our behavior—and what (if anything) social media platforms should do about it. In November 2021, Joe Bernstein, now a reporter at the New York Times, wrote a cover story for Harper’s magazine arguing that much of the discourse around online misinformation has tried to paint users of social media as gullible rubes who are easily manipulated by sophisticated algorithms, and that terms like disinformation “are used to refer to an enormous range of content, ranging from well-worn scams to viral news aggregation.” In their crudest use, Bernstein wrote, such terms “are simply jargon for ‘things I disagree with.’”

At the time, Bernstein’s article sparked a widespread conversation within the media industry about whether disinformation, while a real problem, had become a subject of panic and hype. That conversation has continued. This week, Alex Stamos, Facebook’s former chief security officer and now the director of the Stanford Internet Observatory, said in an interview with Peter Kafka, of Vox, that he thinks there has been a “massive overestimation of the capability of mis- and disinformation to change people’s minds.” Like Bernstein, Stamos acknowledged that disinformation is a problem. But he thinks that we need to reframe how we look at it—to see disinformation less as something that is inflicted on passive victims, and more as a problem of supply and demand. “We live in a world where people can choose to seal themselves into an information environment that reinforces their preconceived notions,” Stamos said. “In doing so, they can participate in their own radicalization. They can participate in fooling themselves, but that is not something that’s necessarily being done to them.”

Stamos also told Kafka that he believes there is a legitimate complaint behind the so-called Twitter Files—internal documents that Musk has recently released, through journalists like Matt Taibbi and Bari Weiss, to help him make the case that his company’s previous management bent to the whims of the US government when it came to removing information about covid-19 and other sensitive topics. (I wrote about this last week.) “Twitter was trying too hard to make American society and world society better, to make humans better,” Stamos said, summarizing the complaint he sees underlying the Twitter Files. Rather than removing content or banning accounts, Stamos argues, social media platforms should focus on whether their algorithms are actively making things worse. “If somebody is into QAnon, you do not recommend to them, ‘Oh, you might want to also storm the Capitol,’” Stamos offered. “That is very different than going and hunting down every closed group where people are talking about ivermectin.”

If nothing else, the Nature study and arguments like Bernstein’s and Stamos’s serve as a useful corrective to what can often seem like an unhealthy obsession with the idea that Russian bots and “fake news” are at the root of every evil in our politics. Paying attention to foreign actors meddling in US politics is important, and so is tracking and debunking dangerous online misinformation. But blaming Twitter and Facebook for all of our political and social ills is reductive in the extreme. As the Wall Street Journal put it in an editorial on the Nature study: “Maybe the truth is that Mr. Putin’s trolls are shouting into the hurricane like everybody else.”


Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.