“How do you address the fears that the government is going to take away those assault rifles?” a reporter asked Beto O’Rourke on the Saturday morning before Labor Day, outside a campaign stop in Charlottesville, Virginia. O’Rourke was fielding questions about guns, which, after a mass shooting in his hometown of El Paso, had become central to his presidential bid. “That’s exactly what we’re going to do,” he replied.
That afternoon, back in West Texas, another gunman with another assault rifle embarked on a shooting spree, killing seven people and wounding twenty-five. Soon after the man’s name was reported, a suspicious-looking Twitter account added dramatic details to the emerging narrative: “The Odessa Shooter’s name is Seth Ator, a Democratic Socialist who had a Beto sticker on his truck.” Law enforcement officials quickly told reporters that there was no such sticker, and journalists discovered that Ator had been registered in Texas as an unaffiliated voter.
Yet by early Sunday afternoon, the unsourced rumor about Ator was already in full flight online. One Facebook account that was quick to latch on garnered 34,000 shares after repeating the story, even though its usual fare (Trump memes) typically drew likes and shares in the single digits. The original post—by an account purportedly belonging to a seventy-two-year-old woman from Mesa, Arizona, with fewer than six hundred followers—racked up more than 15,000 likes and 11,000 retweets. It was lent credibility by people such as Anthony Shaffer, a former Defense Intelligence Agency officer and member of President Trump’s 2020 campaign advisory board. Shaffer found a moral to the story: “This supports my belief that Progressives should be prohibited from owning or having access to weapons,” he tweeted. “They clearly cannot be trusted with the responsibility.”
Later on Sunday, visual “evidence” of Ator’s allegiances was posted on Facebook and fanned across platforms, shared by thousands: a picture of a white truck with a BETO 2020 decal in the back window. The image, it turned out, had been plucked from a campaign-decal store on Etsy. (The vehicle actually used in the shootings was a hijacked US Postal Service truck.) By Monday morning, when a tracker at the Democratic National Committee’s new unit combating disinformation alerted the O’Rourke campaign to the hoax, it was far too late to quell it.
Lauren Hitt, who was O’Rourke’s rapid-response director, recalls that the campaign notified Facebook and Twitter, identifying the account that had launched the rumor. “We did not hear from them—even once it was reported on,” Hitt says. “Twitter left the original post up. Facebook notified the people who shared the post that it was false. But the likely thousands of other people who saw the post but didn’t share it got no notification.”
The O’Rourke campaign confronted a hard reality that several still-active 2020 candidates have also faced, says Simon Rosenberg, a former Obama administration adviser who led a pilot effort to counter disinformation for the Democratic Congressional Campaign Committee (DCCC) during the 2018 midterms. “These things are brush fires,” he tells me. “Once those fraudulent accounts light the fire, and regular people start sharing and amplifying it, how do you combat that?”
Every presidential candidate who has garnered significant support in the 2020 polls has been targeted by disinformation attacks. Last December, when Senator Elizabeth Warren became the first major figure to announce her candidacy, Storyful, a social media intelligence firm, registered spikes in Twitter activity conducted in a “spam or bot-like manner” and found a host of messages on 4chan and 8chan, a pair of right-wing chat sites, full of advice on how to target her campaign with false claims. A few days after Warren cracked open a beer and hosted a New Year’s Eve Q&A from her kitchen, a post on Reddit showed her in front of a cabinet full of tchotchkes, side by side with what appeared to be a close-up of one of the items: a figurine of a grinning Black boy eating watermelon. Tomi Lahren, a Fox News contributor, tweeted out the images, asking, “Am I seeing this correctly?”
She was not. The Warren campaign hustled out a tweet with two photos of the kitchen cabinet, showing that the “figurine” was actually a vase bearing no racist imagery. By then, however, the blackface meme was all over Instagram and Facebook, in part courtesy of the New York City Republican Party, which added the comment: “Racist much?”
By late April it was Pete Buttigieg’s turn. Shortly after Buttigieg, the mayor of South Bend, Indiana, had neared double digits in national polls, someone purporting to be a young Republican college student in Michigan posted a story on Medium with a shocker of a headline: “Pete Buttigieg Sexually Assaulted Me.” As soon as the post went up, Big League Politics, a far-right site, took it to Twitter: “Uh oh: BREAKING: Media Darling Buttigieg Accused of Sexual Assault.” The story was a hoax, engineered by a pair of right-wing trolls, Jacob Wohl and Jack Burkman, who had previously announced “explosive” (and just as fabricated) evidence of sexual misconduct by Special Counsel Robert Mueller and this fall trotted out an absurdly fake contention that Warren had had an affair with a twentysomething bodybuilder.
The same day as the Buttigieg attack, Joe Biden, the former vice president, was endorsed by the International Association of Fire Fighters. On Twitter, @realDonaldTrump fumed to his sixty-five million followers about the “Dues Sucking firefighters leadership” backing Biden “even though the membership wants me.” Two days later, the president went on a tear, retweeting almost sixty responses to a post by Dan Bongino, a former Secret Service agent and Fox News commentator, in which he claimed, “NONE of the Firemen I know are supporting Joe Biden for President.” Trump’s retweet-storm was widely reported, yet only a couple of outlets pointed out that some, if not most, of the accounts he promoted were fake. “I looked at the first twenty-five accounts he sent out,” Rosenberg says, “and none of them looked real. Not one.” Darren Linvill, an associate professor at Clemson University who analyzes social media disinformation, also found fakes among the tweets endorsed by Trump. It was yet another sign, Linvill told ABC News, that disinformation in the 2020 campaign was sure to be “orders of magnitude worse than it was in 2016.”
Given the Russians’ astonishing success in helping to boost Trump to the presidency, it was only a matter of time before there would be domestic imitators. In 2017, the first major American cyberpropaganda efforts were launched by Democratic operatives and tech firms targeting Roy Moore, a Republican running for Senate in a special election in Alabama. In a secret effort dubbed Project Birmingham, they set up a Facebook page on which fake conservatives expressed their worries about Moore, encouraged Republicans to write in other candidates, and ran a false-flag operation to start a rumor—picked up on by mainstream media outlets—that Moore’s campaign was being promoted by Russian bots. A separate stealth effort created a “Dry Alabama” Facebook page, designed to fool voters into thinking that Moore wanted to “Re-Enact Prohibition.” And a pro-Democratic firm, Tovo Labs, experimented with what it called “ethical” applications of Russian tactics—driving conservative Alabamians to real articles critical of Moore, for instance—to lower Republican turnout.
The results exceeded expectations: Moore lost in a stunning upset to a first-time Democratic candidate, former US attorney Doug Jones. The margin was just 21,924 votes, out of more than 1.3 million cast. Tovo Labs boasted on Medium that it was “proud to have perhaps played a role” in the outcome, claiming that its efforts had helped drive down moderate-Republican turnout by 2.5 percent, compared to the 2014 midterms in Alabama, and suppressed conservative turnout by 4.4 percent. Whatever role the cyber ops actually played, the systematic exploitation of lax regulations on social media platforms had now not only helped win Trump the White House, but also helped send a Democrat to the Senate from Alabama. In the midterms, the race to a new bottom in American elections would be on.
Republicans had the most notable successes with disinformation campaigns in 2018, adopting practically every tactic in the Russian playbook and cooking up a few more to elude social media platforms’ half-hearted efforts to avoid a repeat of 2016—all with timely boosts from Trump and his already up-and-running 2020 reelection campaign, led by Brad Parscale, his former digital media director. As in 2016, both the beleaguered Democratic campaigns and the press were largely left flat-footed, trying to fend off a dizzying array of social media attacks. Strong, rapid-response denials from campaigns and thorough fact-checking of wild claims by media outlets were no match for the volume and velocity of disinformation that dominated the midterms.
The chief targets, not surprisingly, were progressive Democrats running in closely contested races for the House, Senate, and statewide office. None came under more sustained fire than the gubernatorial candidates Stacey Abrams, in Georgia, and Andrew Gillum, in Florida. False stories, automated and fake accounts, hoaxes, and conspiracy theories were used to paint experienced center-left officeholders as militant Black radicals with virulent socialist agendas. And this time around, it wasn’t just shadowy operators doing the dirty work. In late October, just a couple of weeks out from the election, as Gillum took part in a debate with his GOP rival, a right-wing troll named Amy Mek broadcast to her 250,000 Twitter followers: “Gillum is supported by Communist Donors, Militant Socialists, Muslim Brotherhood & PLO terrorists trying to incite a revolution in America & destroy Trump.” The same month, Wackalou Lehmann, an obscure Facebook account that had some 680 followers, posted a poorly photoshopped image of Abrams, also suggesting ties to the Muslim Brotherhood, that received some 24,000 shares. The latter was easy to produce and effective: studies have shown that visual disinformation lingers long in people’s minds even if they realize the image came from an unreliable source; researchers call this the “sleeper effect,” because those exposed to an image tend ultimately to forget that it ever struck them as dubious.
Toward the end of the 2018 cycle, Democratic candidates’ supposed efforts to turn out undocumented voters represented a common theme of cyber-fearmongering—one that is likely to recur in 2020. According to a study published in April by Bully Pulpit Interactive, a communications agency, 54 percent of the thousands of Facebook ads purchased early on by the Trump reelection campaign had focused on immigration. The president’s relentless hyping of the “migrant caravan” that set out from Honduras, inflamed by right-wing trolls sharing misleading images lifted from stock-photo archives, merged with older false claims of widespread “voter fraud” to inflict last-minute damage on several candidates in tight races.
None was hit harder than O’Rourke, in his unexpectedly competitive challenge to Republican senator Ted Cruz, in Texas. As Election Day approached, text messages appearing to have come from the O’Rourke campaign popped up on his supporters’ phones, asking for volunteers to drive undocumented immigrants to the polls. The texts had been sent by a Republican activist who had joined O’Rourke’s campaign and gained access to its texting software. On social media, rumors spread about the caravan being paid to vote for O’Rourke by the busload. One of the most viral of these messages was issued by Larry Schweikart, a historian and coauthor of How Trump Won: The Inside Story of a Revolution (2017), who told his fifty thousand Twitter followers that “illegals” had been caught at the Texas border, cash in hand, on their way to vote.
When BuzzFeed News reached Schweikart to ask about the provenance of this allegation, he claimed it came from FreeRepublic.com, a right-wing website, which turned out to be untrue. Was he worried about spreading disinformation? the reporter asked. “Well, it’s just a report,” Schweikart said. “Hey, fake news, right? I’m just countering what goes on on the other side.” No biggie.
Four years ago, fake-news meddling in America’s presidential race was primarily an external threat, with the Russians’ Internet Research Agency leading the way and the Trump campaign eagerly tagging along. That won’t be the case in 2020, when the most malicious and effective disinformation will be homegrown—planted and artificially amplified by for-profit troll farms, freelance cyberwarriors swapping notes in chat rooms, political parties and PACs, and campaigns up and down the ballot. As Siva Vaidhyanathan, a media studies professor at the University of Virginia, wrote in The Guardian this summer, “We won’t need Russia in 2020. We will hijack our democracy ourselves.”
For those who care about fair elections, healthy democratic discourse, or objective facts, hand-wringing despair is the most natural response to what we’re going to see. There is a consensus among cyber-experts when it comes to predictions for this election cycle, and they’re all ugly: Americans will be mass-bewildered by new forms of disinformation, including “deepfakes” far more convincing than the doctored video of House Speaker Nancy Pelosi on a faux bender. Partisans on both sides, foreign and domestic, will find slicker and harder-to-detect methods for proliferating rumors, distortions, smears, and junk news—and better ways to reach their target audiences for particular messages. Dark-money Facebook ads, aiming to direct people to propaganda sites, will multiply. The big platforms will sniff out some fake and foreign accounts and announce their actions with self-congratulatory press releases while millions more sneak through. And Trump’s campaign, already investing more in social media buys than the entire Democratic field combined, will cook up new models of creative deception, distraction, and division. It is telling that Parscale is now leading the entire Trump reelection effort, as campaign manager.
Both the press and elected officials have the tools to decrease the scope and effect of mass disinformation. “There’s no magic-wand, Harry Potter solution,” says Rosenberg, who has advised large media organizations and campaigns looking for effective countermeasures. “But we’re not powerless.” Both parties are building on a Democratic social media monitoring experiment Rosenberg led in 2018 that persuaded Twitter to remove some ten thousand accounts posing as those of progressives. (The phony accounts staged a campaign to persuade liberal men to sit out the election and make women’s votes “count more.”) The 2020 campaigns will now be doing the same. “They’re going to have to have staff monitoring social media 24-7 in order to shut down accounts and do takedowns,” Rosenberg says. “If you respond to stuff two or three days after it starts to spread, you’re going to lose the fight the way O’Rourke lost it.” Still, neither the Democratic National Committee nor its Republican counterpart has forsworn the use of dark cyber arts.
The central takeaway from two cycles of disinformation is that debunking lies once they’ve spread might be necessary—but can also serve to spread them further. Identifying the sources of malicious attacks and reporting them to social platforms (in the case of campaigns) or to the public (in the case of the press) is far more effective. Popular Information, a subscription newsletter launched last year by Judd Legum, the founder and former editor of ThinkProgress, reported in September that a popular Facebook page, “I Love America”—full of inauthentic and manipulative right-wing, pro-Trump, and “patriotic” memes—was being run and managed by a Ukrainian network. After Legum broke the story, Facebook shut the page down.
A couple of months earlier, Popular Information had uncovered the truth behind one of the Trump campaign’s own deceptions: a series of video ads purporting to feature testimonials from American millennials. “I could not ask for a better president of the United States,” says a typical one, starring “Tracey from Florida,” who is seen walking along a beach. The young Americans, Legum found, were actually stock-video models for Turkish, Brazilian, and French companies. The voice-overs are fake; so are the images of small businesses in the ads. The debunking went viral, and the videos were removed from the internet. The summer brought another successful takedown: the second-largest single buyer of pro-Trump Facebook ads (after the Trump campaign), the Epoch Times, a conspiracy-promoting paper of the Falun Gong movement, was banned after Legum and NBC News reported on its violations of Facebook rules.
And in June, the New York Times reported that JoeBiden.info, an anonymously run page that was drawing more traffic than Biden’s official campaign site, was the brainchild of Patrick Mauldin, a Trump digital staffer who’d boasted on Reddit that he wanted to “red pill” as many Democrats as possible. The site remains up, but its traffic took a dive after the flood of reports that followed in the Times’ wake.
There are, then, some encouraging signs that both campaigns and journalists have begun, three years after Russia exploited the unregulated internet to such devastating effect, to develop defense mechanisms. Cyberpropaganda thrives in the dark, obscuring both its origins and its intentions. Bringing its sources and methods into the light is the best—really, the only—disinfectant that our democracy can muster, as long as social media platforms continue to put profits ahead of the public interest. There’s no question that disinformation will get smarter in 2020. But so can the rest of us.