Analysis

Twitter’s bot problem isn’t going away, and Virginia election is Exhibit A

November 7, 2017
Image by Greyweed via Flickr.

Most of us are familiar with the idea that fake and automated Twitter and Facebook accounts, many of them run by trolls linked to the Russian government, created and amplified misinformation in an attempt to interfere with the 2016 election. But this wasn’t just a one-off: Bots continue to try to influence public opinion in all kinds of ways.

To take one of the most recent examples, there is some evidence that automated Twitter accounts have been distributing and promoting controversial race-related content during the gubernatorial race in Virginia, which is underway today. According to a study by Discourse Intelligence, whose work was financed by the National Education Association, the effort involved more than a dozen partially or fully automated bots.

The activity promoted a video advertisement produced by the Latino Victory Fund, which shows a child having a nightmare in which a supporter of Republican candidate Ed Gillespie chases immigrant children in a pickup truck decorated with a Confederate flag. The study said the accounts had the potential to reach more than 650,000 people.

The Gillespie campaign said the ad promoted the idea that the candidate was anti-immigration, and the Latino Victory Fund (operating independently of Democratic candidate Ralph Northam’s campaign) eventually took it down. But automated accounts continued to distribute it. It’s not clear from the report whether this activity was coordinated and, if so, by whom.

One of the biggest problems with bot-promoted messages is that if they become loud or persistent enough, they are often picked up by traditional media outlets thirsty for traffic-generating clickbait, which exacerbates the problem by giving the misinformation an air of legitimacy.

In one prominent case last year, a fake and largely automated Twitter account belonging to someone who pretended to be a Trump-loving thirtysomething named Jenna Abrams was widely quoted, not just on right-wing news sites such as Breitbart and conservative-leaning networks like Fox News, but in plenty of other places as well, including USA Today and even The Washington Post.

In each of these cases, the trajectory of the misinformation reinforces just how fragmented and chaotic the media landscape has become: Content from notorious troll playgrounds like 4chan or Reddit makes its way to Twitter and/or Facebook, gets promoted there by both automated accounts and unwitting accomplices, and then gets highlighted on news channels and websites.

Mainstream media outlets like Fox News, for example, helped promote the idea that “antifa,” or anti-fascist, groups were planning a weekend uprising in an attempt to overthrow the US government, an idea that initially got traction on Reddit and 4chan and appears to have originated with alt-right and fake-news sites such as InfoWars.

After the Texas church shooting this weekend, tweets from alt-right personality Mike Cernovich, who was also instrumental in promoting the so-called “Pizzagate” conspiracy theory that went viral during the 2016 election, were highlighted in Google’s Twitter results “carousel,” which appears at the top of the search page.

The tweets contained misinformation about the alleged shooter’s background, including reports that he was a member of an antifa group and had recently converted to Islam.

Google has come under fire—and deservedly so—for a number of such cases, including one in which a misleading report from 4chan appeared at the top of search results for information on the mass shooting in Las Vegas. The company apologized, and senior executives have said privately that they are trying to avoid a repeat of such behavior.

The search giant got off relatively easily at the November 1 hearings before both the Senate and House Intelligence Committees, with most of the criticism and attention focused on the behavior of social networks like Twitter and Facebook. And while Google might argue that it’s Twitter’s fault if misinformation is promoted by trolls during the election, it’s also Google’s problem if those results show up high in its search results.

Google staffer Danny Sullivan admitted in a Twitter thread on Monday that the search giant needs to do more to stop such content from showing up, but he said the problems involved are complex and will take time to solve.

The major tech platforms all say they are doing their best to make headway against misinformation and the fake and automated accounts that spread it, but critics note that, until recently, those platforms denied much of this activity was even occurring. Facebook, for example, initially denied that Russian-backed accounts were involved in targeting US voters.

At the Congressional hearings, representatives for Google, Facebook, and Twitter all maintained that fake and automated activity is a relatively small part of what appears on their networks. But some senators were skeptical, and with good reason.

Twitter, for example, reiterated to Congress the same statistic it has used for years, which is that bots and fake accounts represent approximately 5 percent of the total number of users, or about 15 million accounts. But researchers have calculated that as much as 15 percent of the company’s user base is made up of fakes, which would put the total closer to 50 million.
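
Worked out as a quick Python sketch, the gap between those two estimates looks like this (the totals below are rough approximations implied by the figures in this story, not official Twitter data):

    total_users = 15_000_000 / 0.05           # the 5 percent figure Twitter cites implies roughly 300 million users
    fakes_at_15_percent = 0.15 * total_users  # the researchers' 15 percent estimate works out to roughly 45 million fakes
    print(round(total_users), round(fakes_at_15_percent))  # 300000000 45000000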

Whether any of this activity is actually influencing voters in one direction or another is harder to say. Some Russian-influenced activity during the 2016 election appeared to be designed to push voters toward a particular candidate, but much of it—as described in Facebook’s internal security report, released in April—seemed designed to cause general chaos and uncertainty, or to inflame political divisions on issues like race.

It’s also difficult (if not impossible) to say exactly how much of this activity was organized by malicious agents intent on disrupting the election, and how much came from random bad actors acting on their own.

The Internet Research Agency, a Kremlin-linked entity that employed a “troll army” to promote misleading stories during the election, is the best known of the organized actors using these methods. But there are undoubtedly more, both within and outside Russia, and all three of the tech giants admitted at the Congressional hearings that they have only scratched the surface when it comes to finding or cracking down on this kind of behavior.

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.