Facebook has put on what amounts to a full-court press over the past several days, a move that appears to be aimed at convincing Congress it is working hard to crack down on misinformation ahead of the upcoming US midterm elections. But is it really? Tuesday’s announcement that the company shut down 32 pages and accounts for what it calls “coordinated inauthentic behavior” sounded impressive, and the blog post describing the move was filled with colorful details. On closer examination, however, the shutdown looks like fairly small potatoes, which makes the whole thing feel more like a PR campaign than anything substantive.
For a social network with 2 billion users uploading billions of status updates, links, and other content every day, 32 pages and accounts amount to a drop in a vast ocean of information. Even the most popular event created by any of the pages only got about 1,400 people saying they would attend, and most of the content the pages posted had little or nothing to do with politics.
Facebook made a point of saying that it wanted to be as transparent as possible about the steps it was taking, noting that it had shared details with Congress, with other tech companies, and with researchers such as the Digital Forensic Research Lab, and that it had published a series of blog posts written by senior executives. And yet, this is the same company that has been repeatedly criticized by the UK government for not sharing enough information about its connections to Cambridge Analytica and that company’s use of private data. In a recent report, the UK parliamentary committee investigating disinformation said:
What we found, time and again, during the course of our inquiry, was the failure on occasions of Facebook and other tech companies, to provide us with the information that we sought. We undertook fifteen exchanges of correspondence with Facebook, and two oral evidence sessions, in an attempt to elicit some of the information that they held, including information regarding users’ data, foreign interference and details of the so-called ‘dark ads’ that had reached Facebook users. Facebook consistently responded to questions by giving the minimal amount of information possible, and routinely failed to offer information relevant to the inquiry.
It’s easy to see why the social network would want to at least give the impression that it is hard at work fighting misinformation and malicious behavior. The congressional grilling in the aftermath of the 2016 election over the activities of the Internet Research Agency, a Russian-operated troll farm, forced CEO Mark Zuckerberg and other senior executives to embark on the 2018 Facebook Apology Tour, during which dozens of senators and representatives took turns admonishing them for allowing their platform to be used in an attempt to destabilize American democracy.
This experience was more than just embarrassing. It raised the possibility that Congress could decide to regulate the social network in a variety of unpleasant ways, up to and including limiting the protection it currently enjoys under Section 230 of the Communications Decency Act. That’s the clause that effectively shields Facebook and other social platforms from legal liability for content posted by their users.
A recent discussion paper circulated among members of Congress and the tech community by Democratic Senator Mark Warner raises that as one of a number of potential moves—along with forcing the platforms to label automated accounts, requiring them to put a price tag on the user data they collect, and implementing a privacy framework similar to the EU’s General Data Protection Regulation. The proposals carry no regulatory weight, but they are signposts indicating where some politicians would like to go. And Facebook would very much like to avoid some or all of those avenues if it possibly can.