This was the year Facebook finally faced its reckoning, not just for its role in distributing misinformation, but also for its arrogance in refusing to confront the problematic use of its platform earlier in its history.
The drumbeat started in November 2017, when the company was called before Congress to answer for its role in the activity of Russian trolls during the election. Those recriminations would only grow louder as 2017 became 2018. Whether it was avoiding hearings designed to hold the social network to account or responding to leaked emails about its data-handling policies, Facebook and its CEO Mark Zuckerberg spent most of the year on the defensive—a place some firmly believe they deserve to be.
Facebook’s Russian misadventures were the most prominent example of what researchers call “automated propaganda,” or misinformation that benefits from algorithms. But there were other examples: A former YouTube engineer told CJR about how the Google-owned video platform’s recommendation engine often provides fake news and conspiracy theories in an attempt to boost engagement. And while it may not be driven by an algorithm, Facebook-owned WhatsApp spent much of the year under fire for helping to spread conspiracy theories that led to dozens of brutal killings in India. Facebook has also been criticized by the UN for helping to fuel violence against the Rohingya in Myanmar, something that helped push the company into admitting that connecting people via social media isn’t always good.
Early in the new year, the social network was rocked by a different scandal, this one about data privacy. Thanks in part to reporting from the Guardian, it emerged that a data analytics company called Cambridge Analytica had improperly acquired personal data on more than 50 million Facebook users via a (now closed) access feature. Like the Russian troll scandal, this would also haunt the social network throughout much of the year, leading to lawsuits and eventually a hearing before the UK parliament. Meanwhile, the company continued to try to assure both politicians and the media that it was taking action to combat misinformation, including a plan to rank media outlets on the basis of trust.
The new year also saw yet another change to Facebook’s algorithm, something media companies have learned to fear (and with good reason, since those changes often involve a dramatic loss of traffic, and therefore also a loss of revenue). The change in January was designed to emphasize personal sharing, and to de-emphasize posts from professional news outlets. Did this change help some media companies wean themselves off Facebook? Perhaps. But others have seen their traffic and revenue slump as a result, and this has undoubtedly increased the pressure on some digital-media outlets to the point where several have shut down entirely, like Mic.com, which was sold in November for a fraction of its previous value.
Facebook touched off another round of media criticism when it said in May that as part of its plan to index all political advertising on the platform, it was going to include any promoted news story if it was about a political topic. This plan was roundly criticized by media outlets, including The New York Times, but Facebook remained steadfast. Towards the end of the year, however, the social network said it was doing a trial in the UK that would exclude “accredited” news organizations from the index, and eventually planned to roll that out to the US as well. Europe, meanwhile, launched the General Data Protection Regulation in May, which requires any online service to handle personal data in certain ways—a backlash in part against Facebook.
As the year drew to a close, there was one last Facebook drama, this time sparked by a UK committee looking into the company’s role in misinformation and its data-handling practices in the Cambridge Analytica case. Using a little-known feature of British law, the committee forced an American businessman involved in a lawsuit with Facebook to turn over copies of email discussions and other documents, many of them featuring comments from senior Facebook executives. The committee then took the unprecedented step of publishing 250 pages’ worth of the files, a kind of early Christmas present for journalists.
While year-end forecasts bring with them the inevitable risk of being wrong, it seems fairly easy to predict that 2019 will see a continuation of Facebook’s ongoing struggle to convince both users and the US government that it can be trusted.
During this past year, several prominent members of Congress—including Senator Mark Warner, the ranking Democrat on the Senate Intelligence Committee—made it clear they have lost patience with Facebook. “It is beyond obvious that social media platforms are simply not up to the task of voluntarily ensuring the privacy and security of their users. Congress must step in,” Warner said on Twitter as the year drew to a close and The New York Times reported on more Facebook data-privacy violations.
Those and other similar comments suggest regulations of some kind are on their way. Will Facebook be required to open up its algorithm, or increase the transparency around its decision making? Both are possible, as is a weakening of the protections that Facebook and other platforms have under Section 230, the “safe harbor” clause in the Communications Decency Act that shields them from liability for content posted by their users. As attention focused on Facebook’s enabling of Russian trolls and other bad actors, the number of voices in Washington calling for changes to Section 230 continued to increase.
Any of these measures could put pressure on Facebook, making it harder to maintain the growth that made it a favorite for investors for so many years, growth that had already begun to slow even before the Russian revelations. And if all of these things come to pass, the social network could close out 2019 a very different company than it was even two years ago—perhaps not any more humble, but likely weaker and less flexible.