For Big Tech, what follows ‘free expression’?

November 5, 2020

For Big Tech, free expression was always part of the business plan. 

Since the goal of any social media company is to host as much user-generated content as possible, and since monitoring content and deciding what to take down is both expensive and controversial, embracing free speech was always good for the bottom line. It also allowed the companies to stay out of the political fray by making the case that their hands-off approach aligned with an essential American value. 

But free expression is no longer the guiding principle at Big Tech, as the hearings before the Senate Commerce Committee last month demonstrated. In those hearings, Twitter’s Jack Dorsey, Facebook’s Mark Zuckerberg, and Google’s Sundar Pichai were compelled in the most politicized and polarized environment to explain their companies’ content-moderation policies—and, in Dorsey’s case, why Twitter put a label on a tweet from President Trump and not one from Ayatollah Khamenei. 

Regardless of what one thinks of each company’s individual policies and specific decisions, what was clear was that the companies are removing, labeling, and downgrading huge amounts of content based on their own rules, as they interpret them. As a result, there may be less disinformation, harassment, and other forms of unwanted online speech on their sites. But it also means the tech companies are exactly where they did not want to be: mired in a partisan battle in the aftermath of a brutal US election in which information—and misinformation—still may decide the outcome.

How did we get here? It’s worth reviewing the history. 

Because of its decentralized structure and the libertarian ethos of its founders, “the First Amendment is hardwired into the internet,” Danny O’Brien, now director of strategy at the Electronic Frontier Foundation, explained to me several years ago when I was researching a book on new forms of censorship. In 1996, this framework was codified in law when Congress tucked into the Communications Decency Act a provision known as Section 230, which protects social media companies from liability when they publish or host content produced by others. These uniquely broad legal protections helped make the United States a “safe haven” for online speech and are one reason the world’s largest tech platforms continue to be based in the US. 

But as the internet became a unitary global system, the companies found that the First Amendment was not always so popular in the rest of the world. While internet utopians believed a free and unfettered internet could liberate the world from tyranny, governments around the world had other ideas. 

China wanted political censorship, and made acquiescence part of the cost of entry into the market. Governments in the Muslim world wanted to suppress “blasphemy” and mockery of their religious beliefs (as well as criticism of their governments). Turkey wanted companies to turn over data so it could track, arrest, and prosecute coup plotters and political dissidents. Europeans, while welcoming a broad range of speech, wanted to remove hate speech and derogatory content, while also protecting privacy (for example, “the right to be forgotten”). 

Meanwhile, militant and criminal groups began to use social media platforms to communicate with their supporters, recruit followers, and terrorize communities. Governments exploited open platforms to plant disinformation, manipulate public opinion, drown out critics, and undermine trust. Social media had become Big Tech’s Frankenstein monster, wandering the world, wanting to be loved, but wreaking havoc.

The election of Donald Trump, and his use of the platforms to bypass media gatekeepers to spew disinformation and lies, brought the issue home to the United States. The free-speech consensus collapsed as Democrats began pressuring the social media companies to make judgments about the veracity of political speech. Elizabeth Warren expressed it most directly, threatening to break up Big Tech and blasting Facebook for “spreading Trump’s lies and disinformation.” 

Under pressure to remove false political advertising, Mark Zuckerberg tried once again to mount a free-expression defense, declaring during a speech at Georgetown last year, “We can continue to stand for free expression, understanding its messiness, but believing that the long journey towards greater progress requires confronting ideas that challenge us.” 

He didn’t quite get laughed out of the room, but almost. 

In the weeks preceding the election, there was a fierce debate about decisions by Twitter and Facebook to limit the circulation of a controversial New York Post article purportedly based on leaked Hunter Biden emails. Some have suggested that the companies acted responsibly by limiting the circulation of misinformation. Others have called the decision a scandal and an affront to the First Amendment.

Regardless of which side you come down on, the companies are in an impossible position, because the most important source of misinformation is no longer a rogue state or a terrorist organization. It’s the president of the United States. And the problem cannot be solved with more advanced algorithms or armies of content moderators based in the Philippines. Instead, the companies are compelled to make judgments and navigate between Democrats who want the companies to exercise greater control over content and Republicans who have weaponized information. 

The first step for Big Tech is articulating a framework that accurately describes their new model. For the companies to continue to say they are guided by free-expression principles does not cut it, given how much control they are exercising over content. Perhaps the new goal should be to create an online information environment that serves the public interest. The companies should welcome a political and public debate as to how the public interest is defined. 

There is no single solution to the problem of online disinformation, but there are many interesting ideas—some that broadly target the companies’ dominant position and power, and others that are more narrowly tailored. These include using antitrust to break up the companies (see Tim Wu, The Curse of Bigness); enhancing privacy and limiting data collection as a way of disrupting the business model (see this report from Ranking Digital Rights); insisting on greater transparency and adherence to international human rights standards (see David Kaye, The Speech Police); and allowing for greater liability under Section 230 (one example is the new bill introduced by Reps. Tom Malinowski and Anna Eshoo). 

The companies have also come forward with intriguing but naturally less disruptive ideas, including private arbitration (see the Facebook Oversight Board) and Dorsey’s suggestion that, in the future, outside companies and organizations could develop their own publicly available algorithms on Twitter.

It will probably take a combination of all these approaches, plus some creative thinking and trial and error, to produce meaningful reform. That, in turn, requires a functional political system, in which give-and-take and negotiations are guided by shared commitment to the public interest and an awareness that the US, as the dominant force in a global information system, has responsibilities to people around the world. 

At the October hearing, the Senate Commerce Committee sought to put the leaders of Big Tech in the hot seat. To a certain extent, the senators succeeded. But, in the process, they once again highlighted their own dysfunction and their inability to guide an informed public policy debate. The best possible path out of this electoral morass is not only a clear victor in the presidential race, but the creation of a new political culture capable of managing these complex debates. Without that, information will continue to be weaponized, and the social media companies will continue to be both participants and pawns in a never-ending political battle from which we all suffer. 

Joel Simon is the founding director of the Journalism Protection Initiative at the Craig Newmark Graduate School of Journalism.