
Are bots entitled to free speech?

May 24, 2018
 

We need to talk about bots. How will the courts address free-expression rights for artificially intelligent communicators? This conversation is coming, and it may push the Supreme Court to do something it has avoided: define who is and is not a journalist.

For nearly half a century, the US legal system has lived a double life. On the one hand, the Supreme Court has held that journalists do not have greater or lesser rights than other citizens (see Branzburg v. Hayes). On the other, the lower courts have generally ignored or let stand numerous laws or privileges that provide journalists special protections. These include the qualified First Amendment–based reporter’s privilege in some federal jurisdictions and fee waivers in FOI statutes.

Most of these laws and privileges were devised before the Web was publicly available, and the case law is inconsistent about whom these protections apply to online. These journo-specific measures have sometimes been useful tools for citizen publishers—bloggers, message-board posters, social-media commenters—who have faced the same legal difficulties that traditional journalists have dealt with for many years: defamation and privacy claims, efforts to compel disclosure of their sources, and so on.

Over the years, state and federal courts have tried, using a variety of approaches, to define journalism and who is a journalist. State shield laws, which give journalists varying degrees of protection from being compelled to reveal their sources, have led to many cases in which courts were pressed for such definitions. In recent years, citizen publishers have often invoked shield laws, which vary significantly in their definitions of whom they protect. The lower courts have done the hard work in this area, and so the Supreme Court has stayed out of it.


So where do bots come in? Networked technologies have already challenged journalists to distinguish their work from the countless other types of information that flood virtual spaces. Now, non-human entities have the potential to muddy the waters even more. Courts will soon have to explore whether AI communicators have rights as publishers—and whether a bot can be entitled to journalistic protections.


Citizen publishers are a good comparison. They have fared best when their work can be understood as a public good—something generally informative or related to a matter of public concern. But courts have also considered the publishers’ practices. Was their work original? Accurate? Did it include sources? Some courts have even scrutinized personal characteristics. Did the publisher go to journalism school? Is he or she employed by a news organization?

If the courts focus on the publisher, it would be difficult for AI communicators to receive journalistic protections. If the courts focus on what was published, however, AI communicators have a better chance of succeeding, particularly if their content can be seen as a public good.

 

While giving free speech to bots sounds shocking, a court decision in favor of an AI entity could benefit news organizations, some of which (the AP and Reuters, among them) have published AI-constructed stories for years. An example is the daily stock-market roundup. Many such stories could be understood as a public good (think news alerts) and thus receive journalistic legal protections—again, if the courts focused on what was published rather than how it was published.

These issues become more complex in the context of fake news and clickbait. Recent elections in the US, the UK, and France have seen bots flood social media with false or intentionally misleading content. That can hardly be seen as a public good.

All of this requires us to identify what is human about journalism—and what is fundamental about it. Could a bot programmer invoke a journalistic shield law to protect her program’s code, including the sources it used to construct a report, from compelled disclosure? If a bot files FOIA requests, should it be exempt from fees because it intends to scrape the data and publish it in tweets or on a blog?

These are the coming questions for the courts. And they are critical for all of us to take up, as we move into the fourth wave of networked communication: increasingly complex relationships between humans and artificially intelligent communicators.


Jared Schroeder is an assistant professor of journalism at Southern Methodist University, where he specializes in First Amendment law. He is the author of The Press Clause and Digital Technology’s Fourth Wave.