Tow Center

Mapping the battleground for the next information war

May 9, 2019
 

In 1970, the Canadian cultural theorist Marshall McLuhan famously predicted that World War III would be “a guerrilla information war with no division between military and civilian participation”—a war waged in cyberspace, not on a defined battlefield. His prediction resonates with anyone trying to make sense of our opaque information environment, especially reporters and editors who now find themselves drafted as frontline recruits.

For journalists, this era of information warfare presents both a personal and an existential threat. It also raises myriad new questions about how the rules and ethics of journalism should change. To lean again on McLuhan: “Electrical information devices for universal, tyrannical womb-to-tomb surveillance are causing a very serious dilemma between our claim to privacy and the community’s need to know.”

According to research by Amnesty International, women across the political spectrum face frequent online harassment, and for women of color that harassment is exponentially worse. Reporters who investigate extremism have found their own information and the identities of their families posted online, and their editors and colleagues harassed. Outside the US, the troll armies of authoritarian governments swarm against the critical press, as Rappler founder Maria Ressa found out in the Philippines when President Rodrigo Duterte activated a Facebook horde against her. What happens online is moving, increasingly, into the real world. As a number of recent mass shootings have shown, the interplay between online messaging and offline action creates a new realm of danger and difficulty for reporters.

Ethical newsrooms train reporters to keep themselves and their sources safe in physically hostile environments. The new challenge for newsrooms is extending those practices to cyberspace, as both journalists and their sources risk online and offline consequences that flow from their work. The root of the problem is well known, but the solutions are underdeveloped. Companies like Facebook and Google have made billions of dollars from platform designs that make it easy to publish but difficult to detect misinformation.

Fake news, fake accounts, bots, real accounts that look like bots, doxxing, trolling, propaganda, targeted harassment, hidden influence, and the vast umbrella of misinformation have occupied much of our media and political coverage over the past two years. And there is more to come. Artificial intelligence enables far more automated activity, which will reshape communications even more profoundly than the mobile social web did. We can scare ourselves with the possibilities of “deepfake” videos, hyperrealistic representations of people and places generated instantly to fool our senses. Facebook unveiled its war room for the upcoming European elections at its European headquarters in Dublin this week. Public relations theater aside, it is remarkable that an American company now recognizes itself as a potential single point of failure in an international election.

Every part of the news process is affected in some way by the externalities of a digital environment, from funding models and reporting processes to hiring practices and diversity of participation. Journalism’s editorial codes and training are lagging behind reality. In a compelling essay, “The Digital Maginot Line,” misinformation expert Renee DiResta describes how information security measures often end up fighting the “last war”:

…in the United States, for example, we remain focused on Election 2016 and its Russian bots. As a result, we are investing in a set of inappropriate and ineffective responses: a digital Maginot Line constructed on one part of the battlefield as a deterrent against one set of tactics, while new tactics manifest elsewhere in real time…

Few newsrooms have systematic verification practices in place, and most do not have an ethics policy for the use of artificial intelligence. Recent research from the Tow Center also reveals that, of 48 newsrooms surveyed, only one had put any thought into how to archive distributed digital material. Understanding the threats to credibility, to access, and to the safety of journalists and the public record is just the beginning. Putting in place security and ethical practices that make journalism more than an ineffectual trench in the coming information wars will be the challenge of a generation.

Emily Bell is a frequent CJR contributor and the director of Columbia’s Tow Center for Digital Journalism. Previously, she oversaw digital publishing at The Guardian.