Tow Center

Gaming the algorithm

Social media platforms’ control of the public sphere has proved remarkably open to abuse.

September 19, 2022

This article was featured in the Tow Center for Digital Journalism’s weekly newsletter.

The term “disinformation,” and riffs on tackling, stemming, or combating it, have become battle cries from political figures including former president Barack Obama ahead of consequential elections in Brazil and the United States.

What exactly are we talking about when we discuss disinformation? It’s often defined as deliberately false information, as opposed to misinformation, which is false information shared unintentionally. But are these helpful lenses for studying the information environment? What about propaganda, which can be true or false? And where does power fit into the equation?

We spoke to Courtney Radsch, who prefers the term “information operations.” Radsch is a fellow at UCLA’s Institute for Technology, Law, and Policy and a senior fellow at the Center for International Governance Innovation. 


You can read Radsch’s full recent paper, “Artificial intelligence and disinformation—state-aligned information operations and the distortion of the public sphere,” published by the OSCE, here.

This conversation has been edited and condensed for clarity.

 

JB: How would you explain the term “information operations” to those who might not have heard of it?

CR: Information operations refers to the manipulation of social media platforms and of our internet infrastructure in order to promote particular types of information. They’re called operations because they’re coordinated and typically require some funding or resources to make them happen, which distinguishes them from, say, organic movements and other types of engagement. A lot of different terms have been used in the past several years: disinformation, misinformation, malinformation, computational propaganda, and so on. I find information operations a helpful term because it doesn’t get into the truth or falsity of the information; that doesn’t change the dynamics of the fact that it is an information operation. Also, much of the current research into disinformation doesn’t really look at the long history of propaganda, so I wanted to use a term that refers back to and acknowledges this long tradition of studying and examining how propaganda is used. Again, propaganda can be true or false, or somewhere in between.

 

JB: And trying to distort the public sphere is a crucial part, right?

CR: Exactly. The idea is that in the public sphere, everyone is theoretically supposed to be equally able to have a voice and have their perspective heard. We know from long-standing critiques of public sphere theory that that’s imperfect, but the distortion now comes from the fact that it is such a pay-to-play system. Many of the platforms that intermediate the public sphere are driven by engagement algorithms and other forms of artificial intelligence that are subject to manipulation. Actors can pay to promote information, or pay public relations firms to manipulate the results, play to the design of the platforms, and create information that does well based on the logic of a given platform.

 

JB: To bring that into concrete terms, what are some specific examples of algorithms being gamed around the world, and how has that steered public opinion?

CR: The goal of distorting the public sphere has become a driving objective for political actors around the world. According to research from the Oxford Internet Institute, at least seventy countries have seen computational propaganda, essentially information operations, weaponized to promote a certain point of view or agenda. The way this works is that it influences framing and agenda setting, crucial roles that the news media, other institutional actors, and even social movements have typically played.

We know, for example, that elections all over the world are targets for information operations. We’re seeing that right now in Brazil, where disinformation has become a major challenge. We’ve seen Facebook and Google roll out new efforts to deal with that [which have been criticized]. You can see information operations in the US elections, in the midterms as well as in the 2016 elections. In Myanmar, there were information operations being promoted by the military junta that were implicated in genocide, according to the United Nations.

Then there are information operations that are less broad-scale and whole-society, and are instead often targeted at specific issues. You could see this in the Azerbaijan–Armenia conflict over Nagorno-Karabakh, in terms of trying to shape the narrative. You can see it with Maria Ressa in the Philippines, the Nobel Prize–winning journalist: the efforts to eradicate her independent outlet, to reframe the drug war, which independent observers say has led to thousands of deaths, and the trumped-up charges against her. By controlling information, [the state] has made it more difficult for the public to have a rational debate. The government has tamped down alternative perspectives, as with the order to shut down Rappler, and at the same time created armies of online supporters to produce content and engagement that reframe the public sphere and sway opinion.

 

JB: Can you unpack how a social media algorithm might be gamed?

CR: We see a lot of cross-platform manipulation. One way is to create signals that algorithms will pick up as indicating organic popularity or engagement, because that will lead recommendation algorithms to rank that content higher. Depending on the platform, that can be done by many people linking to it or sharing it, sometimes coordinated in closed groups. We saw this in Malta, for example, where an investigation by The Shift showed that in advance of Daphne Caruana Galizia’s murder there were secret Facebook groups that would coordinate the day’s messaging targeting her, seeking to undermine her credibility as a journalist. The platforms are now aware of cross-platform coordination, but there’s not yet a way to really track or identify it algorithmically, and it’s especially tough when that coordination is being done in private or encrypted spaces.
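
To make those mechanics concrete, here is a minimal sketch in Python. It is a toy illustration, not any platform’s actual ranking code: the scoring weights, the post fields, and the “unique sharers” signal are all assumptions. It shows how a ranker that rewards raw engagement counts can be gamed by a small coordinated group, and how discounting by sharer diversity, one hypothetical countermeasure, changes the ordering.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    shares: int
    comments: int
    unique_sharers: int  # hypothetical signal: distinct accounts doing the sharing

def engagement_score(post: Post) -> float:
    # Toy ranking rule: raw counts are rewarded no matter who produced them,
    # so a coordinated group can manufacture its way up the feed.
    return 1.0 * post.shares + 2.0 * post.comments

def discounted_score(post: Post) -> float:
    # One hypothetical countermeasure: weight engagement by how many distinct
    # accounts it came from, so engagement manufactured by 20 coordinated
    # accounts counts for less than engagement from 115 distinct people.
    diversity = post.unique_sharers / max(post.shares, 1)
    return engagement_score(post) * diversity

organic = Post("Independent investigation", shares=120, comments=40, unique_sharers=115)
coordinated = Post("Coordinated smear", shares=500, comments=200, unique_sharers=20)

print(engagement_score(organic), engagement_score(coordinated))  # 200.0 vs. 900.0
print(discounted_score(organic), discounted_score(coordinated))  # ~191.7 vs. 36.0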

  

JB: You’ve touched on how reporters have been the target of (sometimes state-aligned) coordinated harassment. Maybe you could elaborate on what tactics have been used to delegitimize journalists seeking to hold power to account.

CR: Yeah, there’s a range of tactics being used: information operations and coordinated disinformation that seek to target and delegitimize journalists, and online harassment, which is a general catchall term for a wide range of threats and attacks on journalists, including sharing their personal information, delegitimizing them as journalists, equating them with spies, and other smears that impinge on credibility, which is of course the lifeblood of journalism. Sometimes manipulated images, audio, or text are used, what we often call deepfakes, but honestly, a lot of the time they’re pretty shallow fakes, because it’s not necessarily about tricking people; it’s about getting people to share and meme something out.

 

JB: And obviously there’s a huge element of people from marginalized groups, such as women, people of color, and nonbinary people, being targeted more frequently, and of attempts to block them out of public life. Can you say more about how that’s been weaponized?

CR: Yeah, so marginalized communities in different contexts, whether it’s Muslims in India, secularists in Bangladesh, or women, are often targeted by these campaigns. In independent media, the people who are trying to speak truth to power and do independent reporting are often women or come from marginalized groups, and they are then targeted by information operations. These typically include online harassment using a variety of tactics, for example smearing and constant denigration that draws on nationalistic tropes or cultural stereotypes, such as a woman’s “appropriate” role in society. That has several functions. One, it delegitimizes them in the public sphere. Two, it refocuses the chatter, talk, and engagement on these negative interactions and diverts attention away from the original investigation or reporting. And three, by distorting the public conversation, it makes it harder to hold governments accountable.

 

JB: You’ve written about how this rise in state-aligned information operations has coincided with a decline in independent media over the past couple of decades. Can you elaborate on why that is particularly worrying?

CR: The fundamentals of our information ecosystem are made up of data, and content is a really important part of the basis for these AI systems. If you have a decline in independent journalism, you have a decline in independent information unaffiliated with a political party or viewpoint. Meanwhile, you have a rise in the production of propaganda and weaponized information. That creates an imbalance in the very content that is used to train and fuel the machine learning underlying these systems. And that’s on top of the more obvious dangers of a decline in reporting: less beat reporting, less investigative reporting, fewer members of the public who have ever met a journalist, and all the negative repercussions for accountability and an independent check on power.
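
As a back-of-the-envelope illustration of that imbalance, the short Python sketch below (entirely made-up documents and counts, not a real training pipeline) shows how the most common framing of a topic in a corpus, which is what a naive frequency-based model would pick up as typical, flips once coordinated content outweighs independent reporting.

from collections import Counter

# Stand-ins for documents in a training corpus (hypothetical counts).
independent_reporting = ["framing A"] * 40   # independent articles
coordinated_content = ["framing B"] * 10     # information-operation content

def dominant_framing(corpus):
    # The framing a naive frequency-based model would learn as most typical.
    return Counter(corpus).most_common(1)[0][0]

print(dominant_framing(independent_reporting + coordinated_content))       # "framing A"
print(dominant_framing(independent_reporting + coordinated_content * 20))  # "framing B"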

 

JB: One tactic you’ve looked at, and which the Tow Center also studies, is “news washing”: spreading particular messages disguised as neutral news reporting. Maybe you could talk a bit more about that method.

CR: Essentially, it is creating information that looks like news but is more likely affiliated with a particular political viewpoint or party. News washing is increasingly prevalent, in part because there are protections for news on some platforms and in legislation, and it’s a real challenge. You’ll see a progression from the obscure corners of the internet into obscure outlets that identify as news, and then that content gets picked up by partisan journalists, news outlets, or others in the public sphere. By making it look like information came originally from a news site, it gets a veneer of legitimacy that enables other news sites to cover it and politicians to refer to it, when in fact it may have been planted in the first place.

 

JB: Turning to social media companies, do you think Facebook and others understand the extent to which their platforms are open to manipulation?

CR: Oh yes, there’s no question in anyone’s mind about how open to manipulation these platforms are, especially in languages other than English. [Facebook whistleblower] Frances Haugen revealed that 87 percent of Facebook’s global budget for time spent on classifying misinformation goes to the US. That’s really problematic. There are over seven thousand languages in the world. In Ethiopia, a huge country, most languages are not supported by the platforms, yet they are critical parts of the public sphere. The same is true in India. That’s one of the biggest issues, and the companies know there is a huge problem. But the resources to address it can’t just come after the fact; they need to come at a more fundamental level. For example, what type of data are they doing machine learning or training on? The corpus of data and languages used to train models affects how effective those models are algorithmically.

 

JB: So if the platforms understand the scale of the problem, what could they do to get a grip on it?

CR: Well, first, there are many different types of platforms. Facebook, Google, the content-focused social media platforms, do have content moderation resources. However, smaller platforms, whether potential competitors or others like domain name registrars and ISPs, typically do not.

Second, it can’t just be about moderation. Paid advertising and amplification should be much more open to analysis, because we know Trump’s campaign tested tens of thousands of different versions of a Facebook ad, and you have these so-called “black PR firms,” or “moderation mercenaries,” who are out to manipulate platforms to promote their information. There needs to be much more transparency.

Separately, the wealthy platforms, which dominate the content-based public sphere, need to devote more resources to basic research and to creating the corpus of data needed to function in multiple languages. Similarly, platforms need to do more, particularly in non-Western contexts, to make sure there are reliable signals of trustworthy information. So it’s not just about getting rid of the bad information; it’s also about how you then elevate better information.

 

JB: Finally, as we head toward the midterms, what are you expecting?

CR: I’m expecting that it’s going to be another example of the social media public sphere becoming a place of disinformation, hate, and fear. I don’t think that much has really been done. And the content moderation debate in the United States has become so politicized that it’s really difficult for the platforms to act, in part because of the broader political dynamic: you have this inaccurate perception that Republican or right-wing content is more likely to be actioned than left-wing content. That’s not true (some of the most popular sites are right-wing), but that perception continues, and the platforms have not successfully debunked it.

About the Tow Center

The Tow Center for Digital Journalism at Columbia's Graduate School of Journalism, a partner of CJR, is a research center exploring the ways in which technology is changing journalism, its practice and its consumption — as we seek new ways to judge the reliability, standards, and credibility of information online.
