
Tracking lost pro-Palestinian posts

November 27, 2023


In the weeks since October 7, when Hamas attacked Israeli civilians, and during the bombardment and invasion of Gaza that followed, people across social media have complained about posts in support of Palestinians being restricted or removed. There have been some high-profile examples: Facebook took down the English and Arabic pages of Quds News Network, known for sharing graphic crowdsourced videos. Press outlets have also reported on individual accounts being dinged as “potentially offensive” for sharing relatively innocuous material, such as a Palestinian-flag emoji. Al Jazeera, the Wall Street Journal, The Guardian, and The Intercept have all found that posts and accounts were taken down or had their reach limited. Whether that amounts to a coordinated attempt at silencing has been difficult to prove.

Nadim Nashif—the director of 7amleh (pronounced hamleh, as in Arabic for “campaign”), a nonprofit that promotes Palestinian digital rights—has been following excessive moderation for years. Nashif, who is fifty, monitors social networks from his office in Haifa, a mixed Palestinian and Jewish port city on Israel’s northern coast. In 2020, 7amleh published a report, “Systematic Efforts to Silence Palestinian Content on Social Media.” The team began documenting examples in 2021, when protests broke out over evictions in the Palestinian Sheikh Jarrah neighborhood of East Jerusalem, and Israel launched deadly air strikes on Gaza; 7amleh created a form through which people could submit evidence that their posts had been inappropriately restricted. “When you do such a thing,” Nashif said, “it means that you’re working in the favor of the powerful side and less for the weak side.” 

7amleh’s findings—five hundred submissions in the span of about two weeks—prompted an international response. Dozens of organizations signed a letter to Meta (then Facebook) calling for greater transparency into its content moderation, including the influence of Israeli officials. Meta commissioned an independent audit. The results, published last September, praised the company for setting up a special team to monitor the situation and for focusing on content that might lead to “imminent offline harm.” But the report also concluded that the company’s efforts had “an adverse impact” on the civil rights of Palestinians and advocates reliant on Facebook and Instagram to document and share information. There were examples of both Arabic and Hebrew posts being removed without violating any rules, but the pattern was clear: Arabic posts were being moderated disproportionately.

As Meta has faced continued scrutiny over its role in spurring conflict—spreading election-related disinformation in the United States, hosting communication in the lead-up to the January 6 insurrection, fueling genocide against the Rohingya—the company has made changes to how it monitors content. Over time, Meta has refined algorithms to better detect incitement to violence; recently, the company added Hebrew “classifiers”—machine-learning tools that determine whether a piece of content violates policy—to improve its ability to flag anti-Arab hate speech. “They’re doing something,” Nashif said. “But it’s not very effective.”

7amleh has found that, during periods of heightened unrest, claims of unjustified content moderation rise. Since October 7, the team has received more than fourteen hundred reports. (By comparison, 7amleh received eleven hundred claims in all of 2022.) Recent submissions include screenshots of blocked comments, hidden hashtags, and Palestine-related Instagram stories that received markedly fewer views than other posts by the same person. 7amleh will gather the data, then help people restore their content, often by making direct appeals to social media companies. “We’re trying to tell Meta that if there’s a situation where some people are being oppressed, social media is supposed to be their voice,” Nashif said. “Especially when mainstream media is not giving enough of their side of the story.”

Explanations for unfair moderation vary. According to last year’s audit, human error was partly at fault: a third-party moderator added #AlAqsa, a hashtag referring to a Muslim holy site in Jerusalem, to a block list, preventing people from searching it; the moderator had confused it with the “Al Aqsa Brigade,” which the US State Department categorizes as a terrorist group. Meta’s moderation system was also trained using a list of terms associated with legally designated terrorist organizations—a category that the audit noted has “a disproportionate focus on individuals and organizations that have identified as Muslim,” meaning Meta’s rules are “more likely to impact Palestinian and Arabic-speaking users.”


The report did not comment on the role of Israeli government officials in Meta’s content review process. Yet communication between the company’s moderation team and Israel’s cyber unit has been well documented. According to a report from the Israeli state attorney’s office, the unit submitted more than fourteen thousand takedown requests to Google and Facebook in 2018, about 90 percent of which were honored. “There’s a lot of politics about what is being enforced, how it’s being enforced, and whether it’s being enforced,” Nashif said. “Meta claims that they have standards, that they have rules that are enforced equally for everybody. And then you check that claim and you find that it’s not real.”


Yona TR Golding was a CJR fellow.