In the weeks since October 7, when Hamas attacked Israeli civilians, and during the bombardment and invasion of Gaza that followed, people across social media have complained about posts in support of Palestinians being restricted or removed. There have been some high-profile examples: Facebook took down the English and Arabic pages of Quds News Network, known for sharing graphic crowdsourced videos. Press outlets have also reported on individual accounts sharing relatively innocuous material (a Palestinian-flag emoji, for instance) getting dinged as "potentially offensive." Al Jazeera, the Wall Street Journal, The Guardian, and The Intercept all found that posts and accounts have been taken down or seen their reach limited. Whether that amounts to a coordinated attempt at silencing has been difficult to prove.
Nadim Nashif, the director of 7amleh (pronounced hamleh, as in Arabic for "campaign"), a nonprofit that promotes Palestinian digital rights, has been following excessive moderation for years. Nashif, who is fifty, monitors social networks from his office in Haifa, a mixed Palestinian and Jewish port city on Israel's northern coast. In 2020, 7amleh published a report, "Systematic Efforts to Silence Palestinian Content on Social Media." The team began documenting examples in 2021, when protests broke out over evictions in the Palestinian Sheikh Jarrah neighborhood of East Jerusalem, and Israel launched deadly air strikes on Gaza; 7amleh created a form through which people could submit evidence that their posts had been inappropriately restricted. "When you do such a thing," Nashif said, "it means that you're working in the favor of the powerful side and less for the weak side."
7amleh's findings (five hundred submissions in the span of about two weeks) prompted an international response. Dozens of organizations signed a letter to Meta (then Facebook) calling for greater transparency into its content moderation, including the influence of Israeli officials. Meta commissioned an independent audit. The results, published last September, praised the company for setting up a special team to monitor the situation and for focusing on content that might lead to "imminent offline harm." But the report also concluded that the company's efforts bore "an adverse impact" on the civil rights of Palestinians and advocates reliant on Facebook and Instagram to document and share information. There were examples of both Arabic and Hebrew posts being removed without violating any rules, but it was clear: Arabic posts were being moderated disproportionately.
As Meta has faced continued scrutiny over its role in spurring conflict (spreading election-related disinformation in the United States, platforming communication in the lead-up to the insurrection, fueling genocide against the Rohingya), the company has made changes to how it monitors content. Over time, Meta has refined algorithms to better detect the incitement of violence; recently, the company added Hebrew "classifiers," automated models that determine whether a piece of content violates policy, to improve its ability to flag anti-Arab hate speech. "They're doing something," Nashif said. "But it's not very effective."
7amleh has found that, during periods of heightened unrest, claims of unjustified content moderation rise. Since October 7, the team has received more than fourteen hundred reports. (By comparison, 7amleh received eleven hundred claims in all of 2022.) Recent submissions include screenshots of blocked comments, hidden hashtags, and Palestine-related Instagram stories that received markedly fewer views than other posts by the same person. 7amleh will gather the data, then help people restore their content, often by making direct appeals to social media companies. "We're trying to tell Meta that if there's a situation where some people are being oppressed, social media is supposed to be their voice," Nashif said. "Especially when mainstream media is not giving enough of their side of the story."
Explanations for unfair moderation vary. According to last year's audit, human error was partly at fault: a third-party moderator added #AlAqsa, a hashtag referring to a Muslim holy site in Jerusalem, to a block list, preventing people from searching it; the moderator had confused it with "Al Aqsa Brigade," categorized by the US State Department as a terrorist group. Meta's moderation system was also trained using a list of terms associated with legally designated terrorist organizations, a category that the audit noted has "a disproportionate focus on individuals and organizations that have identified as Muslim," which makes Meta's rules "more likely to impact Palestinian and Arabic-speaking users."
The report did not comment on the role of Israeli government officials in Meta's content review process. Yet communication between the company's moderation team and Israel's cyber unit has been well documented. According to a report from the Israeli state attorney's office, the unit submitted more than fourteen thousand takedown requests to Google and Facebook in 2018, about 90 percent of which were honored. "There's a lot of politics about what is being enforced, how it's being enforced, and whether it's being enforced," Nashif said. "Meta claims that they have standards, that they have rules that are enforced equally for everybody. And then you check that claim and you find that it's not real."