“A determining role”? Myanmar refugees sue Facebook

Three years ago, a United Nations fact-finding mission released a report on the causes and consequences of extensive human-rights violations against the Rohingya community in Myanmar. The report recommended investigating and prosecuting military leaders for a range of violations, including genocide; hundreds of thousands of people had been displaced, and more than ten thousand tortured and killed. Among the contributing factors identified by the UN mission were hate speech and propaganda, which spread via Facebook pages and accounts, including some that were maintained by members of Myanmar’s government and its military, known as the Tatmadaw. Although Facebook eventually took action to ban some of the most egregious examples, UN observers said these accounts and pages helped to foment violence and hatred against the Rohingya for months, if not years.

At the time, Facebook admitted that its own report on the violence in Myanmar—prepared by an independent nonprofit called Business for Social Responsibility, and released at the same time as the UN report—found “we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence.” The company apologized for not doing more to prevent violence from being fueled by its platform, and promised to expand its policies and add more moderation resources in Myanmar.

This week, in a multi-country effort, lawyers representing members of the Rohingya refugee community filed legal claims against Facebook, demanding $150 billion in compensation for the harms their people suffered as a result of the company’s inaction. In the UK, the BBC reported that a British law firm representing some of the refugees has notified Facebook that it plans to file a suit in Britain’s High Court in the new year. In a letter, the law firm argues that Facebook’s algorithms “amplified hate speech against the Rohingya people,” and that the company failed to hire moderators familiar with the political and cultural situations in Myanmar. The letter also claims that Facebook failed to “take appropriate and timely action” to remove posts or ban accounts that incited violence against the Rohingya. 

In the US, a law firm representing Rohingya refugees filed a complaint and an application for class-action status in California, alleging that Facebook overlooked the hate speech on its platform and was “willing to trade the lives of the Rohingya people for better market penetration” in Myanmar. The claim was filed in San Mateo County, where Facebook is based, on behalf of a Rohingya refugee living in Illinois, as well as an estimated 10,000 Rohingya refugees who have settled in the US since the genocide in Myanmar began in 2012. The lawsuit asks for at least $150 billion in damages for “wrongful death, personal injury, pain and suffering, emotional distress, and loss of property.” The action says Facebook’s product is defective—a criticism that may serve as an attempt to get around Section 230 of the Communications Decency Act, which protects Facebook and other social platforms from legal liability for the content they host. The lawsuit also argues that the laws of Myanmar should apply rather than US laws—another possible counter to Section 230 protections, though it’s a gambit that at least one legal expert told Reuters was unlikely to succeed.

The US complaint relies in part on an affidavit filed with the US Securities and Exchange Commission by a former Facebook staffer, who said “Facebook executives were fully aware that posts ordering hits by the Myanmar government on the minority Muslim Rohingya were spreading wildly on Facebook,” and that the issue of hate speech directed at members of the Rohingya community “was well known inside the company for years.” The complaint says that the Myanmar military regime “employed hundreds of people, some posing as celebrities, to operate fake Facebook accounts and to generate hateful and dehumanizing content about the Rohingya,” and quotes the UN report from 2018 as saying that Facebook’s central role in the daily lives and information consumption of Myanmar citizens meant it “played a determining role in the genocide.”

Facebook was warned by many organizations working within the country that hate speech was circulating on the platform but did nothing, the complaint states. “Despite having been repeatedly alerted between 2013 and 2017 to the vast quantities of anti-Rohingya hate speech and misinformation on its system… Facebook barely reacted,” it says. The lawsuit also quotes from an internal memo written by Andrew Bosworth, a senior executive at Facebook, in which he said, “The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good,” even if “someone dies in a terrorist attack coordinated on our tools.”

Here’s more on Facebook and genocide:

  • Monopoly: In its 2018 report, the UN said “the relative unfamiliarity of the population with the Internet and with digital platforms and the easier and cheaper access to Facebook have led to a situation in Myanmar where Facebook is the Internet.” The company had become the primary mode of communication among the public, and also the primary source of news from Myanmar authorities and media, it said. “For many people, Facebook is the main, if not only, platform for online news and for using the Internet more broadly. In a context of low digital and social media literacy, the Government’s use of Facebook for official announcements and sharing of information further contributes to users’ perception of Facebook as a reliable source.”
  • Appalled: In a statement sent to Ars Technica, a Facebook spokesman said that the company was “appalled by the crimes committed against the Rohingya people in Myanmar,” and that it has built a dedicated team of Burmese speakers, banned accounts created by the country’s armed forces, and “taken action on harmful misinformation to help keep people safe.” Facebook said it has also invested in Burmese-language technology to help it moderate content. On Wednesday, the company announced that it has banned all Myanmar military-controlled businesses from having a presence on its platforms, something the UN fact-finding mission recommended in 2018.
  • Promises: During a 2018 Senate hearing, Mark Zuckerberg, Facebook’s CEO, promised that the company was doing as much as it could to stem hate speech in Myanmar, including adding more local moderators. A Reuters investigation, however, found that several months after this statement, thousands of comments, images, and other posts calling for violence against the Rohingya were still being published.

Other notable stories:

  • On Tuesday, Reporters Without Borders published “The Great Leap Backwards,” a major report on the loss of press freedom in China. The 82-page document reveals an “unprecedented campaign of repression led by the Chinese regime in recent years against journalism and the right to information worldwide,” the organization writes. The report examines the regime’s tools of repression against journalists and the deterioration of press freedom in Hong Kong, “which was once a model of press freedom but now has an increasing number of journalists arrested in the name of national security.” 
  • At the opening of the virtual Summit for Democracy, Secretary of State Antony Blinken announced that the US will provide new funding to protect reporters targeted because of their work and to support independent international journalism, according to a report from CNN. Blinken said the new funds will support “reporters and news organizations that are targeted with litigation as a result of their reporting”; Blinken mentioned Dayanna Monroy, an investigative journalist in Ecuador who was targeted for her reporting on former Ecuadorian president Abdalá Bucaram Ortiz, as an example. “The fund we’re launching will support journalists like Dayanna as they defend themselves against such baseless legal efforts,” said Blinken.
  • Bloomberg reports that Twitter has a secret program aimed at high-profile users known as Project Guardian, which is designed to give them more protection from trolls and abuse. The program “includes a list of thousands of accounts most likely to be attacked or harassed on the platform, including politicians, journalists, musicians and professional athletes,” Bloomberg says. “When someone flags abusive posts or messages related to those users, the reports are prioritized by Twitter’s content moderation systems, meaning the company reviews them faster than other reports in the queue.”
  • Roy Schwartz, president and co-founder of Axios, told Digiday that the company will have $86 million in revenue this year, representing forty percent growth over the prior year, and has been profitable for three years in a row. Despite rumors over the past year that the company would merge with The Athletic or be acquired by Axel Springer, the German media giant, Schwartz said that Axios isn’t interested in mergers or acquisitions. “It’s too early at this point to sell the business or to merge it with something that would be larger than we are,” he said.
  • Alex Paterson, a reporter with Media Matters, listened to 350 hours of The Joe Rogan Experience podcast. “Throughout his first full year streaming exclusively on Spotify, host Joe Rogan has repeatedly used his podcast to broadcast conspiracy theories, COVID-19 misinformation, and anti-trans rhetoric to millions of listeners across the globe,” wrote Paterson. In an interview with The Verge, Paterson said that Rogan has become “more emboldened to push baseless conspiracy theories and right-wing lies.”
  • Anya Schiffrin writes for CJR that European regulation of online disinformation may be a “game changer” in 2022. “The US has lagged in regulating Big Tech, failing to require more transparency in political advertising or seriously modify Section 230 in order to make the platforms liable for the harmful content they disseminate,” Schiffrin writes. “The Europeans are trying something different. The French compare the DSA provisions to banking regulations—checking for systemic risk generally and then spot checks to see whether adequate prevention measures are in place.”
  • Report for America announced almost 70 new newsrooms as partners for its nonprofit journalism program, bringing the total number of partner newsrooms to 325. The project said it has also opened up applications for 150 new reporting positions it hopes to add next year. The newly announced newsrooms include radio stations in Colorado, Arkansas, Connecticut, and Alaska, as well as the Bay City News Foundation, the Sacramento Observer, and Washington City Paper.

ICYMI: BuzzFeed goes public with confetti, a quiz, and a staff walkout

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.

TOP IMAGE: In this photograph taken Aug. 24, 2018, Rohingya refugees wait at a U.N. World Food Programme (WFP) facility to receive food supplements for their children in Kutupalong refugee camp, Bangladesh. When nearly three quarters of a million Rohingya Muslims began fleeing violence in Myanmar to neighboring Bangladesh last year, they were met with thousands of aid and medical workers from all around the world, offering everything from bread to cholera vaccinations. For many of the newly arrived refugees, the camps provided a rare encounter with doctors and modern medicine. (AP Photo/Altaf Qadri)