The Media Today

The fog of war

October 19, 2023
Palestinians carry belongings as they leave al-Ahli hospital, which they were using as a shelter, in Gaza City, Wednesday, Oct. 18, 2023. (AP Photo/Abed Khaled)

On Tuesday, a blast hit the Al Ahli Hospital in Gaza, apparently killing hundreds of people, including patients and other civilians who had been using the site as a shelter from Israeli missile attacks. Within minutes of the first news report on the story, accusations were flying on social media: some blamed Israel, in certain cases claiming to have video evidence to prove it; Israel said that the blast was the result of a failed missile launch by Palestinian Islamic Jihad, a group allied with Hamas. Amid a firehose of outrage and takes, journalists worked to verify—in some cases publicly and in real time—what had actually happened, wading through testimony and images from sources of varying reliability who said wildly different things at different times.

An official Israeli account on X tweeted a video purporting to bolster its claims that Islamic Jihad was responsible, but took it down after users pointed out that its time stamp didn’t match the apparent time of the blast. Later, Israel said that its intelligence services had intercepted a conversation between two Hamas operatives referring to a failed Islamic Jihad strike, and released what it claimed was audio of the discussion. Yesterday morning, Shashank Joshi, defense editor at The Economist, said that the evidence he had seen so far was more consistent with the failed missile launch hypothesis than with an Israeli strike, but cautioned that this was “NOT conclusive by any means.” (A user accused Joshi of relying on evidence provided by the Israel Defense Forces; Joshi replied that “the relevant image being analyzed, published this morning,” was actually posted by an account “thought to be associated with Hamas.”) Other analysts reached a similar conclusion, as did the US government, the White House said. But other observers remained skeptical, pointing out, for example, that the IDF has wrongly blamed Islamic Jihad in the past. At the time of writing, the online debate raged on.

Since Hamas attacked Israel on October 7, a string of incidents has challenged journalists and other professional fact-checkers; the blast at the hospital was the latest example. A document appearing to show that the Biden administration gave Israel eight billion dollars in funding turned out to have been doctored. Video footage that some said showed a Hamas soldier shooting down an Israeli helicopter was from a video game. A report on mass desertions from the IDF was said to have come from an Israeli TV station—which shut down in 2019. A video of a young boy lying in a pool of blood, surrounded by men in Israeli military fatigues, was offered as evidence of brutality—but in reality was a behind-the-scenes shot from a Palestinian movie.

As my colleague Jon Allsop noted in this newsletter last week, the tsunami of content claiming to be from the conflict has also included genuine social media posts from the combatants themselves. Distinguishing the real from the doctored has not been easy. Hamas itself uploaded a number of video clips of the initial wave of attacks, many of which, CNN reported, appeared to have been “heavily edited.” Much of this content was initially uploaded to the messaging service Telegram, the one major social network that hasn’t banned Hamas, which is a proscribed terrorist organization in a number of countries, including the US. Often, however, such content has made its way from Telegram to platforms such as Meta and X (the platform formerly known as Twitter), which have then struggled to detect it and either remove it or add context before it goes viral.

As Axios noted recently, many of the major platforms have scaled back their moderation of misinformation and other hateful and violent content over the past year. They are now scrambling to adjust to the unfolding crisis in the Middle East and to the waves of fakes and graphic imagery that have come with it. Meta said that it has developed a “special operations center” staffed with experts, including fluent Hebrew and Arabic speakers; TikTok said that it plans to add more moderators who speak those two languages. YouTube told Axios that it has removed “tens of thousands of harmful videos and terminated hundreds of channels” since the conflict began. And over at X—whose gutting of its content-moderation staff has been much discussed since Elon Musk acquired the platform last year—Linda Yaccarino, the CEO, sent leaders of the European Union a letter detailing the firm’s efforts to tackle war-related disinformation after EU policymakers opened an investigation into its hosting and distribution of such content. (This was one of the bloc’s first enforcement actions under its newly passed Digital Services Act, which I wrote about recently in this newsletter.)

Although all of the platforms have failed to some extent in their attempts to remove misinformation about the conflict, various experts have said that X has been among the worst, if not the worst, for misinformation and disinformation. In the aftermath of the initial Hamas attack, Shayan Sardarizadeh, a journalist with the BBC’s Verify service, said in a post on X that he has been fact-checking on the network for years and that there’s always been plenty of misinformation during major events, but that the “deluge of false posts” since the war broke out—many of them boosted by users with blue check marks, which were once handed out to verify the identities of public figures (including many journalists) but have become a paid-for premium feature under Musk—was unlike anything he had seen before.


In the days that followed, Yael Eisenstat, a former senior policy official at Facebook and current vice president of the Anti-Defamation League (which Musk has accused of driving advertisers away from X), told the Washington Post that while it was hard to find anti-Semitic statements or outright calls for violence on YouTube and even Meta, it was “totally easy” to find the same on X. Mike Rothschild, a researcher focused on conspiracy theories and social media, told Bloomberg that the attack was “the first real test of Elon Musk’s version of Twitter, and it failed spectacularly,” adding that it’s now almost impossible to tell “what’s a fact, what’s a rumor, what’s a conspiracy theory, and what’s trolling.” Musk’s changes to the service haven’t just made X unhelpful during a time of crisis, Rothschild said, but have “made it actively worse.”

Justin Peden, a researcher known as “the Intel Crab,” posted on X that while news outlets with reporters on the ground in Israel and Gaza struggled to reach audiences in the aftermath of the attack, “xenophobic goons are boosted by the platform’s CEO”—a reference to a post, since deleted, in which Musk vouched for the usefulness of two accounts with a history of sharing misinformation and, in some cases, anti-Semitic content. Emerson Brooking, a researcher at the Atlantic Council’s Digital Forensic Research Lab, told Wired that the fact that X now shares advertising revenue with premium users based on engagement incentivizes those users to maximize view counts, irrespective of the truth. And analysts at the Center for Strategic and International Studies noted that X is very different now from what it was when Russia invaded Ukraine last year, before Musk acquired the platform. (In addition to the changes noted above, X has since stopped labeling accounts that are affiliated with Iranian, Russian, and Chinese state media, and removed headlines from all news links.)

X now has a feature called Community Notes that allows approved users to add fact-checking comments to posts on the service—but researchers specializing in misinformation say that the feature has been overwhelmed by the sheer quantity of fakes and hoaxes that need to be moderated. Ben Goggin, a deputy tech editor at NBC News, said last week that he reviewed a hundred and twenty posts on X that shared fake news and found that only 8 percent had community notes appended to them; 26 percent had suggested notes that had yet to be approved, while 66 percent had neither. And a recent investigation by Wired magazine found that Community Notes “appears to be not functioning as designed, may be vulnerable to coordinated manipulation by outside groups, and lacks transparency about how notes are approved.”

Last week, Charlie Warzel wrote for The Atlantic that Musk has turned X into “a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.” He has a point. The platform gained much of its reputation as a source of real-time, on-the-ground news during events such as the Arab Spring in Egypt in the early 2010s. But its performance during the Israel-Hamas conflict so far shows that it has become a funhouse-mirror version of itself: a circus filled with posts that present as accurate and newsworthy, but in reality are the opposite. If misinformation creates a fog of war, X does not seem interested in dispelling it.

Warzel’s article was headlined “This War Shows Just How Broken Social Media Has Become.” Indeed, to this broader point, the entire social media landscape—the global town square, as Warzel calls it—is now a virtual minefield. If conflicts like the current one in the Middle East are lenses through which we understand our information environment, he wrote, “then one must surmise that, at present, our information environment is broken.” One need only have followed the hospital bombing in real time to know this. At the heart of it all, lives continue to be lost.


Other notable stories:

  • For the Times, Katie Robertson traced how headlines about the hospital blast in traditional news outlets (including the Times) shifted significantly over time, highlighting “the difficulties of reporting on a fast-moving war in which few journalists remain on the ground while claims fly freely on social media.” In other news about the conflict, the Philadelphia Inquirer apologized to readers for publishing a syndicated editorial cartoon that it said reinforced “pernicious antisemitic tropes about Israeli aggression.” Dozens of papers owned by the investment firm Alden Global Capital published identical editorials urging the US media to characterize Hamas as a “terrorist” group. And on his way back from a trip to Israel, Biden checked in with the press aboard Air Force One (a rarity for him). He will deliver a prime-time speech from the Oval Office at 8pm Eastern tonight.
  • Yesterday, Russia detained Alsu Kurmasheva—an editor with Radio Free Europe/Radio Liberty, a US state-backed international broadcaster, who is a dual Russian and US citizen—on charges of failing to comply with a law that requires certain individuals and groups, including numerous reporters and news outlets, to register as “foreign agents.” Kurmasheva lives in Prague but traveled to Russia in May following a family emergency; according to RFE/RL, she was subsequently blocked from leaving the country and had her passports confiscated prior to her detention. She is expected to become the first person arrested under the foreign-agents law, and she is already the second US journalist in custody in Russia, joining the Wall Street Journal’s Evan Gershkovich.
  • In other international press-freedom news, Abdifatah Moalim Nur, a prominent cable-TV journalist in Somalia, was killed in a suicide bombing at a restaurant in Mogadishu. In happier news, Mortaza Behboudi, a journalist of French and Afghan nationality, was released from prison in Kabul following nine months behind bars on espionage and other charges; Reporters Without Borders is now working to return Behboudi to Paris. And a prominent figure associated with Polish state TV acknowledged that his employer has pumped out “propaganda” on behalf of the hard-right government, which appears to be on the way out following elections last weekend. (We wrote about Polish TV last week.)
  • Yanqi Xu, a journalist for the Flatwater Free Press in Nebraska, is speaking out after Jim Pillen, the state’s Republican governor, dismissed her reporting on a group of hog farms that he owns by pointing to her nationality. “The author is from Communist China,” Pillen said in a radio interview. “What more do you need to know?” Xu told NBC News that Pillen’s remarks fit a narrative of “othering people of Chinese descent.” Matt Wynn, the director of the group that launched the Free Press, said that he was “infuriated” as an employer, “saddened” as a believer in democracy, and “embarrassed” as a Nebraskan.
  • And Yona Roberts Golding, one of CJR’s new editorial fellows, reports on how the rise of generative artificial intelligence has opened up “a highly contested front in copyright law, including for the news media.” At least “some of the large language models (LLMs) behind generative AI tools…pulled from the copyrighted material of online publishers to train their systems.” What remains unclear “is how the inclusion of this content could affect the news publishing industry—and what individual publishers will do about it.”

ICYMI: Meg Kissinger on investigating her family’s history with mental health and discrimination

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.