The Media Today

The scientific process, and how to handle misinformation

January 26, 2022
 

After Donald Trump was elected, in 2016, misinformation—and its more toxic cousin, disinformation—began to feel like an increasingly urgent social and political emergency. Concerns about Russian trolls meddling in American elections were soon joined by hoaxes and conspiracy theories involving covid-19. Even those who could agree on how to define mis- and disinformation, however, debated what to do about the information itself: Should Facebook and Twitter remove “fake news” and disinformation, especially about something as critical as a pandemic? Should they “deplatform” repeated disinfo spreaders such as Trump and his ilk, so as not to infect others with their dangerous delusions? Should federal regulations require the platforms to take such steps?

After coming under pressure, both from the general public and from President Biden and members of Congress, Facebook and Twitter—and, to a lesser extent, YouTube—started actively removing such content. They began by banning the accounts of people such as Trump and Alex Jones, and later started blocking or “down-ranking” covid-related misinformation that appeared to be deliberately harmful.

Is this the best way to handle the problem of misinformation? Some argue that it is, and that “deplatforming” people like Trump—or even blocking entire platforms, such as the right-wing Twitter clone Parler—works, in the sense that it quiets serial disinformers and removes misleading material. But not everyone agrees.

The Royal Society, a scientific organization based in the United Kingdom, recently released a report on the online information environment in which it states that “censoring or removing inaccurate, misleading and false content, whether it’s shared unwittingly or deliberately, is not a silver bullet and may undermine the scientific process and public trust.” Frank Kelly, a professor at the University of Cambridge who chaired the report, wrote that uncertainty is inherent to science, especially when scientists are grappling with an unprecedented medical crisis like the pandemic. “In the early days of the pandemic, science was too often painted as absolute and somehow not to be trusted when it corrects itself,” Kelly wrote, “but that prodding and testing of received wisdom is integral to the advancement of science, and society.”

Listen: Russia, Ukraine, and the front lines of information warfare

Early last year, Facebook and other social platforms said they would remove any content that suggested the virus that causes covid-19 came from a laboratory, since this was judged to be harmful misinformation. Later, however, a number of reputable scientists said the lab-origin possibility couldn’t be ruled out, and Facebook and other platforms were forced to reverse their initial policies. Blocking or removing content that is outside the scientific consensus may seem like a wise strategy, but it can “hamper the scientific process and force genuinely malicious content underground,” Kelly wrote, in a blog post published in conjunction with the Royal Society report.

The report notes that, while misinformation is commonplace, “the extent of its impact is questionable.” After surveying the British public, the Royal Society concludes that “the vast majority of respondents believe the covid-19 vaccines are safe, that human activity is responsible for climate change, and that 5G technology is not harmful.” In addition, the report states that the existence of echo chambers “is less widespread than may be commonly assumed, and there is little evidence to support the filter bubble hypothesis (where algorithms cause people to only encounter information that reinforces their own beliefs).”

What should platforms like Facebook do instead of removing misinformation? The report suggests that a more effective approach is to allow it to remain on social platforms with “mitigations to manage its impact,” including demonetizing the content (by disabling ads, for instance) or reducing distribution by preventing misleading content from being recommended by algorithms. The report also suggests that adding fact-checking labels could be helpful, something that both Facebook and Twitter have implemented, although there is still some debate in research circles about whether fact-checks can actually stop people from believing misinformation they find on social media.

Here’s more on misinformation:

  • Section 230: In November, the Aspen Institute’s Commission on Information Disorder—which included Katie Couric, a former news anchor, and Prince Harry, the Duke of Sussex—released its final report, which contained fifteen recommendations to stamp out misinformation. Those recommendations include financial support for local journalism and other sources of accurate information, such as libraries. The commission also recommended changes to Section 230, a clause in the Communications Decency Act that protects platforms such as Facebook and YouTube from legal liability for the content they host. The report recommended that the clause be amended to “withdraw platform immunity for content that is promoted through paid advertising and post promotion.”
  • Types: The Royal Society report defines several groups that tend to share misinformation, with different motives. “Good Samaritans” often unknowingly produce and share misinformation because they believe it to be true; “Profiteers” distribute it because they generate revenue from the content somehow; “Coordinated Influence Operators” are trying to sway public opinion for a political purpose; and “Attention Hackers” are engaged in what is often called “trolling”—they share misinformation because they enjoy creating chaos and/or attracting attention.
  • Research: One of the report’s recommendations is that social media platforms should “establish ways to allow independent researchers access to data in a privacy compliant and secure manner.” As the Royal Society notes, platforms such as Facebook and Twitter have promised to provide data to researchers, but have dragged their feet in doing so, and in Facebook’s case have disabled the accounts of scientists doing research on the platform. I recently spoke with Nate Persily, a social scientist and former cochair of Social Science One, a research partnership with Facebook, which he quit after it failed to produce much usable data. He has helped draft legislation that would force the platforms to provide data and to allow researchers access to their services.

Other notable stories:

  • Natalie Mayflower Sours Edwards, a government official who blew the whistle on money laundering at a number of Western banks, was released from federal prison on Monday night, after spending several months there for disclosing government documents to a journalist, according to BuzzFeed News. Edwards, who will be on probation for three years, was a key source for the FinCEN Files, an investigative series produced by more than a hundred news organizations working in partnership to uncover financial wrongdoing.
  • During a Monday briefing with President Joe Biden, Peter Doocy, a White House correspondent for Fox News, asked a question about inflation that Biden responded to sarcastically, before muttering “stupid son of a bitch” into the microphone. Doocy said the president later called him to apologize for his language, telling him it was “nothing personal,” the New York Times reported. “I made sure to tell him that I’m always going to try to ask something different than what everybody else is asking,” Doocy later told Fox News’s Sean Hannity about his call with Biden, “and he said, ‘You’ve got to.’ ”
  • Politico reports that Grid, a news startup that just launched earlier this month, has “ties to a global consulting firm best known for its crisis communications management and lobbying work on behalf of foreign governments, most notably the United Arab Emirates.” The report says that a member of Grid’s board, former CNN journalist John Defterios, is a senior adviser at APCO, a communications firm that has done work for the UAE, and also represents International Media Investments, a UAE-based investment fund that invested in Grid’s initial funding round. Laura McGann, cofounder of Grid, told Politico that APCO played no role in launching Grid, and that it is “an independent news organization, free from the influence of investors, sponsors and advertisers.” 
  • Ahead of this year’s Winter Olympics, to be held in Beijing, Republican leaders on the House Energy and Commerce Committee have sent a letter to NBCUniversal expressing their concern about “the extent of influence the Chinese Communist Party may have over NBCUniversal’s coverage of the games,” according to a report from Axios. “The letter, addressed to NBCUniversal CEO Jeff Shell and NBC Olympics President Gary Zenkel, asks NBC how it plans to use its ‘investment in the Games to shed light on China’s history of human rights abuses.’ ” Axios said the committee members also want to know whether, as part of NBC’s rights to broadcast the Games, the network is “in any way precluded by the IOC or CCP from coverage that would be critical.”
  • Substack plans to launch a built-in video player in an effort to draw new creators to its newsletter-publishing platform, a spokesperson confirmed to Axios. “The new native video embed will allow Substack creators to upload or record a video onto a Substack post directly,” Axios reported. “In the past, creators had to embed videos from other sites like YouTube in their newsletters or blog posts. Writers can choose to make videos available to everyone or only paid subscribers.… The videos will be playable within Substack posts online. If a creator wants to include a video in an email, they can embed it as a clickable image.”
  • The New York Times profiles Fabrizio Romano, an Italian journalist who has become the go-to source for news and information about player transfers in the international soccer market: “A transfer has not happened until it bears Romano’s imprimatur.” The Times writes that Romano’s power is now so great that he has “made the leap from being merely a reporter covering soccer’s transfer market to something closer to a force within it. And in doing so, he has blurred the line between journalist and influencer, observer and participant.”
  • And Vice reports that Fight Club, the popular 1990s film starring Brad Pitt and Edward Norton, has a very different ending in the version currently streaming on a video service in China. The original movie, based on a book by Chuck Palahniuk, ends with the protagonist watching as a number of buildings implode, signifying that his attack on consumer society has begun. In the Chinese streaming version, however, the movie ends before the buildings are destroyed, and a message on screen says that “the police rapidly figured out the whole plan and arrested all criminals,” and “after the trial, Tyler was sent to a lunatic asylum.”

ICYMI: Omicron, false dichotomies, and the ‘new normal’

 

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.