The Media Today

YouTube has done too little, too late to fight misinformation

April 4, 2019
 

The metaphor YouTube CEO Susan Wojcicki likes to rely on when she’s describing the Google video-sharing service is a wholesome one: it’s like a library, she says—a custodian of content provided by others, giving users an easy way to find things they might be interested in. There are disturbing and even offensive books in the library too, she points out, but we take for granted that there’s more good than bad. It’s an appealing comparison, and it certainly makes YouTube sound like a benefit to society. But is it accurate? Not really. After all, if you start reading a book about the moon, the library doesn’t automatically pull books about how the moon landing was faked and stack them beside you, urging you to read them. That’s more or less what YouTube’s algorithm does.

As criticism of YouTube as a “radicalization engine” has grown over the past few years, the company has tried hard to show that it is doing something about the supply of misinformation on its platform. In March, it blocked content from Alex Jones’s Infowars account (although the noted conspiracy theorist and health-supplement hawker has found ways around the block) and, before that, the company said it had taken steps to keep hoaxes about the Parkland school shooting (peddled by Jones, among others) from showing up in its recommendations. It has also introduced a feature that sidelines what it calls “borderline content,” meaning videos that are offensive or misleading but not actually illegal, such as the growing number of anti-vaccination videos arguing that vaccines cause disease.

But as a major Bloomberg feature published earlier this week points out, YouTube has known about the problems associated with its recommendation algorithm for some time, and has done little or nothing about them. According to the news service, several senior YouTube employees have left over the past year, citing the company’s lack of action on misinformation.

ICYMI: Flores, Biden, and deciding what’s newsworthy

The sources Bloomberg spoke to, most of whom wanted to remain anonymous, say their concerns were brushed off by Wojcicki and her team time and time again. Any fears about the long-term impact of spreading misinformation were subsumed by the need to push the company’s engagement numbers higher and higher, they said. At some point during the last decade, according to one source, “YouTube prioritized chasing profits” over the safety of its users. “We may have been hemorrhaging money” in the past, this source said. “But at least dogs riding skateboards never killed anyone.”

This is almost exactly what you’ll hear if you talk to former YouTube engineer Guillaume Chaslot. Until he left Google in 2013, he worked on the algorithms that drive recommendations for which videos to watch next, and he told CJR that the number one priority was building engagement. He proposed a variety of tools to counter the effect hoax videos were having, but says management wasn’t interested. Chaslot co-founded the Algo Transparency project, which aims to show users how YouTube’s recommendation algorithms work. He is also an outspoken critic of a phenomenon he and other researchers call “computational propaganda,” which involves not only algorithms spreading such content but also malicious actors gaming those systems to spread their own messages. As Charlie Warzel notes in a piece for The New York Times, the central problem is the “growth at all costs” mentality that still drives much of what the major platforms do or don’t do.

Here’s more on YouTube and misinformation:

  • We’re trying: The New York Times recently spoke with Neal Mohan, YouTube’s chief product officer, who said the company is doing everything it can to limit misinformation and offensive content. Mohan said YouTube looks at a number of signals beyond total watch time, and that extreme content doesn’t drive more engagement than other content.
  • Rabbit holes: In a New York magazine piece, Brian Feldman took issue with Mohan’s claim that there is no “rabbit hole” effect, in which YouTube steers viewers toward ever more extreme content. That claim assumes most users actively choose which videos to watch, Feldman says, when in fact recommendations and the autoplay feature also play a significant role.
  • Easy to find: Despite the company’s repeated claims that it is trying hard to get rid of such material, neo-Nazi content is still quite easy to find on YouTube, according to a report by Vice’s Motherboard site. Even after Motherboard alerted the service to specific videos, YouTube merely removed the ads from them (a step known as demonetization) but left the videos up.
  • Radicalization: Last year, The Daily Beast spoke with a number of former neo-Nazis and other members of the far right, many of whom described becoming increasingly radicalized by watching extreme YouTube videos. Sociologist Zeynep Tufekci called the platform “The Great Radicalizer” in an essay for The New York Times.

 

Other notable stories:

  • The New York Times is reporting that members of Special Counsel Robert Mueller’s team say that Attorney General William Barr’s letter summarizing the team’s report is misleading. “The officials and others interviewed declined to flesh out why some of the special counsel’s investigators viewed their findings as potentially more damaging for the president than Mr. Barr explained,” write Nicholas Fandos, Michael S. Schmidt, and Mark Mazzetti.
  • The Telegraph newspaper in Britain is being paid by Facebook to run sponsored content that defends the company’s practices in areas like misinformation and online harassment, according to a report from Business Insider.
  • The government in Singapore has introduced a proposed law that would require social platforms to remove fake news and would make the spread of false stories a criminal offense. Freedom of information advocates are concerned about the impact such laws could have on speech and on journalism.
  • The Guardian says that it plans to have at least two million readers supporting it financially through its membership program by 2022, and the company says it has reduced its costs by as much as 20 percent over the past three years.
  • WordPress announced an initial group of 12 newsrooms it will work with on Newspack, its new publishing platform meant to provide a one-stop solution for independent news websites. The group of early adopters includes The Lens, The Brooklyn Eagle, and Oklahoma Watch.
  • Lee Bollinger, president of Columbia University, says the United States has a duty to investigate and potentially bring a criminal case against the killers of Saudi dissident and Washington Post contributor Jamal Khashoggi. He says Khashoggi’s killing was “a brazen and an egregious assault against American values and against the First Amendment.”
  • Craig Silverman of BuzzFeed writes about the aging population and the risks it poses for our information ecosystem, since older users are more likely to be targeted online with misinformation and hyper-partisan rhetoric.
  • WhatsApp has launched a fact-checking tip line in India in an attempt to combat fake news spread on the platform, and is working with a local fact-checking group to review text, photos, links, and video content flagged by users. But the group says the information will be used for research rather than enforcement.
  • A group of Republican legislators in Georgia proposed a law that would create a state journalism ethics board to develop a “canon of ethics” and hold publications and journalists to it. The bill was sponsored by Representative Andy Welch, a lawyer who has complained in the past about biased questions from TV reporters.

ICYMI: You may hate metrics. But they’re making journalism better.

 

Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.