
YouTube wants the news audience, but not the responsibility

March 19, 2018

After coming under fire for promoting fake news, conspiracy theories, and misinformation around events like the Parkland school shooting, YouTube says it will take a number of steps to fix the problem. But the Google-owned video platform still seems to be trying to have its cake and eat it too when it comes to being a media entity.

This week at the South by Southwest conference in Texas, YouTube CEO Susan Wojcicki said that the site plans to show users links to related articles on Wikipedia when they search for topics known to involve conspiracy theories or hoaxes, such as the moon landing or the belief that the earth is flat.

Given the speed with which information moves during a breaking news event, this might not be a great solution for situations like the Parkland shooting, since Wikipedia edits often take a while to show up. It’s also not clear whether this will have any impact on users’ propensity to believe the fake content that they see on YouTube.

In addition to those concerns, the Wikimedia Foundation (which operates Wikipedia) said no one from Google notified the organization of YouTube’s plan. And some who work on the crowdsourced encyclopedia have expressed concern that the giant web company—which has annual revenues in the $100 billion range—is taking advantage of a non-profit resource instead of devoting its own financial resources to the problem.


At SXSW, Wojcicki said, “If there’s an important news event, we want to be delivering the right information,” but then quickly added that YouTube is “not a news organization.” In other words, Google wants to benefit from being a popular source for news without having to assume the responsibilities that come with being a media entity.


This sounds a lot like the argument that Facebook has made when criticized for spreading fake news and misinformation—namely, that it is merely a platform, not a media outlet, and that it doesn’t want to become “an arbiter of truth.”

Until recently, Facebook was taking most of the heat on fake news, thanks to revelations about how Russian trolls used the network in an attempt to influence the 2016 election. At Congressional hearings into the problem in November, where representatives from Facebook, Google, and Twitter were asked to account for their actions, Facebook took the brunt of the questions, followed closely by Twitter.

At the hearing, Google argued that it’s not a social network in the same sense as Facebook and Twitter, and therefore doesn’t play as big a role in spreading fake news. But this was more than a little disingenuous, since it has become increasingly obvious that YouTube—which relies on social sharing in much the same way Facebook does—has played and continues to play a significant role in spreading misinformation.

Following the mass shooting in Las Vegas last October, fake news about the gunman showed up at the top of YouTube searches about the incident, and after the Parkland shooting, YouTube highlighted conspiracy theories in search results and recommended videos. At one point, eight out of the top 10 results for a search on the name of one of the students either promoted or talked about the idea that he was a so-called “crisis actor.”

After this was pointed out by journalists and others on Twitter, the videos started disappearing one by one, and by the following day, there were no conspiracy theories in the top 10 search results. But in the meantime, each of those videos got thousands or tens of thousands of views.

Public or media outrage seems to have pushed Google to take action in the most recent cases. But controversial content on YouTube has also become a hot-button issue inside the company, in part because advertisers have raised a stink about it, and advertiser pressure has a very real impact on Google’s bottom line.


Last year, for example, dozens of major-league advertisers—including L’Oreal, McDonald’s, and Audi—either pulled or threatened to pull ads from YouTube because their ads were appearing beside videos posted by Islamic extremists and white supremacists. Google quickly apologized and promised to update its policies to prevent this from happening.

The Congressional hearings into Russian activity also seem to have sparked some changes. Both the Senate and House of Representatives hearings scrutinized the fact that Russia Today—a news organization with close links to the Russian government—was a major user of YouTube.

Google has since responded by adding warning labels to Russia Today and other state broadcasters to note that they are funded by governments. This move has not come without controversy, however: PBS complained that it got a warning label, even though it is funded primarily by donations and only secondarily by government grants.

As well-meaning as they might be, warning labels and Wikipedia links aren’t likely to solve YouTube’s misinformation problem, because that problem is built into the structure of the platform, as it is with Facebook and the News Feed. Social networks have an economic interest in fueling the spread of engaging content, whether or not it is true, in part because it keeps users on the platform.

In a broad sense, fake news is driven by human nature. Conspiracy theories tend to be much more interesting than the truth, hinting at secrets known only to a select few. On social networks, recommendation algorithms effectively turn that appeal into a kind of vicious circle.

On YouTube, for example, the algorithm tracks people clicking on and watching conspiracy theory videos, sees that the content is popular, and moves those videos higher in the recommended rankings. That in turn shows the videos to more people, which makes them look even more popular, and so on.
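To see how that loop compounds, here is a minimal, purely illustrative sketch in Python of a naive popularity-driven ranker. The video titles, click rates, and ranking rule are assumptions invented for this example, not a description of YouTube’s actual system; the point is only that a small edge in “clickability” snowballs once every click pushes a video higher in the next viewer’s ranking.

```python
import random

# Illustrative toy model only: rank videos purely by accumulated views,
# so every click makes a video more likely to be recommended next time.
# Titles and click-through rates below are invented for this sketch.
videos = {"factual report": 0, "conspiracy theory": 0}   # view counts
click_appeal = {"factual report": 0.4, "conspiracy theory": 0.6}

def recommend():
    """Return videos ordered by view count, most-watched first."""
    return sorted(videos, key=videos.get, reverse=True)

for viewer in range(10_000):
    for title in recommend():
        # The top-ranked video is offered first; a click adds a view,
        # which lifts that video even higher for the next viewer.
        if random.random() < click_appeal[title]:
            videos[title] += 1
            break

print(videos)  # the marginally more "clickable" video ends up dominating
```

Run this and the gap between the two videos grows far wider than the underlying difference in appeal would suggest, which is the vicious circle described above.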

The result is that users are pushed toward more and more controversial or polarizing content, regardless of the topic, as sociologist Zeynep Tufekci described in a recent New York Times essay. The platform, she says, has become “an engine for radicalization”:

In effect, YouTube has created a restaurant that serves us increasingly sugary, fatty foods, loading up our plates as soon as we are finished with the last meal. Over time, our tastes adjust, and we seek even more sugary, fatty foods, which the restaurant dutifully provides. When confronted about this by the health department and concerned citizens, the restaurant managers reply that they are merely serving us what we want.

When Guillaume Chaslot, a programmer who worked at Google for three years, noticed this phenomenon while working on the YouTube recommendation algorithm, he was told that what mattered was that people spent a lot of time watching videos, not what kind of videos they were watching.

Former YouTube Head of Product Hunter Walk said on Twitter recently that he once proposed bringing in news articles from Google News, or even tweets, to run alongside fake news and conspiracy theory videos and possibly counter them, rather than taking them down. The proposal was never implemented, in part because Google executives made it clear that, at the time, growing Google+ was a more important goal than fixing YouTube.

In addition to adding Wikipedia links, Google has also promised to beef up the number of moderators who check flagged content, and has created what it calls an “Intelligence Desk” to find offensive content much faster. And it plans to tweak its algorithms to show more “authoritative content” around news events. One problem, however, is that it’s not clear how the company plans to define “authoritative.”

The definition of what’s acceptable also seems to be in flux even inside the company. YouTube recently said it had no plans to remove a channel called Atomwaffen, which posts neo-Nazi content and racist videos, and that the company believed adding a warning label “strikes a good balance between allowing free expression and limiting affected videos’ ability to be widely promoted on YouTube.”

After this decision was widely criticized, the site removed the channel. But similar neo-Nazi content reportedly remains available on other channels. There have been reports that videos from Infowars, the channel run by alt-right commentator Alex Jones, have been removed, and that the channel may be shut down completely, although YouTube denies this. At the same time, other controversial channels have been reinstated after YouTube said they were removed in error by moderators.

Facebook and YouTube both say they want to be a central conduit for news and information, but they also say they don’t want to be arbiters of truth. And their proposed solutions to the fake news problem have been lackluster and half-hearted at best. How long can they continue to have it both ways?


Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.