Facebook announced on Tuesday it is expanding a recent test that showed users more information about the articles in their News Feed and the media entities that publish them, in the hope that doing so will make it easier for people to determine who is trustworthy and who isn’t. The test started in October in a number of US markets, and the company says it is now rolling the feature out to everyone in the US, as well as adding more sources of information. The idea, according to Facebook, is to “provide more context for people so they can decide for themselves what to read, trust and share.”
The new features are a small part of the tech giant’s attempts to fix what is widely viewed as a “fake news” problem, one that exploded into public view after Russian trolls were shown to have manipulated the network to try to influence the 2016 election. The company has said it wants to cut down on news in the feed, as well as ensure the remaining news is “high quality.” But will these tweaks have any impact on Facebook’s role in spreading misinformation? That seems unlikely. Trust is a slippery concept when it comes to news, as multiple studies have shown—people tend to believe and share the news that confirms their existing preconceptions, and Facebook plays on that instinct.
Which nobody will understand nor care about as they mash share on the next fake article that confirms their previously held beliefs. https://t.co/AQtgsJrrmg
— Jeremy C. Owens (@jowens510) April 3, 2018
Facebook maintains its research, as well as that of unnamed “academic and industry partners,” shows that certain types of information help users evaluate credibility and determine whether to trust a source. So it is adding contextual links when an article is shared, including links to related articles on the same topic and stats on how often the article has been shared and where. It will also include a link to the publisher’s Wikipedia page if there is one (and indicate if there isn’t), which is something YouTube also recently said it is doing to add context to videos about conspiracy theories.
In addition to those elements, Facebook says it plans to add two new features, one that shows other recent stories published by the same outlet, and a module that shows whether a user’s friends have shared the article in question. The company is also starting to test whether users find it easier to gauge an article’s credibility if they get more information about the author: When they see an article in Facebook’s mobile-friendly Instant Articles format, some users will be able to click the author’s name and get additional info, including a description from Wikipedia if there is one. Whether any of these new features actually reduce the amount of questionable news shared on Facebook remains to be seen.
Related: The Facebook Armageddon
Here’s more on Facebook and its news and trust problems:
- Today in irony: While the social network says it wants to increase the trust people have in what they see in their News Feed, it is facing a trust crisis of its own, thanks to the news that personal information on 50 million users was acquired by a data firm with ties to the Trump campaign. Facebook recently updated its privacy settings in an attempt to show that it cares about the issue, and has taken pains to point out that the source of the data leak was plugged several years ago.
- An ultimatum: Indonesia has said it is prepared to shut down access to Facebook if there is any evidence the privacy of Indonesian users has been compromised. “If I have to shut them down, then I will do it,” Communications Minister Rudiantara told Bloomberg in an interview on Friday in the Indonesian capital of Jakarta, after pointing out the country had earlier blocked access to the messaging app, Telegram. “I did it. I have no hesitation to do it again.”
- Power move: As part of its attempts to atone for the Cambridge Analytica fiasco, Facebook recently said it is shutting off the ability of third-party data brokers to target users on the platform directly through what are called Partner Categories. But longtime digital ad exec and publisher John Battelle argues this is really about consolidating Facebook’s power over that kind of targeting.
- Fake news to blame? A study by researchers at Ohio State appears to show that belief in “fake news” may have affected the 2016 election, something that has been the subject of much debate. According to a Washington Post article on the research, about 4 percent of Democratic voters who supported Barack Obama in 2012 were persuaded not to vote for Hillary Clinton by hoax news stories, including reports that she was ill, and that she approved weapons sales to ISIS.
- Probe launched: The attorney general of Missouri has announced he is launching a probe into Facebook’s use of personal data following the Cambridge Analytica leak. Josh Hawley said he is asking the social network to disclose every time it has shared user information with a political campaign, as well as how much those campaigns paid Facebook for the data, and whether users were notified.
Other notable stories:
- During a shooting incident at YouTube’s headquarters in San Bruno, California, on Tuesday, the Twitter account of a YouTube product manager was hijacked and used to tweet fake news reports about the event, according to The Verge. After the hack was pointed out by a number of journalists, Twitter CEO Jack Dorsey said he was looking into it, and the fake tweets quickly disappeared.
- The Environmental Protection Agency tried to limit press access to a briefing by EPA head Scott Pruitt, but the move backfired thanks to journalists at Fox News. The agency reportedly told a TV crew from Fox about the briefing but didn’t tell the other major networks, at which point Fox let its competitors know and agreed to share reporting on the event.
- The Wall Street Journal reports that 94-year-old billionaire media mogul Sumner Redstone, who controls Viacom and CBS through his holding company National Amusements, won’t have much of a say in the proposed merger of the two companies because his voting power has been reduced. He also now reportedly communicates using an iPad with pre-programmed responses such as “Yes,” “No,” and “F*** you.”
- Joe Pompeo writes at Vanity Fair about what some see as a culture war taking place in the New York Times newsroom, thanks in part to growing numbers of young employees. “I’ve been feeling a lot lately like the newsroom is split into roughly the old-guard category, and the young and ‘woke’ category, and it’s easy to feel that the former group doesn’t take into account how much the future of the paper is predicated on the talent contained in the latter one,” one staffer said.
- The Reporters Committee for Freedom of the Press has released a report that looks at incidents in the US in the past year that threatened press freedom, based on the first annual assessment of data from the Press Freedom Tracker, an index that records attacks on journalists and the media. Out of 122 incidents logged by the tracker, almost half occurred at protests.