Facebook has a trustworthiness score for you, but it’s top secret

When you play a game, it’s handy to have a score, so you know how you did compared to all the other players. But what if the score is one that Facebook assigns you based on your estimated “trustworthiness,” and the criteria behind the score are kept secret from you? That appears to be the case, according to a report from The Washington Post on Tuesday. A Facebook product manager in charge of fighting misinformation (there’s a job title for the ages) told the paper that the social network has developed the ranking system over the past year as it has tried to deal with “fake news” on the platform. Every user is given a score between zero and one that indicates whether they are considered trustworthy when it comes to either posting content or flagging already posted content as fake.

It’s “not uncommon for people to tell us something is false simply because they disagree with the premise of a story or they’re intentionally trying to target a particular publisher,” product manager Tessa Lyons told the Post. The trustworthiness score is designed in part to guard against this kind of gaming of the process. Facebook also took pains to point out that there is no single “reputation score” given to users, and that the trustworthiness ranking is not an absolute indicator of a person’s credibility. It is just one measurement among thousands of behavioral clues, Lyons said, which are used to determine whether a post is legitimate and/or whether a post was flagged improperly.

The speed with which Facebook tried to reassure users that they don’t have a single reputation score isn’t surprising, given all the attention on what is happening with social activity in China. There, the government is assigning all Chinese citizens a “social credit score” based on their behavior both online and offline, including what they share via networks like WeChat (which is a little like Facebook, Instagram, Snapchat, and PayPal combined into a single app). This social credit score can then be used to determine who gets access to certain services, including schools. No one is suggesting that anything quite so dystopian is going on at Facebook, but the idea of being assigned a secret trustworthiness score by a network that controls the information diet of more than two billion people still feels a tad uncomfortable.

Related: Facebook says Zuck doesn’t hate journalists after all, and doesn’t want them to die

Facebook’s personal trustworthiness score seems like a fairly obvious spinoff of its other attempts to crack down on fake news and misinformation, in the wake of the trolling of the network during the 2016 election, and the resulting Congressional hearings. The social network announced earlier this year that it is working on a trust score for publications, based in part on surveys of users to find out which media outlets they trust and don’t trust. Presumably that data will in turn influence Facebook’s rating of users who vote for specific publications, or routinely flag their content as fake.

The big question is what else a user’s trust rating will influence. Will it help determine whether their own content is favored by Facebook’s all-powerful News Feed algorithm? And if someone flags a lot of posts from media outlets as fake even when they aren’t, does that mean their own posts will be assumed to be fake as well? Will Facebook ever share this trustworthiness score with external partners such as banks and credit companies, or the federal government? As usual with so much that goes on at Facebook and the other major web platforms, we simply don’t know, and probably never will.


Here’s more on Facebook and its tangled relationship with both media and politics:

  • Fighting trolls: Facebook announced late Tuesday that it has removed 650 pages and accounts it says were part of a coordinated network of fake personas that showed behavior similar to that employed by the Russian troll farm known as the Internet Research Agency during the 2016 election. Some of the accounts appeared to be Russian while some were linked to Iran, the company said.
  • Facebook and hatred: Is Facebook helping to foment hatred toward immigrants? New research indicates that it might be. A study looked at thousands of anti-immigrant attacks in Germany and correlated them with various factors, and found that anti-immigrant violence was more likely in towns with higher rates of Facebook usage. This held true regardless of whether the town was large or small, wealthy or poor.
  • No more discrimination: Facebook says it is removing about 5,000 criteria from its ad-targeting options, in an attempt to cut down on potential discrimination in advertising, according to a report by BuzzFeed. Advertisers will no longer be able to automatically hide their ads from users who say they are interested in things like Passover, Evangelicalism, Native American culture, Islamic culture, and Buddhism.
  • Banking info: According to a report earlier this month, Facebook has contacted some large US banks, asking for information on the banking habits of its users, including credit-card transactions and account balances. That might seem a little troubling in the context of the social network’s trustworthiness score, not to mention the recent controversy with data leakage via Cambridge Analytica, but the company says it is not “actively seeking” banking information on its users.


Other notable stories:

  • Over the course of a few minutes on Tuesday afternoon, Paul Manafort was convicted on eight counts of fraud and Michael Cohen pleaded guilty to several crimes, including breaking campaign finance laws. Cohen, Donald Trump’s longtime personal lawyer and “fixer,” stated in court that he had broken campaign finance laws at the direction of Trump. “I apologize, we have some breaking news,” CNN’s Jake Tapper said at one point in the four o’clock hour. “It’s like a ‘Saturday Night Live’ skit.” The New York Times’s Michael M. Grynbaum has an overview of the coverage scramble that the bombshell breaking news set off.
  • Maya Kosoff at Vanity Fair says that by admitting in a recent interview that Twitter has a left-leaning bias, CEO Jack Dorsey essentially poured gasoline on the right’s favorite conspiracy theory, which is that the company deliberately squelches conservative content. “Dorsey effectively handed conservatives more ammunition, perpetuating the cycle that forces him to continually tiptoe around them,” she says.
  • Now that more than 400 newspapers have declared their solidarity with each other in fighting the spread of President Trump’s anti-media attitudes, what should the press do next? Melody Kramer and Betsy O’Donovan put together a list for the Poynter Institute of seven things the media should be focusing on now, including showing their work to prove its value, and sharing information with other media outlets.
  • Ruairi Casey writes for CJR about a unique publication called Kanere, which is written and published by and for residents of a large refugee camp in Kenya known as Kakuma. The paper was founded by Qaabata Boru, a journalism student who was jailed for an article he wrote in 2005 about his home country’s elections, and who later fled the country to avoid persecution for his political views.
  • In its latest analysis of the state of the news media, the Pew Research Center found that the audience for almost every major sector of the US news media fell last year, with the only significant exception being radio. Both local and network TV news fell by 7 percent, cable TV’s audience dropped by 12 percent, and digital news audiences declined by 5 percent. Circulation at US daily newspapers, meanwhile, fell by 11 percent.
  • The Solutions Journalism Network said Tuesday that it has partnered with Google to launch a feature called “Tell Me Something Good,” which will be available via Google’s smart assistant device, the Google Home, as well as on any smartphone that has the Google Assistant app installed. Users will be able to say “Tell me something good,” and have a story read to them that comes from the Solutions Journalism Network.
  • Washington Post national political correspondent Jenna Johnson found something interesting being sold at a booth stationed outside Donald Trump’s rally in Charleston, West Virginia: Books that consist solely of the president’s tweets from the first year of his presidency, bound in blue leather, being sold for $35 each. “There’s going to be one volume for each year,” a woman selling the books told Johnson. “It’s a lot of tweeting.”

ICYMI: How Trump made the Sunday shows great again

Has America ever needed a media watchdog more than now? Help us by joining CJR today.

Mathew Ingram is CJR's chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in The Washington Post and the Financial Times as well as Reuters and Bloomberg.