The Media Today

Facebook’s new data-sharing plans raise old concerns

May 26, 2022
Seen on the screen of a device in Sausalito, Calif., Facebook CEO Mark Zuckerberg announces the company’s new name, Meta, during a virtual event on Thursday, Oct. 28, 2021. Zuckerberg talked up his latest passion—creating a virtual reality “metaverse” for business, entertainment, and meaningful social interactions. (AP Photo/Eric Risberg)

On Monday, Meta, the parent company of Facebook, said that it plans to share more data about political ad targeting on its platform with social scientists and other researchers, as part of what the company calls its Open Research and Transparency project. According to CNN, Meta will provide “detailed targeting information for social issue, electoral or political ads” on the platform to “vetted academic researchers.” Jeff King, Meta’s vice president of business integrity, said in a statement that the information could include the various categories of interest—environmentalism, say, or travel—used to target users. The New York Times reported that, starting in July, the company’s publicly available advertising library will include a summary of this targeting information, including general location (e.g., “people who live in Pennsylvania”). By sharing the data, King said in his statement, Meta hoped “to help people better understand the practices used to reach potential voters on our technologies.”

Monday’s announcement suggests that Meta wants to be transparent about its ad targeting and other data-related practices. Researchers who have dealt with the platform in the past, however, tell a different story. Some say they have spent years trying to get Meta to provide even the smallest amount of useful information for research purposes, and that even when the company does so, either the data is incomplete—Meta admitted last year that it had provided researchers with faulty data, omitting about 40 percent of its user base—or the restrictions placed on its use are too onerous. In either case, researchers say, the resulting research is almost useless.

In some cases, Meta has shut down potentially promising research because researchers’ processes didn’t comply with the company’s rules. Last August, Meta blocked a New York University research effort called the Ad Observatory, part of NYU’s Cybersecurity for Democracy project, saying that the group was using a browser extension to “scrape” information from Facebook without the consent of users. In addition to blocking the research group from getting any data, the company shut down the researchers’ personal accounts. Laura Edelson, a postdoctoral researcher at NYU who worked on the project, and Damon McCoy, an associate professor of computer science and engineering at NYU, wrote in Scientific American that Facebook wants people to see it as transparent, “but in reality, it has set up nearly insurmountable roadblocks for researchers seeking shareable, independent sources of data.” (Edelson also talked with CJR last September about the shutdown of her research and the implications for social science.)

ICYMI: The Times, Haiti, and the treacherous bridge linking history and journalism

Critics of the company’s past data-sharing practices include Nathaniel Persily, a law professor at Stanford who cofounded and cochaired Social Science One, a highly touted data-sharing partnership with Facebook that he said he resigned from in frustration. In a discussion on CJR’s Galley platform, and in an essay written with a colleague and published by the Brookings Institution in December, Persily described how the process of trying to extract information from Meta through the Social Science One partnership was so frustrating and time-consuming that he eventually became convinced the only way forward was to force Meta to provide data through legislation. That conviction led to the Platform Accountability and Transparency Act, a bill put forward in December by senators Chris Coons, Rob Portman, and Amy Klobuchar. Persily explained that the bill would compel Meta and other platforms to share data with researchers, would protect those researchers from liability for the use of the data, and would force companies to “make public certain non-private data related to content, advertising, and algorithms.”

Interest in research about Facebook and its impact on society intensified last October with the release of the Facebook Papers, a cache of internal documents that Frances Haugen, a whistleblower and former Facebook product manager, leaked to the media. Some of the documents suggested that Facebook’s researchers knew about the platform’s ill effects but that senior management chose not to address them. Susan Benesch, founding director of the Dangerous Speech Project, wrote in The Atlantic that one takeaway from the Facebook Papers was that the company should be compelled to release more data to external researchers. “If Facebook employees continue to be the only ones who can monitor Facebook, neither they nor anyone else can make sufficient progress in understanding how toxic content thrives on social media platforms, how it leads to human suffering, and how best to diminish that suffering,” Benesch argued.


As Persily explained in his Brookings essay, persuading Facebook or Meta to share data with outsiders was difficult even before 2018, but it became exponentially more so after controversy arose over the practices of Cambridge Analytica, a consulting firm that acquired personal data on millions of Facebook users—data that originally came from a researcher who had been given access to the platform. The scandal “further chilled any platform efforts to make it easy for outside researchers to gain access to individual-level content,” Persily wrote. It also resulted in a $5 billion settlement with the Federal Trade Commission, in which Facebook promised to exercise better care over the personal information of its users. Meta used this settlement as a rationale for shutting down Edelson’s NYU research project, saying the data scraping breached the company’s commitment to protect user data. That justification failed to hold water, however: the FTC responded that the settlement didn’t prevent Meta from sharing data with researchers.

These days, Cambridge Analytica may be even more present in the minds of Meta executives. Also on Monday, Karl Racine, attorney general for the District of Columbia, said he is suing Mark Zuckerberg, Meta’s CEO, for failing to prevent Cambridge Analytica from misusing the personal information of Facebook users. The lawsuit alleges that Zuckerberg oversaw the discussions that led to the data breach; in a statement, Racine said Zuckerberg was “personally involved in Facebook’s failure to protect the privacy and data of its users leading directly to the Cambridge Analytica incident.” He added that the breach led to a “multi-year effort to mislead users about the extent of Facebook’s wrongful conduct,” and that the suit “sends a message that corporate leaders, including CEOs, will be held accountable for their actions.” Politico noted that the case is similar to one that Racine launched against Meta in 2018, which is ongoing.

Here’s more on Facebook:

  • France and data: Arcom, France’s media regulator, launched a public consultation on Wednesday on researcher access to data from online platforms, aiming “to collect the expectations and needs of researchers, associations, journalists, etc.,” as they design research involving the platforms. Article 31 of the European Union’s new Digital Services Act—which is expected to go into effect next year—regulates access by researchers to data from large digital platforms such as Facebook and Google.
  • COVID black box: Shirin Ghaffary, of Recode, wrote last year about how the lack of data from Facebook was making it difficult to determine the impact of covid misinformation, which President Biden said was putting people at risk by confusing them about the efficacy of vaccines. The reality, Ghaffary wrote, is that “we simply don’t know” what the real impact of that misinformation is, because the platform won’t provide enough data. “Right now, we’re guessing [on] a lot of stuff,” Katherine Ognyanova, an associate professor of communications at Rutgers University who co-leads the Covid States project, told Recode. “We can ask people questions. But Facebook truly has the data about what people have seen and how their attention is being devoted on the platform.”
  • Transparency: In an op-ed published in the New York Times, Haugen, the whistleblower who leaked internal documents about Facebook’s weak efforts at content moderation, wrote that “Europe is making social media better without restricting free speech [and] the US should too.” The Digital Services Act, which was recently passed by the European Union, is “a landmark agreement to make social media less toxic for users,” Haugen wrote. The new standards outlined in the bill, she argued, “will for the first time pull back the curtain on the algorithms that choose what we see and when we see it in our feeds,” in part because of the “ethic of transparency” on which they are based. Facebook’s poorly implemented existing content-moderation strategies, Haugen wrote, “leave those most at risk of real world violence unprotected.”
  • GDPR fail: The introduction of the General Data Protection Regulation (GDPR) in Europe in 2018 was expected by some to usher in a new era of penalizing the major platforms for improper use of personal information, but that doesn’t seem to have happened, Matt Burgess writes in Wired magazine. “One thousand four hundred and fifty-nine days have passed since data rights nonprofit NOYB fired off its first complaints under Europe’s flagship data regulation, GDPR,” he writes. The complaints allege that Google, WhatsApp, Facebook, and Instagram forced people into giving up their data without obtaining proper consent. Four years later, however, the group is still waiting for a final decision to be made in the case—and, says Burgess, “it’s not the only one.”


Other notable stories:

  • The Financial Times reported that Elon Musk will have to raise more cash in order to finance his proposed $44 billion acquisition of Twitter, after he scrapped a $6.25 billion loan arrangement that was backed by his shares of electric carmaker Tesla. “After ditching the margin loan, the amount of equity that Musk must secure to complete the deal now stands at $33.5 billion,” the FT reported, based on a regulatory filing Wednesday. I wrote recently for CJR about how Musk said the Twitter deal was “on hold” until he could confirm that the company’s estimate of spam and fake accounts was correct—news that prompted speculation that Musk wanted out of the deal. In other Twitter news, the company agreed to pay $150 million to settle allegations that it misused private information, like phone numbers, to target advertising.
  • For Poynter, Alex Mahadevan interviewed Justin Peden, a twenty-year-old journalism student in Alabama who has spent the past several years using a Twitter account called The Intel Crab to share open-source intelligence and reporting about violence in Ukraine. The account was anonymous until earlier this year, when Peden revealed his identity. The open-source intelligence, or OSINT, community is “the Wild West in many ways,” he told Poynter. “This is an unfortunate downside to the community, and something more relevant than ever as the war continues. I’m no longer anonymous for one simple reason. I want to be held accountable for my work, positively or negatively.”
  • A study of journalists in Canada found what researchers called “alarming levels of stress,” according to a news release from the Canadian Journalism Forum on Violence and Trauma. It found rates of mental health issues among journalists far above national averages, with 69 percent of those surveyed saying they suffer from anxiety, 46 percent reporting depression, and 15 percent saying they had symptoms of post-traumatic stress disorder. One in ten respondents said they had thought about suicide. The study was based on 1,251 survey responses from freelancers, editors, frontline reporters, video journalists, and news executives.
  • Independent media failed to gain wider public trust in Russia because they did not try “to reach those that we don’t know,” according to Evgeniya Dillendorf, the former UK correspondent for Novaya Gazeta, an independent Russian newspaper whose editor in chief, Dmitry Muratov, shared the 2021 Nobel Peace Prize, and which has since been forced to suspend publishing due to new Russian media laws that make it a crime to distribute “misinformation” about the war in Ukraine. Dillendorf told Press Gazette that while she admired the leadership at Novaya Gazeta, “there was sometimes this kind of fashion to be indifferent, to be above the fray, not to interfere [and] I think that was a great mistake.”
  • Rick Edmonds, a media business columnist at Poynter, said he was “wrong, wrong, wrong” when he wrote in 2019 that USA Today would likely kill its print version within two years. Edmonds said he asked Maribel Perez Wadsworth, publisher of USA Today, what the business case is for continuing to put out a print paper. “The print edition of USA Today is a profitable product,” she said, “which we will continue to publish as long as readers want it.” Paid printed copies distributed at hotels—numbers of which, Edmonds says, are down by more than 90 percent since 2019—remain “a nice billboard for USA Today,” Wadsworth said. “People think about it, even if they don’t take a copy.”
  • A new website that published leaked emails from leading proponents of Britain’s exit from the European Union is tied to Russian hackers, according to a report from Reuters that was based on interviews with a Google cybersecurity official and the former head of UK foreign intelligence. “The website—titled ‘Very English Coop d’Etat’—says it has published private emails from former British spymaster Richard Dearlove, leading Brexit campaigner Gisela Stuart, pro-Brexit historian Robert Tombs, and other supporters of Britain’s divorce from the EU, which was finalized in January 2020,” Reuters wrote.
  • According to a report from the Globe and Mail, millions of students in forty-nine countries had their personal information sent to advertisers and data brokers after schools made the switch to online learning during the covid-19 pandemic. The Globe says it collaborated with twelve other media organizations to get access to data from Human Rights Watch, which “alleges online education platforms actively or passively infringed upon children’s rights by collecting and sharing their personal information, such as their locations and web browsing histories.” The Globe said the investigative collaboration was coordinated by the Signals Network, a French-American nonprofit organization that supports whistleblowers.

ICYMI: Three months of press threats in Ukraine


Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.