Algorithm accountability is easier said than done
Over the past several years, Congress has held a seemingly never-ending series of hearings concerning “Big Tech,” the handful of companies that control much of our online behavior: Facebook, Twitter, and Google. Congressional committees have looked into whether the platforms allowed foreign agents to influence the 2016 election, whether their algorithms suppress certain kinds of speech, and whether they harm young women; in many cases, the hearings have also been a forum for grandstanding. This week saw the latest in the series, a hearing by the House Energy and Commerce Committee, called “Holding Big Tech Accountable: Targeted Reforms to Tech’s Legal Immunity.” The subject of the hearing was a piece of legislation that has been an ace in the hole for the platforms in all of their other congressional appearances: Section 230 of the Communications Decency Act.
Section 230 protects online services from liability for the content posted by their users—even if that content is harmful, hateful, or misleading. For the past few years, pressure has built in Washington for lawmakers to somehow find a way around it. That pressure came to a head in 2020, when former president Donald Trump, who had expressed concerns over alleged censorship of conservative speech on social media, signed an executive order asking the Federal Communications Commission to do something about Section 230 (even though the agency has no legal authority to do so). Before he became president, Joe Biden said that he believed Section 230 “needs to be revoked, immediately”; since he took office, legislators have put forward a number of proposals in an attempt to do that. A recent proposal from Democratic Senator Amy Klobuchar would carve out an exception for medical misinformation during a public-health emergency, making the platforms liable for algorithmically promoting anything the Department of Health and Human Services defines as health misinformation.
Republican members of Congress have introduced their own proposals for a host of other Section 230 carve-outs, aimed at forcing platforms to keep certain kinds of content up (mostly conservative speech) while requiring them to remove other kinds, such as cyber-bullying. This week’s hearing was held to consider a number of other pieces of legislation aimed at weakening or even dismantling Section 230. They include one supported by four of the top Democratic members of the Energy and Commerce Committee, called the “Protecting Americans From Dangerous Algorithms Act,” which would open the platforms to lawsuits when the personalized recommendations they make to users cause harm. At least some of the hearing was taken up—as many previous ones have been—with statements from Republican members about how platforms like Facebook and Twitter allegedly censor conservative content, a claim that studies have shown to be untrue.
Frances Haugen, the former Facebook staffer turned whistleblower who leaked thousands of documents to the Wall Street Journal and then to a consortium of other media outlets, has helped fuel the desire to hold the platforms to account. During her testimony this week, she took time to remind the committee that well-meaning efforts to do so can have unintended side effects. The 2018 law known as FOSTA-SESTA, for example, was designed to prevent sex trafficking, but Haugen noted that it also made things more difficult for sex workers and other vulnerable people. “I encourage you to talk to human rights advocates who can help provide context on how the last reform of 230 had dramatic impacts on the safety of some of the most vulnerable people in our society but has been rarely used for its original purpose,” she said, according to Mashable.
This message was echoed by others who testified at the hearing (the first of two; the second is scheduled for next week). “It’s irresponsible and unconscionable for lawmakers to rush toward further changes to Section 230 while actively ignoring human rights experts and the communities that were most impacted by the last major change to Section 230,” Evan Greer, director of Fight for the Future, told the committee. “The last misguided legislation that changed Section 230 got people killed. Congress needs to do its due diligence and legislate responsibly. Lives are at stake.” According to a recent review of the legislation by human-rights experts, FOSTA-SESTA has had “a chilling effect on free speech, has created dangerous working conditions for sex-workers, and has made it more difficult for police to find trafficked individuals.”
A number of critics of the more recent legislative attempts to do an end-run around Section 230 have also pointed to the difficulty of targeting the things that algorithms do: platforms use a multitude of algorithms for different purposes—to recommend content to users, for instance, but also to sort and filter it—and defining which ones are harmful, and why, is not easy. “I agree in principle that there should be liability, but I don’t think we’ve found the right set of terms to describe the processes we’re concerned about,” Jonathan Stray, a visiting scholar at the Berkeley Center for Human-Compatible AI, said at the hearing. “What’s amplification, what’s enhancement, what’s personalization, what’s recommendation?” If scientists and tech scholars have difficulty answering these questions, it seems unlikely that Congress will find it any easier.
Here’s more on Section 230 and the platforms:
- Who’s on First: Using CJR’s Galley platform, I held a series of discussions about Section 230 earlier this year with a group of experts in law and technology, including Jeff Kosseff, a law professor at the Naval Academy and author of a history of Section 230; Mike Masnick, who runs Techdirt and co-founded the Copia Institute, a technology think tank; Mary Anne Franks, a law professor at the University of Miami; and Eric Goldman, a law professor at Santa Clara University. “To the extent that people want to force social media companies to leave certain speech up, or to boost certain content,” said Franks, “their problem isn’t Section 230, it’s the First Amendment.”
- Do no harm: When it comes to Section 230 reform, “first, policymakers should do no harm,” Cameron Kerry, a former Obama administration official, said in his remarks to an April workshop held by the National Academies of Sciences, Engineering, and Medicine’s Committee on Science, Technology, and Law. “Ill-conceived changes to Section 230 actually could break the internet,” he said. “Many proposed solutions—such as mandating content moderation, imposing common carrier obligations, or outright repeal—present potential unintended consequences, including diminishing freedom of expression.”
- The road to nuance: Daphne Keller, a former associate counsel at Google who directs the Program on Platform Regulation at Stanford’s Cyber Policy Center, wrote in a paper published by the Knight First Amendment Institute that the desire to regulate recommendation or amplification algorithms is understandable, but that workable laws are a long way off. “Some versions of amplification law would be flatly unconstitutional in the US,” she writes. “Others might have a narrow path to constitutionality, but would require a lot more work than anyone has put into them so far. Perhaps after doing that work, we will arrive at wise and nuanced laws regulating amplification. For now, I am largely a skeptic.”
Other notable stories:
- The St. Louis Post-Dispatch reports that, before Missouri officials accused one of the paper’s reporters of “hacking” a state website by viewing its HTML source code, the Department of Elementary and Secondary Education “was preparing to thank the newspaper for discovering a significant data vulnerability, according to records obtained by the Post-Dispatch through a Sunshine Law request.” A press release expressing gratitude toward the newspaper was prepared, the paper reported, but the next day “the Office of Administration issued a news release calling the Post-Dispatch journalist a ‘hacker.’” State police later launched a criminal investigation into the incident, which the paper said is still ongoing.
- On Thursday, two Georgia election workers who were targeted by a right-wing campaign claiming they manipulated ballots filed a defamation lawsuit against The Gateway Pundit, a right-wing news site, the New York Times reports. “The suit was filed by Ruby Freeman and her daughter, Shaye Moss, both of whom processed ballots in Atlanta during the 2020 election for the Fulton County elections board,” the Times said. “It follows a series of defamation claims filed by elections equipment operators against conservative television operators such as Fox News, Newsmax and One America News.”
- The union representing 61 members of BuzzFeed’s newsroom held a 24-hour virtual walkout on Thursday, as the company prepares to go public by merging with a special-purpose acquisition company. “There is no future of BuzzFeed without the workers, no product for them to take public,” said Addy Baird, chair of BuzzFeed News’ union, according to a report in New York magazine. Meanwhile, BuzzFeed’s merger has not met with as much investor interest as the company hoped, and is expected to raise less money than originally planned, according to Alex Weprin of the Hollywood Reporter.
- Meta, formerly known as Facebook, published a year-end Adversarial Threat report in which the company describes how it found and removed six networks of accounts for what it calls “coordinated inauthentic behavior,” including operations in China, Palestine, Poland, and Belarus. The company also said that it is expanding a beta project in which it shares data from CrowdTangle, its content-analytics tool, with security researchers, which it hopes will make it easier to find similar behavior. Facebook has been criticized by researchers and journalists for not sharing enough data with outside experts.
- The New York Post says Meredith laid off the staff of its Shape magazine but then asked them to continue working as freelancers on a new magazine called Sweet July, run by Food Network star Ayesha Curry. “Employees of Shape magazine—for which Meredith shuttered print operations last month—had been pulling double duty working overtime on Curry’s lifestyle mag, but without receiving additional pay,” the paper says. A source told the Post: “They were laid off without notice, but it seemed the company forgot the team that had been fired was also working on Curry’s magazine.”
- In a report for the Tow Center at Columbia’s School of Journalism, Jacob Nelson looked at the impact of social-media policies in newsrooms. “Journalists have learned that engaging with their audiences via social media platforms carries personal and professional risks—namely accusations of political bias that can lead to termination from their jobs, as well as trolling, doxing, and threats of physical violence,” Nelson wrote. “This report examines the extent to which newsroom managers help—or hinder—their journalists when it comes to navigating the risks and challenges of audience engagement via social media platforms.”
- Meghan Markle won the latest round of her long-running lawsuit against Associated Newspapers Limited, the publisher of the Daily Mail, the Mail on Sunday, and Mail Online in Britain, when a judge dismissed an appeal by the company, the Daily Beast reported. “Meghan was suing ANL for invasion of privacy and violating her copyright after ANL published extensive sections of a ‘deeply personal’ hand-written letter she sent to her estranged father shortly after her wedding to Harry,” the Daily Beast said, adding that a judge earlier this year granted Markle a summary judgment, which meant “he had unilaterally decided there was absolutely no prospect of ANL succeeding.”
- Canadian media companies expect Google and Facebook to start paying them as much as $100 million a year once the government passes legislation requiring the platforms to strike deals with publishers, the Press Gazette reported. The news bargaining code is expected to be similar to one that Australia passed, which forced the technology companies to license content or face compulsory arbitration and financial penalties. “Senior industry sources spoken to by Press Gazette expect the legislation to come into force by the summer or early autumn,” the magazine reported.