The Media Today

Congress fails to grapple with social-networking algorithms

April 29, 2021
 

The history of Congressional hearings into the inner workings of Facebook, Twitter, and Google isn’t filled with penetrating insights or dogged investigation. For the most part, it’s been a series of sideshow-style events, with a lot of grandstanding by senators and members of Congress designed to get airtime on TV news shows or help with re-election bids, not to mention finger-wagging about non-existent fears, such as the alleged bias social platforms like Facebook have against conservative voices. For every hard-hitting question about the ways in which these networks distort information or use personal data for ad targeting, there have been dozens more poorly informed inquiries, like Republican Senator Orrin Hatch’s infamous question in 2018 about how Facebook makes money. “We sell ads, Senator,” chief executive Mark Zuckerberg replied, no doubt overjoyed at such a softball pitch.

Given that backdrop, the likelihood of yet another Congressional hearing producing anything of substance was extremely low, especially since the one that concluded on Tuesday—titled “Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds”—didn’t involve any of the chief executives of Facebook, Twitter, or Google. (Senior staffers came instead.) The absence of high-profile names helps explain why there were no front-page headlines with quotes from those involved, or video clips of senior executives being grilled by a senator. In advance of the hearing, some argued that the lack of big names might actually be a positive development, since there was less chance of the whole thing turning into a circus.

So was this hearing notable for its depth or perspicacity? Not really. If anything, there was less outrage than there probably should have been about the hidden algorithms that control what we see and do on social platforms.

At the start of the hearing, Democratic Senator Chris Coons said “there’s nothing inherently wrong” with how Facebook, Twitter, and YouTube use algorithms. He said the committee wasn’t considering any actual legislation and that the hearing was designed to be a listening session for legislators and the platforms. That sanguine framing was at odds with the testimony of some of the expert witnesses, however, including Joan Donovan, who runs the Technology and Social Change project at Harvard’s Shorenstein Center. “The biggest problem facing our nation is misinformation-at-scale,” she told the committee, adding that “the cost of doing nothing is democracy’s end.” On Twitter, Donovan criticized the hearing for not going deeper. “The companies should have been answering questions about how they determine what content to distribute and what criteria is used to moderate,” she said. “We could have also gone deeper into the role that political advertising and source hacking plays on our democracy.” For its part, Facebook routinely argues that its algorithms merely give you more of what you say you want.

Congress faces a number of challenges when it comes to dealing with the algorithms that power Facebook, Twitter, and Google. Very little is known about how they work, or how and why they are tweaked. Facebook routinely talks about changes it has made to its News Feed algorithms and describes in very general terms what it is trying to do (highlight more personal content, for example), but the specifics are always kept secret. Twitter rarely says anything about the algorithms it uses, and Google never does. And Facebook and the other major platforms are protected by both the First Amendment and Section 230 of the Communications Decency Act. The first protects the rights of these companies to curate their content in whatever way they wish (within certain limits); the second not only reinforces that right to curation, but also protects them from legal liability for any content they host that is created by their users.

There are attempts underway to limit some of Section 230’s protections, including two proposed laws that were brought up during the hearing. One is the “Protecting Americans from Dangerous Algorithms Act,” which would remove Section 230 liability protection from any platform if its algorithms are used to “amplify or recommend content directly relevant to a case involving interference with civil rights… or in cases involving acts of international terrorism.” The problem with taking this approach, according to critics like Will Duffield, writing at Techdirt, is that if the platforms are exposed to liability in such cases, “they will cleanse [their platforms] of potentially extreme, though First Amendment protected, speech. This amounts to legislative censorship by fiat.” In addition, Duffield and others argue, much of the radicalization and disinformation that Congress is concerned about occurs in private groups and messaging services, which the proposed legislation would not address at all. These kinds of laws are not only likely to fail to achieve their goals, but would also make everyone’s experience on social platforms like Facebook much less safe, says the Electronic Frontier Foundation.

Here’s more on the platforms:

  • Failure to prevent: BuzzFeed News revealed last Thursday that an internal Facebook report criticized the company for failing to prevent the “Stop the Steal” movement from using its platform to incite the January 6 attack on the US Capitol. The report discusses how Facebook missed critical warning signs about the growth and influence of the movement, and concludes that “the company was unprepared to stop people from spreading hate and incitement to violence.” The report’s authors published the document to Facebook’s internal message board, making it available to company employees, but it was later removed.
  • Not reconcilable: Justin Hendrix, co-founder, chief executive, and editor of Tech Policy Press, writes that Republican Senator Ben Sasse made the most perceptive point at the hearing when he argued that the answers the platforms were providing about the way their algorithms function were simply “not reconcilable” with the positions of their critics. This, Hendrix argues, puts the focus back on lawmakers, “whose duty it is to reconcile the interests of society and democracy with the business interests of the platforms.”
  • Undue influence: Members of Congress are reportedly looking into whether Google tried to influence a critic’s testimony at a hearing last week about the future of app stores. Senators Amy Klobuchar and Mike Lee have asked for the details of an alleged phone call between a Google employee and a Match Group employee prior to a hearing before a Senate Judiciary subcommittee, during which Match and other Google critics accused the company of using its monopoly power to curb competition. According to one report, a Google executive called Match after the company submitted its written testimony, asking why its comments didn’t jibe with previous statements it had made about Google and its app store.

 

Other notable stories:

  • Project Veritas has filed a defamation lawsuit against CNN for saying during a broadcast in February that the group’s account was suspended from Twitter as part of a crackdown by the social network on users who spread misinformation, according to the Hollywood Reporter. The complaint says the Project Veritas account was actually suspended because it included the personal information of other users without their consent, something it says CNN should have known.
  • USA Today is experimenting with a paywall for some of its news stories, says the Poynter Institute. Earlier this month, the flagship of the Gannett chain started putting some of its stories behind a paywall, asking readers to sign up for a digital-only subscription at $4.99 a month, according to Poynter. Besides a short note that appeared along with the request, Gannett has been mum about moving USA Today content behind a paywall, though a spokesperson for the chain confirmed to Poynter that it is testing such a service.
  • Emily Bell, director of the Tow Center for Digital Journalism at Columbia University, writes about how legislation like Australia’s new bargaining code for social platforms risks putting too much power in the hands of Facebook and Google. “The disappearance of advertising support and the consequent collapse of local journalism is one of the most effective tools being used to leverage more regulatory oversight against the platforms,” she writes. “But the scramble to cross-subsidize leaves unanswered the uncomfortable question of whether this close relationship of corporate power and supposedly accountability journalism is something that needs dissolving rather than encouraging.”
  • A tweet from an Oracle executive that included the Signal and email account info of a female Intercept reporter was found to have violated Twitter’s policies, according to a report by Gizmodo. Ken Glueck, a vice-president with the software company, was forced to take down the tweet and had his account locked in read-only mode for 12 hours, the social network confirmed. The reporter, Mara Hvistendahl, recently published a story on how reseller networks in China enable the government to acquire Oracle’s technology.
  • A new survey of news consumers found that most preferred “solutions journalism” stories to traditional news reports. Respondents said stories containing proposed solutions were more engaging than traditional news stories, according to the survey firm, SmithGeiger. The survey, which was commissioned by the Solutions Journalism Network, found that these results were consistent across all ages and political persuasions, the firm reported; while the study looked at consumers of TV news content, SmithGeiger said it believes that the results would hold for other platforms as well.
  • The Trans Journalists Association released a statement saying newsrooms should allow trans journalists to retroactively change their bylines once they have come out, to replace their “dead name” with the name that they have chosen to use. “We charge that it’s inappropriate not to retroactively change a trans journalist’s byline when that is what the journalist in question requests,” the statement says. “We strongly urge newsrooms and media organizations to change a trans journalist’s byline to reflect their lived experience without issuing a correction or a disclaimer.” The New York Times is reportedly fighting with its union over a request to allow bylines to be retroactively changed.
  • Technology news magazine Protocol wrote about News Break, an app that is run almost entirely by algorithms and artificial intelligence. According to the report, it is the most downloaded news app in the world, ahead of apps from the New York Times, the BBC, and even Google News. Publishers are ecstatic about the amount of traffic it drives to their sites, according to the Protocol report, and the content it carries is all local. The app was founded by a former Yahoo executive and a former Baidu executive with experience in China, and is similar to other news apps that have become popular in that country, such as Toutiao.
  • Last week, Time magazine started accepting Bitcoin and 31 other types of cryptocurrencies from paid subscribers, through a partnership with Crypto.com, according to a report by Digiday. The magazine has also started letting sponsors pay in Bitcoin for their advertising campaigns, and a crypto asset manager named Grayscale reportedly signed the first deal two weeks ago. (The terms of the partnership with Crypto.com and the sponsorship deal were not disclosed, Digiday says.) Time was bought in 2018 by Marc Benioff, the billionaire chief executive of software company Salesforce.
Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.