Facebook’s belated, vague, unhelpful election idea

On Thursday, Mark Zuckerberg, the chief executive of Facebook, wrote of his worry that “with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country.” So Facebook announced a series of steps designed, Zuckerberg said, to verify election-related content and prevent disinformation or attempts at meddling. In the week leading up to the election, for example, new political ads will be blocked. “I generally believe the best antidote to bad speech is more speech,” Zuckerberg wrote, “but in the final days of an election there may not be enough time to contest new claims. So in the week before the election, we won’t accept new political or issue ads.” Other steps: removing obvious misinformation, adding an information label to posts that make claims about voting, and limiting the number of times users can forward Facebook messages.

In the past, Facebook has been reluctant to manage its pages; this time, Zuckerberg was widely criticized for doing too little, too late. “Facebook has repeatedly fumbled its responsibility to protect our democracy,” Senator Elizabeth Warren said. “Now the stakes are higher than ever—and they need to do more than make small, performative tweaks.” Jackie Speier, a Democrat representing California’s fourteenth district in Congress, said the moves headed in the right direction, but wondered, on Twitter, “What about political ad campaigns already pumping out false info; algorithms that amplify lies, violence & misogyny; the refusal to fact check ads or take down distorted videos; & Facebook groups that become echo chambers for false info?” Others were underwhelmed, too.

As Steve Kovach, of CNBC, pointed out in a rundown of Facebook’s new rules, the changes don’t really do anything to stop anyone—including Donald Trump and his campaign—from posting misinformation about voting or about particular candidates on their personal or campaign pages, so long as the posts aren’t advertisements. Nor is Zuckerberg shutting down misinformation or misleading campaign ads that were created more than a week before the election. The rule that restricts the number of people to whom someone can forward a Facebook message—a change Facebook first made on its WhatsApp messaging service, in an attempt to reduce the spread of misinformation in India—can be easily avoided by sharing messages in batches, or as part of a collective action. The new policies are so limited, Kovach said, “that there’s no chance they’ll have any impact on the political discourse and news consumption across the site.”


In some cases, the new rules conflict with each other. Zuckerberg’s message promised that Facebook would “extend our work with election officials to remove misinformation about voting,” but he also wrote that Facebook would attach “an informational label” to any material that seeks to delegitimize voting methods or the outcome of the election. In other words, it appears that, sometimes, Facebook will remove disinformation about voting; other times, it will just apply a warning label. How will the company determine what to do when? That, like so much of what Facebook does involving content moderation, is unknown. And the fact that a single company has so much control over crucial election information is more than a little disturbing, as Zeynep Tufekci, a technology scholar and sociologist at the University of North Carolina, has pointed out.

In typical fashion, Facebook has responded to a serious threat by announcing something that sounds vaguely important but, on closer review, accomplishes little. What comes next is the implementation of these rules, which will be left to a campus full of anonymous twenty-something programmers, under the direction of a single thirty-something multi-billionaire. As Tufekci put it, “Mark Zuckerberg, alone, gets to set key rules—with significant consequences—for one of the most important elections in recent history. That should not be lost in the dust of who these changes will hurt or benefit.”


Here’s more on Facebook and disinformation:

  • The scales: Tara McGowan, the chief executive of Acronym, a liberal nonprofit group, said in a statement reported by the New York Times that Facebook’s changes not only won’t help reduce disinformation, but could actually wind up assisting the Trump campaign. “By banning new political ads in the final critical days of the 2020 election, Facebook has decided to tip the scales of the election to those with the greatest followings on Facebook—and that includes President Trump and the right-wing media that serves him,” McGowan said. A Trump spokesperson, meanwhile, said that the president’s campaign would be “silenced by the Silicon Valley mafia, who will at the same time allow corporate media to run their biased ads to swing voters.”
  • War games: Facebook, Google, Twitter, and Reddit are holding regular meetings together, along with federal law enforcement and intelligence agencies, to discuss potential threats to election integrity, according to a report from Axios, and some of the platforms have also been engaging in “war games” to practice responding to potential scary scenarios. Between March 1 and August 1, the report said, Twitter practiced its response to situations such as “foreign interference, leaks of hacked materials and uncertainty following Election Day.”
  • False choice: In April, CJR used its Galley discussion platform to hold a series of interviews with disinformation researchers about what Facebook and the other digital platforms should be doing to confront the problem. The interviewees included Karen Kornbluh and Ellen Goodman, co-authors of a new paper published by the German Marshall Fund called “Safeguarding Digital Democracy.” The policy debate on disinformation “has been hobbled by a false choice between allowing platforms or the government to censor,” Kornbluh said. “We propose instead empowering citizens through updating offline protections and rights (consumer protection, civil rights, privacy, campaign finance), supporting journalism and increasing accountability of platforms.”


Other notable stories:


  • Apple is delaying changes to its mobile operating system that could have a significant impact on publishers. The changes, when they’re made, will require apps to ask users whether they want their web activity to be tracked; some publishers fear that most people will opt out, which would make personalized ad-targeting impossible. In a statement on Thursday, Apple pushed the change to sometime next year, to give app makers a chance to adjust to the new protocol.
  • In recent days, prominent Republicans have posted several deceptively edited videos to Twitter and Facebook; according to a report from CNN, some racked up millions of views before either platform took action. One was a false video about Joe Biden, the Democratic presidential nominee, posted to the Twitter account of Steve Scalise, House Minority Whip. Only after public criticism ensued—and a person in the video attested that his words had been distorted—did Twitter add a label to the post describing the video as “manipulated media.” The same video was posted to Facebook, which did nothing.
  • Charles Richardson writes for CJR about a local election and the impact of the coronavirus on Macon, Georgia, as part of the Year of Fear series, in which CJR and the Delacorte Review report on what’s happening in towns across the United States. “This election was shaped by a force that none of the candidates in the runoff—for mayor, county commission, or school board—could control or predict: COVID-19. Certainly, the pandemic should receive part of the blame for lower turnout. Most houses of worship in the area were still meeting virtually, if at all. Standard ways of campaigning were tossed aside as candidates participated in Zoom debates and virtual town halls. The usual down-home glad-handing was nowhere to be found.”
  • Reuters spoke with six freelance journalists who wrote for a site called Peace Data, which Facebook and Twitter—acting on a tip from the FBI—identified as the center of a Russian political-influence campaign targeting left-wing voters in the United States, Britain, and other countries. Reuters reviewed emails between the Peace Data contributors and their employer showing that they were paid up to $250 per article. A person who identified herself as Bernadett Plaschil, an associate editor at Peace Data, told Reuters: “We’re really confused by these accusations and deny all of them.”
  • Despite Zuckerberg’s assurances that Facebook removed an event page where people discussed gathering in Kenosha, Wisconsin, to shoot and kill protesters, BuzzFeed News reports that Facebook never did. The event page was taken down by the militia group that created it, after two people were killed. More than four hundred complaints were sent to Facebook about the event page, but even after multiple reviews, the company decided it did not violate any rules. Zuckerberg later told employees this represented “an operational mistake,” and that the event page had been removed. But internal company discussions obtained by BuzzFeed show that’s not true.
  • Dan McCrum, of the Financial Times, described the intimidation, surveillance, and conspiracy theories he encountered while conducting a five-year investigation into Wirecard, a much-hyped financial services company that turned out to be a billion-dollar fraud. “A couple of years after the Twitter bots attacked me, Wirecard is a smouldering wreck,” he wrote. “The ex-chief executive, Markus Braun, is in jail, awaiting trial. This is the tale of what it was like to unravel and expose the reality of a criminal enterprise that relied on a network of professional enablers to keep in motion one of the biggest corporate frauds of the modern era.”
  • Tim Davie, the BBC’s new director general, said that he wants to maintain a mandatory license fee, not impose a voluntary subscription, because doing so “would make us just another media company” that serves only the few. As the BBC reported, he also spoke to staff about making a renewed commitment to impartiality. “It is not simply about left or right,” he said. “This is more about whether people feel we see the world from their point of view. If you want to be an opinionated columnist or a partisan campaigner on social media then that is a valid choice, but you should not be working at the BBC.”
  • The editor of the Loyola Phoenix, the student newspaper at Loyola University in Chicago, wrote a column responding to criticism about how her paper reported on Black Lives Matter protests in which several Loyola students were arrested. “We weren’t asked to not record videos and had we been we would’ve declined anyway,” the editor wrote. “To my knowledge, requests weren’t made until after the videos were already on Twitter. Our reporters posted videos of the arrests on social media because they occurred in a public space—North Sheridan Road. We don’t need consent for videos or photos taken of people in public. We didn’t take them down because that’s not what media outlets typically do. If something is blatantly wrong or inaccurate, we publish a correction. But nothing was inaccurate here.”



Mathew Ingram is CJR’s chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.