With all that has transpired between Facebook and the media industry over the past couple of years—the repeated algorithm changes, the head fakes about switching to video, the siphoning off of a significant chunk of the industry’s advertising revenue—most publishers approach the giant social network with skepticism, if not outright hostility. And yet the vast majority of them continue to partner with Facebook, to distribute their content on its platform, and even to accept funding and resources from it.
Given that Facebook has not only helped hollow out newsrooms across the country but arguably lowered the overall quality of civic discussion, repeatedly flouted laws around privacy in ways that have served the needs of foreign actors like the Russian government, and played a key role in fomenting violence in countries like Myanmar and India, it’s worth asking: Is it enough to be skeptical? Or is there an ethical case to be made that media companies, and the journalists who work for them, should sever their ties to Facebook completely?
The argument in favor of staying on Facebook is obvious: the social network has immense reach—2 billion monthly active users—which gives publishers the potential to increase their readership. Facebook also has billions of dollars to spread around, whether through advertising revenue sharing or by funding journalism initiatives, to which it recently committed a total of $300 million over the next three years. Together, Facebook, Twitter, and Google have become the biggest journalism funders in the world, a sad irony given their effects on the business.
Traffic from Facebook has been declining for many publishers, as the social network tweaks its algorithm to focus more on personal sharing. But even so, Facebook continues to drive a lot of revenue. So if you’re a publisher and you want to stay in business, you really have no choice but to work with it. The only other option is to continue to publish to a smaller and smaller group of readers, bringing in smaller amounts of ad revenue every year. Many media outlets have done the math and decided they have no option but to play ball—even if it means playing ball with a company that owns not only the ball, but also the stadium, all the uniforms, and the broadcast rights for all the games.
Some take the case even further. At a recent journalism conference in Perugia, City University of New York journalism professor Jeff Jarvis moderated a panel entitled “Criticize Facebook? Sure. Leave? Why?” Jarvis has argued that media companies shouldn’t just passively use Facebook, but should take advantage of the company’s knowledge about how social networks function to learn how to serve their audiences better. (The News Integrity Initiative, which Jarvis helped create, receives funding from Facebook, but he says this doesn’t affect his views about the company).
The panel featured Jesper Doub, a former Der Spiegel journalist who is now Facebook’s director of news partnerships for the EMEA region, Jennifer Brandel of Hearken, former Guardian editor-in-chief Alan Rusbridger, and James Ball, a UK journalist. While many on the panel (apart from Doub) were skeptical of Facebook’s relationship with the media, most seemed to agree with Ball that, despite a multitude of sins, media companies still needed to be on the platform because “that’s where the people are.” Individuals might want to quit the social network, Ball said, but media companies would be stupid to do the same.
Mandy Jenkins, president of the Online News Association and former editor-in-chief of Storyful, wrote recently that quitting Facebook would amount to deserting the people who use it for news, condemning them to a fact-free, disinformation-fueled future ruled by trolls. In other words, she suggests that those of us who are serious about journalism have an ethical obligation to remain, and to do whatever we can to improve the information environment there.
“Facebook and its subsidiary tools like Instagram and WhatsApp are where billions of people still come together,” Jenkins writes, “which means we still have to be there too,” not just as companies but as individuals. Quitting the platform is “taking the easy way out,” she says.
This may be the most ethically powerful argument for not abandoning Facebook. But it’s not enough. The reality of journalism today is that working with the giant tech platform in any capacity amounts to a Faustian bargain. The benefits of doing business with Facebook don’t begin to outweigh the ethical compromises required to do so.
It’s not just the pressure that Facebook’s massive size and advertising-industry dominance have placed on the media industry—that’s more of a business consideration for media companies than an ethical one. The ethical questions arise not because of Facebook’s dominance, but because of what it has had to do to put itself in that position. By using Facebook, journalists tacitly let the platform off the hook, endorsing its scale as a benefit detached from the harm the company does.
Take the harvesting of personal data on more than 2.5 billion people, for example. Users of Facebook provide this information willingly, and in return for it they get to connect and share photos with their friends for free. But Facebook also handles that data in problematic ways, as evidenced by the Cambridge Analytica scandal—in which the personal data of more than 50 million people was shared with researchers and then sold as the raw material for psychographic profiles, created as part of an attempt to influence the 2016 US election and the UK’s Brexit vote, among other things.
This breach was so significant that the Federal Trade Commission came to the conclusion that it broke the terms of a so-called “consent decree” that Facebook agreed to in 2011, in which the company promised to uphold user privacy. The company can now be hit with a fine of as much as $5 billion. (For context, Exxon paid a fine of $125 million for the 1989 Valdez oil spill, or about $260 million in current dollars.) The consent decree was mandated by the FTC because the commission found that Facebook “deceived consumers by telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public.”
Facebook claims that it has closed the loopholes that Cambridge Analytica used. But new ones keep magically appearing: the company admitted recently that it uploaded the email contacts of millions of users without telling them. What happened to those contacts? No one knows. There have also been repeated leaks in which data on millions of users was shared when it shouldn’t have been.
At this point, Facebook resembles a Valdez-era industrial conglomerate, continually leaking hazardous materials into rivers, lakes, and municipal water supplies, and promising to do better every time it gets caught. Why would anyone partner with such a company? By providing their content, and information about their readers and subscribers, media companies are complicit in those data-collection practices.
Then there’s Facebook’s obvious role as a distributor of misinformation—and not just fake news about Pizzagate, or Hillary Clinton, but hate speech and propaganda that contributed to the endangerment of the lives of hundreds of thousands of Rohingya in Myanmar, who were forced from their homes and in some cases killed. Radical, anti-Muslim groups used Facebook as their pulpit to argue that the Rohingya needed to be exterminated, and so did members of the Myanmar military apparatus, who helped fuel the hatred. In a report on the crisis, the United Nations described it as “a genocide,” and said Facebook was partly to blame.
It’s not just Myanmar. In dozens of other countries, Facebook has helped inflame cultural wars and put people at risk, in part because it hasn’t spent the time to understand the local environment, or doesn’t have anyone on the ground who can raise a warning flag. This despite a workforce of more than 30,000 people and a profit last year of $6.8 billion.
In India, Facebook-owned WhatsApp is the most popular form of communication for hundreds of millions of people, which means that it has also become the primary method for sharing hateful gossip about various cultural groups within that country. Dozens of people have been killed by mobs in rural areas in India based on rumors and fake news reports about kidnappers trying to snatch people’s children.
A number of the company’s earliest backers have repudiated it for this reason and say they now believe it might actually be a force for ill. Roger McNamee, a legendary Silicon Valley venture capitalist and an early investor in Facebook, says the company he advised when it was small is now “a threat to democracy” because of the way it distributes misinformation and hate. McNamee, who recently published a book entitled Zucked: Waking Up to the Facebook Catastrophe, has also said that Zuckerberg and his fellow Facebook founders knew there were potential negative effects of the giant network they wanted to build, but they went ahead anyway.
In an infamous internal memo, Facebook senior executive Andrew “Boz” Bosworth said: “Maybe someone dies in a terrorist attack coordinated on our tools. The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good.” After the memo became public last year, Bosworth argued that this was a theoretical position he took as part of an internal debate, but it’s a pretty good summation of how Facebook has approached its growth—unrelentingly, and without concern for its effects.
Former high-level Facebook executive Chamath Palihapitiya, like Tristan Harris and his Center for Humane Technology, is convinced that Facebook is a potentially negative force in society, in part because it uses psychological tricks to compel users—especially young ones—to revisit the site and spend more time there. Some psychologists are convinced that time spent on social networks like Facebook (and Instagram) can actually be bad for mental health, in part because these networks encourage users to compare themselves to others. Former Mozilla developer Aza Raskin called the tricks such networks use “behavioral cocaine.”
There are plenty of obvious reasons to be on Facebook—there are billions of people there, it doesn’t cost anything, and there’s the chance that you might get some kind of revenue boost. But there are also some pretty powerful reasons why you might want to rethink your relationship with Facebook. How much of what you are doing serves the company’s interests rather than yours, or the interests of journalism, or society in general?
Mathew Ingram was CJR’s longtime chief digital writer. Previously, he was a senior writer with Fortune magazine. He has written about the intersection between media and technology since the earliest days of the commercial internet. His writing has been published in the Washington Post and the Financial Times as well as by Reuters and Bloomberg.