BuzzFeed to social platforms: More transparency, please

Amid publishers’ ongoing handwringing about social platforms’ control over media, BuzzFeed has become a poster child for embracing third-party distribution networks. This progressive view has helped it gain a massive global audience. But on Friday, days after news broke that BuzzFeed had inked a $3.1 million deal to produce Facebook Live videos, company brass called on Facebook and similar platforms to codify their internal processes for regulating speech.

“We don’t know why Facebook takes down some stories or some posts but not some others,” BuzzFeed Assistant General Counsel Nabiha Syed said during a panel discussion at Columbia Journalism School’s Tow Center. The event followed up on a piece penned by Syed and BuzzFeed Editor in Chief Ben Smith on Medium. “We don’t know why Twitter disables some accounts, but not others,” Syed added. “They have some general principles, they have broad guidelines, but they don’t tell us how they’re applied.”

The problem isn’t that Vladimir Putin parody accounts have been suspended, but rather the lack of clarity about guidelines for doing so—“the veneer of neutrality,” as Syed put it. Syed and Smith argued that platforms like Facebook, Twitter, and Snapchat should more proactively and transparently outline how they handle online harassment and inflammatory speech. “A lot of people getting banned from Twitter for that sort of thing deserve to be,” Smith said. “But I would like to know who and why.”

The implied fear at the bottom of this slippery slope is an outright ban on journalists or news organizations. When a journalist is banned from Twitter, for example, she may effectively disappear to much of her audience.


It was a welcome call to action from an organization that exemplifies many of the changes that have gripped the media industry in recent years. It's not clear, however, how much publishers can actually effect change at platforms, given what appear to be increasingly lopsided power dynamics. With media companies building or rebuilding their business models around social distribution, there's little impetus to play hardball.

“I actually don’t think the best thing is to set up an antagonistic relationship between [platforms and publishers],” Syed said, “not for business reasons and not for the purposes that we’re after in terms of transparency.”

The business imperative to not rock the boat is indeed strong. As social networks have grown more advanced and far-reaching, publishers have come to rely on them to distribute journalism. There are measurable benefits to this shift. Twitter brings journalists within keystrokes of their audience, for example, while Facebook's Instant Articles and live video capabilities send content directly to users' feeds. Such tools give old and new media alike potentially global reach with each story. That power cannot be overstated.

Facebook and its counterparts can meanwhile keep users' feeds populated with new content while bearing little responsibility for its contents. Section 230 of the Communications Decency Act shields platforms from liability for third-party content, leaving legal responsibility with the publishers of that content, from individual users to The New York Times. That immunity has allowed platforms to become complacent about their own internal guidelines for regulating speech.

“To the extent they’re reacting somewhat randomly, they’re not really incentivized to do anything more than that,” said Stuart Karle, a Columbia Journalism School media law professor who also took part in Friday’s panel. Right now, Smith remarked, these platforms are “operating on instinct.”

Syed and Smith both noted the advantages of strategic approaches, such as limiting the reach of hateful speech with tactics like shadowbanning. “You don’t let trolls get the satisfaction of knowing they’re banned,” Smith said.

But they stopped short of calling for legal changes. Syed argued that they would stifle new platforms from growing and competing with existing giants. She did point to the threat of government intervention, particularly in countries with less robust free speech laws, as a potential catalyst for change. Regulation is coming down the road—as it did for Google in the EU with the “right to be forgotten”—and companies should prepare ahead of time, rather than waiting for the axe to fall.

“You’re already under some scrutiny here,” Syed said of platforms. “You’re likely to be under extreme scrutiny elsewhere. So why don’t we set up a system that makes you transparent, that makes you accountable, that lets you put forth these principles and applications in a way that defuses the situation for you?”

“We just want to add [BuzzFeed’s] voice to that potential pressure,” she added.

Smith, who deserves plaudits for building out a world-class newsroom at BuzzFeed, believes media companies have untapped leverage in this emerging relationship. He sympathizes with social platforms in many regards, but he also believes their complex and opaque internal processes, from producing algorithms to governing speech, should be more aggressively covered by news organizations.

“The platforms are making these decisions every day that they’d rather you not write about,” he said. “They’re not operating under a very clear set of rules, and they’re taking advantage of a lack of scrutiny and ambiguity. And those are often the best stories.”

Gizmodo’s scoop that Facebook editors were manually manipulating the platform’s “Trending” tab is one example of aggressive media coverage spurring change. That story lent credence to the idea that platforms are behaving more like publishers with each passing year. Which is to say: The ambiguity is only growing, and the time to exert pressure continues to slip away.


David Uberti is a CJR staff writer and senior Delacorte fellow. Follow him on Twitter @DavidUberti.