Last winter, Amazon Web Services drew criticism after it dropped WikiLeaks materials from its servers, and WikiLeaks associates accused Amazon of violating their First Amendment rights. But Berkman Center senior researcher Ethan Zuckerman argued in a conversation with CJR that “rights” had nothing to do with it. Whether or not pressure from the U.S. government played a part, the move was, ultimately, a corporate decision. Amazon’s customers don’t have “rights” in the same way that citizens do, beyond the terms of service agreement that no one really reads before checking the “I Agree” box.

Zuckerman was not defending Amazon’s move; rather, he was reminding Amazon’s critics of a point that is very easy to forget, in a time when the Internet is increasingly becoming our society’s “public square.” As he put it:

What’s really hard about this is that we perceive the web to be a public space, a place where you should be able to go and set up your soapbox and say whatever you want to say to the world. The truth is, the web is almost entirely privately held. So what happens here is that we have a normative understanding that we should treat this like public space—that you should have rights to speak, that no one should constrain your rights—but then you discover that, basically, you’re holding a political rally in a shopping mall. This is commercial speech, controlled by commercial rules.

He later said that the U.S. government should, in fact, explore policy that would “obligate Internet service providers to protect speech in a way that recognizes that it functions as public speech.” But for now, each company seems to be on its own with these kinds of decisions.

Several stories have come out in the past few days that address this intersection between corporate policy and personal rights, highlighting the growing pressure on heads of certain companies—especially technology and communications companies—to take positions they have previously sought to avoid.

A piece by Jennifer Preston in The New York Times on Sunday discussed the increasingly urgent “ethical quandary” that social networking sites face as it becomes harder for them to remain politically neutral. As activists throughout the world increasingly rely on networks like Flickr, Facebook, Twitter, and YouTube to communicate with each other and organize political movements, the heads of these companies must decide how they will respond to that growing responsibility.

Often, policies originally established to maintain quality standards on the sites conflict with the needs of the activists who depend on them. For instance, an Egyptian blogger who spoke with Preston said he thought his Flickr account had been hacked when photographs he had posted of members of the country’s state security force abruptly disappeared from the site; in fact, they were removed because Flickr prohibits users from posting photos they did not take themselves. (This was the same reason Amazon gave for removing WikiLeaks from its servers: the organization did not own the rights to the documents it was hosting.) And an independent journalist in China had his Facebook profile deactivated because, wanting to evade retribution from government censors, he had not used his real name to create the page.

Of course, as Evgeny Morozov often reminds us, despite the irresistible narrative about the liberating potential of technology, activists fighting for freedom and democracy are not the only people who know how to use the Internet for a cause. Populist activists in the Middle East can create Facebook groups to organize street protests; security forces in those same countries can monitor those pages to gather information and plan crackdowns and future arrests. Preston writes:

One challenge is whether a company should maintain its commitment to remain neutral about content, even when politicized content could offend users or even put people in danger…. For instance, what would the company do if a group that opposes abortion wanted to post photographs of doctors who perform abortions?

It’s a question as old as the web: is an online service provider responsible for the content it hosts? Craigslist (in theory) can be used just as easily to advertise the services of underage prostitutes as it can to sell your old bookshelf. Most websites with any kind of user-generated content, from open message boards to the comment sections of news sites, reserve the right to remove “offensive” content. But do the administrators of those sites have a responsibility to monitor all of it, all the time? And who gets to decide what is “offensive,” anyway? What if it’s an autocratic government doing the deciding?

Lauren Kirchner is a freelance writer covering digital security for CJR. Find her on Twitter at @lkirchner