Back, then, to the Post-Dispatch. The paper erred by calling the school to complain about its rogue commenter. But it also erred in creating an environment that encouraged rogue commenting in the first place. The invitation to name the exotic foods readers had eaten was a weak one—one that smacked of token rather than true engagement. (As Marshall writes, “The gift requests acknowledgment, but cannot demand return from anyone.”) The prompt wasn’t a gift so much as a command—“this is how we will interact today.” It wasn’t really an invitation to comment or interact in any meaningful way; the site offered nothing to indicate that anybody at the Post-Dispatch was reading or compiling the responses or otherwise planning to take them seriously.
The paper, in other words, offered no real sense of ownership of its comment sections, no sense that anybody was curating them or otherwise invested in their health. (Many of the responses noted that other Post-Dispatch articles were dominated by racist comments that nobody had seen fit to moderate or remove; by Greenbaum’s own admission, the “Talk of the Day” column had become the sort of feature where “every topic devolves into a partisan screed. Democrats and Republicans, conservatives and liberals — all roads lead to the same tired, boring arguments.”)
One week before the “female anatomy” incident, after soliciting suggestions on how to fix the “Talk of the Day” feature, Greenbaum told readers he was “going to be more ruthless about deleting comments that are off-topic and accomplish nothing more than lobbing flames into the marketplace of ideas.” Cybermind took a different approach toward putting out fires. From time to time, Marshall points out, Cybermind participants persisted in giving poisoned gifts—messages that were tonally or thematically inappropriate or otherwise disruptive. Although Cybermind was run by a man named Alan Sondheim, who had the power to unsubscribe any poster, in cases like these Sondheim invariably subordinated his authority to that of the community. Sondheim gave the participant enough time to demonstrate to the community that he was acting in bad faith, and for the community to comment on the participant and come to some sort of consensus about him. Only then did Sondheim push the kill button.
To be sure, it’s important that Sondheim could push that button. Communities need to have people with that power, or else, as Clay Shirky has written, the community risks becoming its own worst enemy. But it’s also important that unilateral action wasn’t Sondheim’s first choice.
Many online social spaces take their orienting principles from real-world communities. They presume the existence of a sort of social contract that establishes normative community manners and behavior and punishes disjunctive behavior. But whereas formal states codify their rules and enforce them by law, informal societies are less hierarchical. Marshall writes: “The term ‘social contract’ suggests that society is a voluntary and deliberate compact, that there is one social interest, and that society members agree to the power structures, to their ‘place’ and so forth. Most online ‘social contract’ is impermanent and continually renegotiable, involving variable parties and different levels of agreement. The difference between this kind of ‘contract’ and the agreements normally referred to by that term is too significant to ignore.”
It’s true. Online communities don’t generally resemble small towns so much as hobo jungles. Members pick up and leave without warning. New users arrive knowing and caring nothing about the community’s history. Participants have different levels of commitment to the community, and different ideas of what they expect to get in return for participating. Social contracts don’t work online because nobody is compelled, in any meaningful sense, to abide by them.