In Defense of Wikipedia

At the end of the day, Wikipedia looks less like the reputation-munching monster it's being portrayed as, and more like the future of information in the Internet age.

Reading the news lately, you may have been left with the impression that Wikipedia, the collaborative online encyclopedia, is an information-age monster, destroying reputations like Godzilla batting away fighter jets. But if that’s all you’ve heard, your impression is gravely mistaken.

The controversy began two weeks ago when John Seigenthaler, a former journalist and aide to Robert Kennedy, penned an op-ed in USA Today decrying Wikipedia for posting a maliciously inaccurate biography of him suggesting he had something to do with the assassination of both Kennedy brothers. Writing of his difficulty in tracking down the anonymous author of the biography (who this week apologized), Seigenthaler concluded, “I am interested in letting many people know that Wikipedia is a flawed and irresponsible research tool.”

Meanwhile, a Web site is soliciting clients for a class action lawsuit against Wikimedia, the organization that hosts Wikipedia, “to change its current practices that permit anyone to post content to their website, without formal attribution and without recourse back to Wikimedia Foundation and or the author of the content” and “Recover substantial monetary damages, on behalf of those who have suffered as a direct result of Wikimedia’s flawed business model.”

And media organizations have begun to pile on with the sort of schadenfreude they reserve for other media outlets who have made a mistake. As Carolyn Said wrote in a front-page article for the San Francisco Chronicle, “critics say Wikipedia leaves the door open for anyone who wants to rewrite history, whether it’s your neighbor with a grudge, a nut job floating a conspiracy theory or someone repeating an urban legend. As with other Web sources such as blogs, its accuracy can be hard to judge.”

Fair enough, but one could just as easily substitute the word “publications” for “Web sources” and the word “newspapers” for “blogs” in the previous sentence and it would still hold up. In short, this is a tempest in a teacup. Wikipedia is actually a good thing — maybe not for media organizations and others with business models based on proprietary content, but for the public at large.

Most of the controversy stems from a misunderstanding of what Wikipedia is, and the nature of the Internet in general. Anyone expressing outrage over false and misleading information being posted online (You mean, people post things that aren’t true on Web sites?) must not have been loading up their Web browsers for the last decade, let alone pointing them to the Drudge Report, bloggers with ideological axes to grind, or any of the innumerable semi-professional online conspiracy theorists.

Part of the argument against Wikipedia rests on the idea that users aren’t able to assess the credibility of the information they’re reading. In truth, however, Internet users are getting smarter about figuring out whether to believe information they find online (or, for that matter, in major news outlets). Google is a big part of this trend. The search engine ranks results based on how many sites link to a given page; the more links to a page, the higher it appears in the results. Those links are generated by human beings, who are presumably linking because they think the information is credible (or, at the very least, interesting).

Try dropping “Swiffer Wetjet” into Google, for example. A rumor last year had it that the product, a floor cleaning system, was harmful to household pets. But the first Google results are pages debunking the myth, not propagating it. In other words, the more credible information has risen to the top.
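The link-counting idea described above can be sketched as a toy version of Google’s original PageRank algorithm. (Google’s actual ranking uses many more signals; the miniature “web” below is entirely hypothetical, purely for illustration.)

```python
# Toy sketch of link-based ranking (a simplified PageRank).
# Pages that attract more incoming links score higher; pages nobody
# links to sink toward the bottom of the results.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Hypothetical mini-web: two debunking pages link to each other and are
# linked to by a blog; a lone rumor page receives no incoming links.
web = {
    "debunk_a": ["debunk_b"],
    "debunk_b": ["debunk_a"],
    "blog":     ["debunk_a", "debunk_b"],
    "rumor":    [],
}
ranks = pagerank(web)
# The well-linked debunking pages outrank the unlinked rumor page.
```

Because each link is a human vote of confidence, the debunking pages accumulate rank while the unlinked rumor page does not — which is roughly why the myth-busting pages come up first in the Swiffer example.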

The very nature of the Internet — the ability to link to sources readers can evaluate for themselves, the ability to quickly read multiple sources of information about the same subject — tends to make online readers much more critical consumers of information (witness the proliferation of blogs, like this one, devoted entirely to evaluating the news media).

Thus, the damage wrought by any single untruth temporarily floating around in the vast ocean of Wikipedia isn’t likely to last long. Certainly, the Seigenthaler episode is a reminder that such information can be inaccurate, and he’s right to complain about other sites scraping up Wikipedia content and presenting it as their own. But the general trend is away from the proliferation of such rumors.

More than that, however, what’s been obscured in much of the coverage is that the Wikipedia model actually works. The English-language Wikipedia boasts over 800,000 articles, all generated for free, and freely available to anyone with Internet access. That’s a tremendous amount of information being made far more accessible to far more people than print or broadcast can reach.

Wikipedia’s content itself is, in general, quite good. Most of the entries are quick overviews synthesizing information from other sources. If you’re looking for basic factual information about, say, zebras, or coffee or even Columbia University, Wikipedia is a good place to start. Moreover, a number of experts contribute to Wikipedia (particularly when it comes to technology), making its longer articles on more esoteric topics especially useful. (See the entry on DNS, the domain name system underlying the Internet, for just one example.) And a survey published this week by the British scientific journal Nature found that Wikipedia was nearly as accurate as Encyclopedia Britannica about scientific information.

The fact that everyone can produce and edit entries on Wikipedia is, generally speaking, a great virtue. Entries tend to be edited and refined iteratively, not unlike a breaking news story that develops and merits more articles after the initial report. The effect is a gigantic collective data download from the brains of contributors. There are always, as anyone with a Web site that includes a comments section can tell you, a couple of idiots who want to ruin things for everyone — but, all in all, they are surprisingly few and far between. And, increasingly, deletion of false information occurs swiftly at Wikipedia. If, say, Fortune magazine prints an error, that error is out there on newsstands for two full weeks before a new edition of Fortune comes along. Wikipedia’s users don’t have to wait two weeks to clean up someone else’s mess; they can do it in two minutes, once they spot it.

Then there is the question of responsibility for the content itself. While Wikipedia is published and overseen by the Wikimedia Foundation (and by Jimmy Wales, the founder of the site), it relies on unpaid, usually anonymous, contributors and editors. (Tracking down Seigenthaler’s libeler turned out to be onerous — though not impossible — because of the relative lack of data about users.) Wikipedia does make available the edit history of most entries (you can see, for example, that the zebras entry has been edited a few dozen times), but it doesn’t require verification of a user’s identity (a not-uncommon practice on the Web).
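The edit-and-revert mechanism described in the last two paragraphs can be sketched as a simple append-only revision log. (This is a hypothetical model, not Wikipedia’s actual MediaWiki software; the editor names and article text are invented for illustration.)

```python
# A minimal sketch of wiki-style revision history: every edit is
# appended, nothing is overwritten, so vandalism can be inspected
# and reverted in minutes rather than waiting for a new print run.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Revision:
    text: str
    editor: str  # on Wikipedia, often just an anonymous IP address
    timestamp: datetime

@dataclass
class Article:
    title: str
    revisions: list = field(default_factory=list)

    def edit(self, text, editor):
        # Append a new revision; the full history stays visible.
        self.revisions.append(
            Revision(text, editor, datetime.now(timezone.utc)))

    def current(self):
        return self.revisions[-1].text

    def revert(self, editor):
        # Restore the previous revision by re-appending it, so even
        # the revert itself is recorded in the history.
        self.edit(self.revisions[-2].text, editor)

page = Article("Zebra")
page.edit("Zebras are African equines with black-and-white stripes.",
          "10.0.0.1")
page.edit("Zebras are secretly robots.", "10.0.0.2")  # vandalism
page.revert("10.0.0.3")                               # quick cleanup
```

The key design choice, mirrored here, is that edits are cheap and reversible: the history records who changed what and when, which is exactly the trail that (eventually) led back to Seigenthaler’s anonymous biographer.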

The downside of that system is obvious in episodes like the Seigenthaler case. But there’s a practical reason to do it Wikipedia’s way, as well — letting people anonymously post and edit entries generates the maximum amount of content for the least amount of overhead on the part of contributors and editors. That’s the nature of distributed enterprises like Wikipedia: the more barriers you erect to producing content, the less of it you’re likely to get. And the anonymous nature of Wikipedia’s editors also emphasizes that the project is collaborative — no one gets ownership over any single entry, which encourages people to contribute as they see fit.

In many ways, Wikipedia is actually a better model for generating information than the traditional author/editor/publisher model. For one, as noted earlier, it’s far easier to fix things when they’re wrong. Try extracting a correction from a major news outlet, and see how far you get. Think of the recent controversy over an alleged bump of a rescue worker by Geraldo Rivera reported in the New York Times, which didn’t issue a correction until the paper’s public editor finally shamed them into it. One thing you can always do on Wikipedia that you can’t do when the Times prints a story about you that’s inaccurate or unfair: hit the “edit” button.

And the impact of mistakes that do make their way into Wikipedia is nothing compared to the impact of mistakes in major news outlets. Think weapons of mass destruction, or the CBS MemoGate scandal, or Jayson Blair, or Stephen Glass, or Janet Cooke — to say nothing of the day-to-day errors and inaccuracies that we get paid to pick on here at CJR Daily.

The truth being obscured by the Seigenthaler case is that Wikipedia provides an outstanding model for producing a large body of useful content for almost zero cost — and because of the low cost, there’s no need to charge anyone for access to it.

At the end of the day, Wikipedia looks less like a reputation-munching monster, and more like the future of information in the Internet age.

Correction: The above has been changed to note that there is no law firm backing the Web site soliciting members for a class action suit against Wikipedia.


Bryan Keefer was CJR Daily’s deputy managing editor.