Julian Assange is upset with The New York Times for talking with the White House about WikiLeaks’s trove of Afghanistan documents prior to publication. Really, though, he should bite his tongue. The Times’s decision to check with the White House was of great service to WikiLeaks, because it was one of several processes that served to remove any doubts about the authenticity of the Afghanistan documents.
People continue to debate the ethics, legality, and motivations of what WikiLeaks has done. But few, if any, are questioning the origin and accuracy of the documents. People seem to agree they are, in fact, secret military documents. Assange expertly removed accuracy and verification from the conversation by placing the burden for these elements on the shoulders of The New York Times, The Guardian, and Der Spiegel. The Times, in turn, placed some of that burden on the White House.
David Carr of the Times labeled the “War Logs” operation “asymmetrical journalism.” But perhaps asymmetrical journalism is only possible—or best enabled—when accompanied by distributed verification, which is the best way to engineer trust in today’s information environment.
Just as the Internet itself is built with a distributed architecture, the most powerful way to deploy verification in a networked world is via a distributed process that uses multiple nodes, each of which has a certain level of reliability. In this example, the nodes were the Times, The Guardian, Der Spiegel, the White House, and WikiLeaks itself, among other, smaller players.
If WikiLeaks had released the documents on its own, the initial debate and coverage would have focused on whether the material was real, thus delaying any discussion about the contents. By enabling a process of distributed verification, Assange was able to ensure the conversation about the documents moved immediately to the information they contained, not whether they were authentic.
“WikiLeaks was soaking, drowning in data,” Clay Shirky, the author of Cognitive Surplus, told Carr. “What they needed was someone who could tell a story. They needed someone who could bring accuracy and political context to what was being revealed.”
From the moment they were handed the data, the three news organizations faced an unprecedented verification challenge. There was no conceivable way for them to read, let alone verify, all of the documents prior to publication. And no one person had the ability to answer questions or fully interpret the data.
I suspect this kind of challenge—dealing with an abundance of data that offers multiple narratives and potential interpretations—will become more commonplace for news organizations, thanks to the movement toward open data and the ability to digitize and archive huge quantities of information. (Let’s just hope the resulting abundance of data comes in structured form.) The discipline of verification will have to evolve to meet this new challenge. Based on this example, the future of verification appears to lie in more open, distributed, and collaborative processes that mix human intelligence and expertise with machine-based analysis and assistance. It’s a mouthful, but it’s also what seems to have happened here.
Some details about the verification process used for this story have begun to trickle out. Clint Hendler’s fascinating account for CJR about how this arrangement and the resulting editorial packages came together provided a glimpse of Der Spiegel’s approach to fact checking:
… reporters from the three outlets sat down and divvied up some tasks. Der Spiegel offered to check the logs against incident reports submitted by the German army to their parliament—partly as story research, partly to check their authenticity—and to share their findings. Davies, Goetz, Leigh, and Schmitt brainstormed about fifteen topic areas for which The New York Times’s computer assisted reporting team would try to find relevant logs to be shared with the group. Der Spiegel and The Guardian did their own searching, and also shared fruitful results, search terms, and methods.