The ‘huge issue’ with identifying original content from media outlets

The Washington Post’s David Fahrenthold recently broke the news of a series of fake 2009 Time magazine covers featuring Donald Trump, framed copies of which were hanging in several of Trump’s golf clubs. Comparisons with real Time covers revealed visual flaws: the fakes’ red border is too thin, and the bar code came from an online Photoshop tutorial. The biggest tell, however, may have been the fake’s publication date of March 1, 2009. Time magazine is dated the Monday of each week; March 1, 2009 was a Sunday.

As Fahrenthold’s investigation demonstrates, fakes in the world of print publishing are relatively easy to discredit, if not always to spot. Inconsistencies, errors, and—perhaps most importantly—widely accessible originals make the job of persuasively forging content a significant challenge.

In digital publishing, however, even maintaining original content cannot be taken for granted. As Adrienne LaFrance detailed at The Atlantic in 2015, content that is “born digital” is at particular risk, especially when news organizations close their doors. The impermanence of digital content makes it difficult to assert (or refute) the legitimacy of an online article.


Imagine, for example, the following scenario: A prominent political figure claims on social media that a news organization has published lies about his past, and provides corroborating screenshots of the offending article. The news organization responds that the screenshot is doctored, and provides a link to the original piece. The politician, in turn, asserts that the article has been altered since his screenshot was taken.


Being able to identify genuine digital material is “a huge issue,” says David Schulz, a prominent First Amendment lawyer and director of the Media Freedom & Information Access Clinic at Yale Law School. Given the increasing sophistication of fake digital content, Schulz asks, “How do you prove something was ‘out there’? How do you prove it’s authentic?”


Independent Web archives like the “Wayback Machine” favor front pages, not full articles. And while the Library of Congress collects and permanently stores, for example, the top 100 printed newspapers by US circulation, it currently archives digital publications only “selectively”—meaning most digital news isn’t archived at all.

The challenge for news organizations, then, is to offer readers a way to verify the authenticity of digital content. While publishing over HTTPS can help ensure the integrity of a piece as it’s being viewed, it cannot prove the content has never been changed.

Luckily, the cryptographic community solved the problem of authenticating digital artifacts long ago. The field of cryptography is naturally concerned with security—which includes the ability to detect message-tampering, for example.


Existing cryptographic methods—many of which have been in use for decades—can be applied to any type of digital content, from text stories to podcasts. Using a combination of the author’s digital identity, the current date and time, and the digital content itself, an algorithm computes a unique block of data that functions like a digital notary stamp. Such digital signatures are how your computer or phone identifies “trusted developers” when you download a program or app; they are the same “signatures” provided by encrypted email software. In general, as long as standard algorithms are used to produce the signatures, they can be automatically verified by any computer.
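The sign-and-verify loop described above can be sketched in a few lines. This is a minimal illustration, not a production design: a keyed HMAC stands in for the public-key signatures a real system would use (e.g., Ed25519 or the certificate-based signatures behind “trusted developer” checks), and the names `sign_article`, `verify_article`, and `SECRET_KEY` are hypothetical.

```python
import hashlib
import hmac
import json

# Placeholder key for illustration only; in a real deployment the
# signing key would be private and the verification key public.
SECRET_KEY = b"newsroom-signing-key"

def sign_article(author: str, timestamp: str, body: str) -> str:
    """Compute a signature over the author identity, timestamp, and content."""
    payload = json.dumps(
        {"author": author, "timestamp": timestamp, "body": body},
        sort_keys=True,
    ).encode("utf-8")
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_article(author: str, timestamp: str, body: str, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_article(author, timestamp, body)
    return hmac.compare_digest(expected, signature)

sig = sign_article("jdoe", "2017-07-01T09:00:00Z", "Original article text.")
assert verify_article("jdoe", "2017-07-01T09:00:00Z", "Original article text.", sig)
# Any edit to the content, author, or timestamp invalidates the signature:
assert not verify_article("jdoe", "2017-07-01T09:00:00Z", "Altered article text.", sig)
```

Because the signature covers the timestamp and author as well as the body, a verifier learns not just that the text is intact, but when and by whom it was attested.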

Deploying such a system, of course, would require substantially rethinking digital publishing practices. The content of articles would need to be isolated from the infrastructure of the page (such as ads, formatting, and tracker scripts) before signing. If implemented well, however, the same kind of automated validation that supports HTTPS could confirm the author, the timestamp, and the substance of online content—without so much as the click of a button.
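The isolation step might look like the following sketch: a hypothetical `canonicalize` helper normalizes the article text before hashing, so that incidental page changes (ads swapped in, whitespace reflowed, encoding form changed) don’t invalidate a signature over the content itself. The function names are assumptions for illustration, not any real publishing API.

```python
import hashlib
import unicodedata

def canonicalize(text: str) -> bytes:
    """Reduce article text to a stable byte string: normalize Unicode
    to NFC and collapse all runs of whitespace to single spaces."""
    normalized = unicodedata.normalize("NFC", text)
    collapsed = " ".join(normalized.split())
    return collapsed.encode("utf-8")

def content_digest(text: str) -> str:
    """Digest of the canonical content; this is what would be signed."""
    return hashlib.sha256(canonicalize(text)).hexdigest()

# Reflowing the layout leaves the digest (and thus the signature) unchanged:
assert content_digest("The  quick brown\n fox.") == content_digest("The quick brown fox.")
# Changing the words does not:
assert content_digest("The quick brown fox.") != content_digest("The slow brown fox.")
```

The design choice here is that the signature binds to the journalism, not the page chrome, so a redesign or a new ad network doesn’t force every article to be re-signed.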

Creating definitive versions of digital content would also increase transparency and accountability for readers. Because the date on the signature of the article would change with every update, ghost corrections would be nearly impossible. This month, for example, a Spin magazine feature suggested that MTV News had altered online posts in response to industry pressure—but without notifying readers. At the moment, neither readers nor journalists can prove this retrospectively.

If reputable news organizations digitally sign their content, it would increase the difficulty of creating fake news by raising doubts about material that isn’t similarly authenticated. Likewise, developing such a standard would support archives that can withstand attempts to reauthor history. Although the technical challenge is substantial, it’s also one that is important to take on, as news organizations strive to rebuild public trust and set their work apart from that of content mills and fake news factories.



Susan McGregor is Assistant Director of the Tow Center for Digital Journalism and Assistant Professor at Columbia School of Journalism.