news literacy

How to check if that viral video is true

Journalists don't always verify user-generated content, so readers need to learn how to verify what they see online
July 18, 2014

When big news breaks, like Thursday afternoon’s crash of Malaysia Airlines Flight 17 in eastern Ukraine, news organizations can’t say much that’s not confirmed. So they, along with hungry news consumers, turn to user-generated content for answers, such as these first two crash-related posts on the verified UGC feed FB Newswire: a photo of extra security at Schiphol Airport in Amsterdam and a post from a Dutch passenger as he was boarding the flight, both found and verified on Facebook. But verified items on social media are rare. See, for example, this Vox.com discussion of an item posted on a social media account possibly belonging to eastern Ukrainian rebel commander Igor Strelkov. Incredibly of-the-moment, but possibly untrue.

Viral content has become an internet staple, and the basis of successful startups like Upworthy and Viral Nova. Whether it spreads thanks to a company that traffics in giving things reach or through more organic user sharing, the origin and validity of most pieces of viral content are never verified. That is, nobody ever checks to see if they are what they claim to be. Mostly, this is because the entire process–from, say, discovering this video of a cat saving a boy from a dog to sharing it–takes place outside the news cycle. But because journalists cover entertainment and digital culture, such viral content becomes something they cover, or share, too.

“Viral content is now pretty much newsworthy across the board,” says Mike Skogmo, director of communications at Jukin Media, a company that collects and licenses viral video content in an online database that producers can then use for broadcast. “If something gets millions of views very, very quickly on YouTube, chances are a lot of the mainstream news are going to pay attention to it, because it’s what people are talking about.” While Jukin has a 10-member research team to verify the origin and owner of each video, Skogmo admits that when it comes to virality, accuracy is not always perfect.

A recent Tow Center report supports his assertion. The report, which studied how TV and online newsrooms use user-generated content, found that news organizations use UGC often but are poor at crediting its sources. That has led to newsrooms getting facts wrong, even as news consumers grow accustomed to seeing YouTube and Twitter content of questionable origin on the news, a medium historically viewed as a trustworthy source of information.

There may be little incentive to verify online content–if the error is big enough, one school of thought goes, it will self-correct. But as UGC becomes a common content source, there is an increasing push for journalists to investigate it before using it, and a host of projects have emerged recently to help them do so. These include Amnesty International’s recently launched Citizen Evidence Lab, which provides journalists and human-rights advocates with tools for verifying user-generated video; the BBC’s Verification Hub; the Verification Handbook (freely available online); recent Knight News Challenge winner Checkdesk; and new online efforts by citizen journalists to fact-check news.

These resources, however, are targeted toward journalists rather than the reading public, which is constantly inundated with viral content and has few resources at hand to check the veracity of what’s before it. In this space, UGC verification becomes a news literacy issue. But convincing readers to check before they share likely requires making the former as easy as the latter.

Lesson 12 of Stony Brook University’s news literacy course for undergraduates, called “Analyzing Social Media,” teaches students the V.I.A. method: a three-step check in which students attempt to Verify a news item by looking for creation dates and multiple, reputable sources, then evaluate the organization providing the information as Independent and Authentic by looking up its background and social media connections.
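To make those steps concrete, here is a minimal sketch of the V.I.A. questions expressed as a checklist in Python. The wording paraphrases the lesson described above, and the data structure, function, and example answers are purely illustrative assumptions, not tools from the Stony Brook curriculum.

```python
# A purely illustrative sketch of the V.I.A. checklist described above.
# The questions paraphrase the Stony Brook lesson; nothing here is an
# official tool from that curriculum.

VIA_CHECKLIST = {
    "Verify": [
        "Can you find a creation date for the item?",
        "Do multiple, reputable sources report the same thing?",
    ],
    "Independent": [
        "Is the organization providing the information independent of the story?",
    ],
    "Authentic": [
        "Do the poster's background and social media connections match their claims?",
    ],
}

def unanswered_checks(answers):
    """Return every V.I.A. question not yet answered 'yes'.

    `answers` maps a question string to True or False; anything missing
    or False is treated as unresolved and returned for follow-up.
    """
    return [
        question
        for questions in VIA_CHECKLIST.values()
        for question in questions
        if not answers.get(question, False)
    ]

if __name__ == "__main__":
    # Example: only the creation date has been confirmed so far.
    partial = {"Can you find a creation date for the item?": True}
    for question in unanswered_checks(partial):
        print("Still to check:", question)
```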

The method reads like a scaled-down version of the processes acclaimed social media news agency Storyful uses in its own verification work, which it provides as a service for newsrooms. When its staffers come across a piece of UGC, they begin with skepticism and build the story up from there.

“We always start thinking from a point of: This actually could be from last year,” explained Managing Editor Aine Kerr, speaking of recent footage of a strike in Gaza. “It could be Syria. It might not even be Gaza. It might not be today. There could be photoshopping going on.”

To that end, Storyful runs the hashtag #DailyDebunk, which showcases the type of verification it does on a regular basis for newsrooms. “People are used to reading stories that obviously have been verified and are authentic and legitimate, which is great,” says Kerr, “but particularly in the world of trends and viral, the amount of hoaxes and fakery is vast.”

During the World Cup, for example, an image of a woman in a dumpster in Brazil went viral, but after Storyful editors ran it through their verification process, they discovered it was a year old. Images purported to show riots in São Paulo were similarly debunked with a reverse image search. “Within minutes we’re able to go to our clients, but crucially to the public, and say ‘These are old images, do not share them,’ ” explained Kerr.

Other spaces for similar public engagement include the Daily Debunk Open Newsroom and the Mohyla School of Journalism’s StopFake.org, which battles fake information about Ukraine. Yet despite being available online, few are truly public-facing. Like the verification resources mentioned above, almost all of these UGC verification projects exist in or serve newsrooms, citizen journalists, activists, or journalism schools. And for the public that falls outside of these communities–and that makes much of this content go viral by viewing and sharing it–the issue is twofold: exposure to verification tools is next to none, and the social reward of sharing online far outweighs the incentive to verify.

That said, motivating news readers to verify might mean making how-to guides and tools more visible to them, rather than just to journalists, perhaps by hosting those guides right on Facebook, Twitter, or Instagram and encouraging the use of those same platforms to perform verification checks. Simple tricks that Storyful uses, for example, include running images found on social media through the reverse image search engine TinEye, or looking up a poster’s history on Twitter to make sure someone who claims to be somewhere has actually been posting from that place.
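For readers who want to try those two tricks themselves, here is a minimal sketch that simply builds the relevant lookup links in Python. The TinEye, Google, and Twitter URL patterns are assumptions about how those sites accept queries from a browser, not documented APIs, and the image URL, username, and dates are placeholders.

```python
# A minimal sketch of the two tricks described above: reverse image
# search and checking an account's post history. It only constructs
# lookup URLs; the query patterns are assumptions, not official APIs.
from urllib.parse import quote


def reverse_image_search_urls(image_url):
    """Build reverse-image-search links to check whether a viral image is old or recycled."""
    encoded = quote(image_url, safe="")
    return {
        "TinEye": f"https://tineye.com/search?url={encoded}",
        "Google": f"https://www.google.com/searchbyimage?image_url={encoded}",
    }


def twitter_post_history_url(username, since, until):
    """Build a Twitter search link for one account over a date range,
    to check whether someone claiming to be on the scene was actually
    posting from there at the time."""
    query = f"from:{username} since:{since} until:{until}"
    return "https://twitter.com/search?q=" + quote(query)


if __name__ == "__main__":
    # Placeholder inputs for illustration only.
    for site, url in reverse_image_search_urls("http://example.com/viral-photo.jpg").items():
        print(site, url)
    print(twitter_post_history_url("example_uploader", "2014-07-16", "2014-07-18"))
```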

“One of the things that people love being able to do,” says Kerr, “is go out and say, ‘Actually, you’re wrong, and here’s how I know.’ ” That’s the psychology behind #DailyDebunk, which takes place directly on Twitter and plays on all the best parts of online sharing.

In this vein, one of the few debunkers with the potential to be a model for user verification is the collaboratively created FB Newswire. It’s a stream of eyewitness news content that Facebook users around the world have uploaded to the platform and that Storyful’s curation team then finds, verifies, and shares. That is, it’s share-ready content that’s been pre-vetted, so readers can be sure they are pushing something accurate out to their social networks.

Earlier this week, for example, Storyful posted this item on FB Newswire, showing an Israeli strike at a port in Gaza City, with a link to the uploader’s Facebook profile. It was the same item that clients who subscribe to Storyful’s service for newsrooms received through their dashboard, but with fewer details on the verification process: no screengrabs of corroborating evidence, no reports from other international journalists at the scene, no geolocation data, and no information on the uploader. In this form, the content doesn’t feel like it was made for journalists but rather for news readers. Yet at the same time, it encourages a habit of verification by exposing readers to it in an easy, organic way. As with the news of the crash of MH17, if readers knew to go to FB Newswire, they’d be receiving (and possibly sharing) reliable, user-generated information.

Couple this with the tools created for use by journalists (such as Storyful’s how-to guides on social newsgathering and verification and case studies of its own work), and it makes an entire news literacy lesson, complete with methodology and examples conveniently located right there by the share buttons.

Funding for this coverage is provided by the Robert R. McCormick Foundation.

Jihii Jolly is a freelance journalist and video producer in New York City.