When big news breaks, like the crash of a Malaysia Airlines plane in eastern Ukraine on Thursday afternoon, news organizations can’t say much that isn’t confirmed. So they, along with hungry news consumers, turn to user-generated content for answers, such as the first two crash-related posts on verified UGC feed FB Newswire: a photo of extra security at Schiphol Airport in Amsterdam and a post from a Dutch passenger as he was boarding the flight, both found and verified on Facebook. But verified items on social media are rare. See, for example, this Vox.com discussion of an item posted on a social media account possibly belonging to eastern Ukrainian rebel commander Igor Strelkov: incredibly of-the-moment, but possibly untrue.

Viral content has become an internet staple, and the basis of successful startups like Upworthy and Viral Nova. Whether it spreads thanks to a company that traffics in giving things reach or through more organic user sharing, the origin and validity of most pieces of viral content are never verified. That is, nobody ever checks to see if they are what they claim to be. Mostly, this is because the entire process, from discovering, say, this video of a cat saving a boy from a dog to sharing it, takes place outside the news cycle. But because journalists cover entertainment and digital culture, such viral content becomes something they report on, or share, too.

“Viral content is now pretty much newsworthy across the board,” says Mike Skogmo, director of communications at Jukin Media, a company that collects and licenses viral video content in an online database that producers can then use for broadcast. “If something gets millions of views very, very quickly on YouTube, chances are a lot of the mainstream news are going to pay attention to it, because it’s what people are talking about.” While Jukin has a 10-member research team that verifies the origin and ownership of each video, Skogmo admits that when it comes to virality, accuracy is not always perfect.

A recent Tow Center report supports his assertion. The report, which studied TV and online newsrooms’ use of user-generated content, found that news organizations use UGC often but are poor at crediting its source. That has led newsrooms to get facts wrong, even as news consumers grow accustomed to seeing YouTube and Twitter content of questionable origin on the news, a medium that has historically been viewed as a trustworthy source of information.

There may be little incentive to verify online content: if the error is big enough, one school of thought goes, it will self-correct. But as UGC becomes a common content source, there is an increasing push for journalists to investigate it before using it, and a host of projects have emerged recently to help them do so. These include Amnesty International’s recently launched Citizen Evidence Lab, which provides journalists and human-rights advocates with tools for verifying user-generated video; the BBC’s Verification Hub; the Verification Handbook (freely available online); recent Knight News Challenge winner Checkdesk; and new online efforts by citizen journalists to fact-check news.

These resources, however, are targeted toward journalists rather than the reading public, which is constantly inundated with viral content and has few resources at hand to check the veracity of what’s before it. In this space, UGC verification becomes a news literacy issue. But convincing readers to check before they share likely requires making checking as easy as sharing.

Lesson 12 of Stony Brook University’s news literacy course for undergraduates, “Analyzing Social Media,” teaches students the V.I.A. method, a three-step check in which students attempt to Verify a news item by looking for creation dates and multiple reputable sources, then evaluate whether the organization providing the information is Independent and Authentic by looking up its background and social media connections.

The method reads like a scaled-down version of the process acclaimed social media news agency Storyful uses in its own verification work, which it provides as a service to newsrooms. When staffers there come across a piece of UGC, they begin with skepticism and start building a story from there.

“We always start thinking from a point of: This actually could be from last year,” explained Managing Editor Aine Kerr, speaking of recent footage of a strike in Gaza. “It could be Syria. It might not even be Gaza. It might not be today. There could be photoshopping going on.”

To that end, Storyful runs the hashtag #DailyDebunk, which showcases the kind of verification it does regularly for newsrooms. “People are used to reading stories that obviously have been verified and are authentic and legitimate, which is great,” says Kerr, “but particularly in the world of trends and viral, the amount of hoaxes and fakery is vast.”

Jihii Jolly is a freelance journalist and video producer in New York City.