Whether you view it as long overdue or just in time, I believe we are starting to see the emergence of best practices for verifying social media content and citizen reports. Recent weeks and months have seen leading practitioners of social media verification and crowdsourced verification share tips and thoughts to help move the discipline forward.
Below is a summary of what I’ve collected to date. I’ve teased out the core details from longer blog posts and columns, and encourage you to click through and read the full text of each piece I’ve excerpted.
The most recent bit of online verification advice came in the form of a blog post from Mark Little, the founder of curated news startup Storyful. Much like Storify, Storyful offers a tool that enables you to build a story using tweets, video and other online elements. The difference is that Storyful also employs curators to use the tool to report stories from all over the world.
Little’s post is a fascinating addition to the growing body of writing about news curation and online verification.
He went into detail about the value of the “human algorithm”—applying people to the problem. And he’s not just talking about the curators employed by Storyful.
“Every news event in the age of social media creates more than a conversation, it creates a community,” he wrote. “When news breaks, a self-selecting network gathers to talk about the story. Some are witnesses - the creators of original content - others are amplifiers - passing that content on to a wider audience. And in every group are the filters, the people who everyone else looks to for judgement.”
Here are his tips for verifying user-generated video:
- Review of the uploader’s history and location to see whether he/she has shared useful and credible content in the past, or if he/she is a “scraper”, passing other people’s content off as their own (location is a big clue: don’t trust uploaders in Japan to post video from Syria).
- Use of Google street view/maps/satellite imagery to help verify the locations in a video.
- Consultation of other news sources or validated user content to confirm events in a video happened as they were described.
- Examination of key features in a video such as weather and background landscape to see if they match known facts on the ground.
- Translation of every word that comes with a video for additional context.
- Monitoring social media traffic to see who is sharing the content and what questions are being asked about it.
- Develop and maintain relationships with people within the community around the story.
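Several of Little’s checks can be approximated in code. Below is a minimal, purely illustrative sketch of a red-flag checklist for an uploader; the profile fields are hypothetical stand-ins for data that video platforms expose through their APIs, and this is not Storyful’s actual process.

```python
from datetime import date

def uploader_red_flags(profile, claimed_location):
    """Flag an uploader against checks like Little's. The `profile`
    fields are hypothetical; real platforms expose similar data
    through their APIs."""
    flags = []
    # A very new account has little history to judge by.
    if (date.today() - profile["joined"]).days < 30:
        flags.append("account created very recently")
    # An uploader far from the claimed scene may be a scraper.
    if profile["location"] and profile["location"] != claimed_location:
        flags.append("uploader location does not match claimed location")
    # Scrapers repost others' work; genuine witnesses mostly post original footage.
    if profile["reposted_uploads"] > profile["original_uploads"]:
        flags.append("history dominated by reposted content")
    return flags

# Example: a Tokyo-based account claiming footage from Homs.
profile = {
    "joined": date(2011, 5, 1),
    "location": "Tokyo",
    "reposted_uploads": 40,
    "original_uploads": 2,
}
print(uploader_red_flags(profile, "Homs"))
```

A checklist like this only surfaces candidates for scrutiny; as Little stresses, the judgment calls still belong to people.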
In conjunction with the recent BBC Social Media Summit, a reporter with the user-generated content “Hub” in the BBC Newsroom in London published a blog post about the ways that group works to verify video content. Note how some of these tips align with Little’s offering:
- Referencing locations against maps and existing images, in particular geo-located ones.
- Working with our colleagues in BBC Arabic and BBC Monitoring to ascertain that accents and language are correct for the location.
- Searching for the original source of the upload/sequences as an indicator of date.
- Examining weather reports and shadows to confirm that the conditions shown fit with the claimed date and time.
- Maintaining lists of previously verified material to act as reference for colleagues covering the stories.
- Checking weaponry, vehicles and licence plates against those known for the given country.
I recently wrote about how NPR senior strategist Andy Carvin engages in real-time verification and curation to track events in the Middle East. Carvin’s work has been celebrated widely, and there are lots of lessons for journalists in the way he has cultivated sources on Twitter and elsewhere and transformed his Twitter account into an invaluable newswire.
One of the biggest verification lessons from Carvin relates to the human algorithm idea put forward by Little. One thing that is often overlooked by people who write about Carvin is that he interacts directly with sources via Skype, e-mail, and other means to gather information. He does a lot of old-school verification and talking to sources, the value of which cannot be overstated. He also, of course, practices a form of crowdsourced verification. If he sees a tweet that reports something newsworthy, he will often quote that tweet and add a request for his followers to help him verify whether it’s correct, as in this example. Carvin also published an online case study of how he and his followers debunked a claim that Israeli munitions were being used in Libya. It should be required reading in every newsroom and journalism school.
The point is that as much as there are tools and techniques that can be used for verification of social media content, conversation and interaction are in many cases the best ways to move towards verification. Get in touch. Ask questions. Interact. Learn about the source as well as the information.
Here are a few other tips I teased out of Carvin during our conversation:
- Independent verification: “In the case of what we did for the so-called Israeli weapons, I had a lot of people that were giving me essentially the same information and they didn’t really know each other,” he told me. (You can also read the tips from Ushahidi below to see how the people a person does know can also be an effective tool for verifying information.)
- Beware of non-journalists using news terms: “Some of the rumors I see floating around seem to be accompanied by the words ‘breaking’ or ‘confirmed’ or ‘urgent’ all in capital letters,” he said. “I think it’s partially because you’ve got people on the ground in the Middle East hearing information and they’re very excited about getting it, or feel like it needs to be out there as quickly as possible. They start using phrases that reporters use but they are using them in a very different way.” When Carvin sees people doing this on Twitter he will often reply to ask for additional details, or for them to provide a photo or video to help with confirmation.
- Test and learn about your network: Carvin told me that over time he has been able to determine which of his sources and followers have different skills. This is possible because he is constantly asking them to help him with translation, to identify objects or a location in a photo, to share details from on the ground, etc. The more he asks and tests his network, the more he sees where they individually and collectively can be of help. It takes time and dedication to do this, but the payoff, at least in terms of what he is doing, is obvious. “I engage with them as best as possible and try to get a sense if they know what they’re talking about,” Carvin said. “For example, it becomes pretty clear who actually speaks Arabic and who is using Google Translate. So when people are trying to help me I can tell who is giving me the more nuanced translations. They can say, ‘Oh, that’s not a Libyan accent, it’s a Cairo accent,’ and then have other people concur with that.”
- Location: Check to see if the person has enabled location in their tweets. A caution is that this feature is far from foolproof, but it could give you an indication if they’re actually in the place they claim to be tweeting about.
- Network: Who does this person follow and who follows them? These factors can help determine if they have authority on the topic or location in question. Are they trusted by people that you and others trust? Who is retweeting them and who are they retweeting?
- Content: Are they offering specific details that indicate they’re seeing something with their own eyes? Have they shared photos or video to corroborate their information? Can you work to independently verify what they’re tweeting? Do they usually tweet about this kind of event/topic?
- Contact: It’s obvious but bears repeating: send a direct message or reply and start talking to the person to see if they can provide additional details about the information they’re sharing. Ask them to shoot a picture or video. Ask them who they are, where they are etc.
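The Location, Network, and Content checks above lend themselves to a quick programmatic triage. The sketch below runs them against a snapshot of an account; the field names are illustrative stand-ins for data the Twitter API exposes (geo settings, follower lists, the user’s timeline), and passing these checks is a starting point for the Contact step, not a substitute for it.

```python
def vet_account(user, trusted_handles, topic):
    """Apply the Location / Network / Content checks to a snapshot of a
    Twitter account. Field names are illustrative; the real Twitter API
    exposes equivalents (geo_enabled, follower lists, the timeline)."""
    return {
        # Location: geo-enabled tweets hint (but don't prove) that the
        # person is where they claim to be.
        "geo_enabled": user["geo_enabled"],
        # Network: are they followed by accounts you already trust?
        "trusted_followers": bool(trusted_handles & set(user["followers"])),
        # Content: a history of on-topic tweets suggests sustained
        # first-hand interest rather than a one-off claim.
        "on_topic_history": sum(topic in t.lower()
                                for t in user["recent_tweets"]) >= 3,
    }

user = {
    "geo_enabled": True,
    "followers": ["acarvin", "random123"],
    "recent_tweets": ["Shelling near Tripoli again", "Power out in Tripoli",
                      "Quiet night so far", "More gunfire, west Tripoli"],
}
print(vet_account(user, {"acarvin", "bbcworld"}, "tripoli"))
```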
Once again, you see that the above contains similarities to the best practices suggested by Paul Bradshaw, excerpted below. They also echo the advice given by Craig Kanalley, now the traffic and trends editor at Huffington Post, back in 2009.
A few other suggestions from Kanalley’s post about verifying tweets:
1. Timestamp: Anytime something breaks with hundreds of tweets in minutes, like a natural disaster, it’s good to type various keywords and keep paging back until you find the first few tweets about the news. Unless these Tweeters are psychic, they’re probably among the first to have knowledge something’s up and they may have additional context depending on the story.
2. Contextual tweets: Immediately check the Twitter user’s page for related tweets around the tweet you found. You’d be surprised how often someone posts a follow-up tweet later or precedes the ‘breaking tweet’ with other pertinent info. This could provide additional context for the story, but it can also help verify a person, especially if they’re posting pictures or other content from the scene.
4. How many past tweets: Be leery of new Twitter users. If it’s one of their first tweets, it could be anybody starting an account and claiming to have info on a breaking story. The newer the account is, the more skeptical you have to be.
5. What are the past tweets: Check for context by examining the person’s Twitter stream. Go back several pages and see what they normally tweet about. Do they interact with people? Check the accounts they interact with for additional background on piecing together who this person might be. If they say they’re in Paris, are they talking about Paris a month ago? Are they tweeting in French? If not, why not? Evaluate the person and get a feel for them as best you can based on past tweets.
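Kanalley’s timestamp check is essentially a sort: page back through matching tweets until you reach the earliest ones, since those accounts are the likeliest first-hand sources. A minimal sketch, using sample data in place of a real search API:

```python
from datetime import datetime

def earliest_reports(tweets, keyword, n=3):
    """Kanalley's timestamp check: find the first few tweets mentioning
    the event. `tweets` is a list of (timestamp, handle, text) tuples,
    e.g. collected from a search API's result pages."""
    matches = [t for t in tweets if keyword.lower() in t[2].lower()]
    # Sorting on the tuples orders chronologically (timestamp first).
    return sorted(matches)[:n]

tweets = [
    (datetime(2011, 3, 11, 5, 48), "eyewitness_jp",
     "Huge earthquake just hit, building swaying"),
    (datetime(2011, 3, 11, 6, 30), "newsbot",
     "BREAKING: earthquake reported in Japan"),
    (datetime(2011, 3, 11, 5, 46), "tokyo_resident",
     "Earthquake! Strongest I've ever felt"),
]
first = earliest_reports(tweets, "earthquake")
print([handle for _, handle, _ in first])
```

The earliest matches are leads, not confirmations: per Kanalley’s other points, you would still check each account’s age and past tweets before trusting it.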
On top of the above advice, you can also read my general guide to online verification, which contains links to other relevant works. I also encourage you to use the social media accuracy checklist created by Mandy Jenkins. As for how to use checklists and why they’re so powerful, read more here.
So what am I missing? Tell me in the comments.
Correction of the Week
“A pull-quote accompanying the winter review of Haya Molnar’s book Under a Red Sky: Memoir of a Childhood m [sic] Communist Romania should have read ‘I wish the world would stop hating Jews because I’m still the same person I was before I knew I was Jewish.’ In an unfortunate typo, the pull-quote that ran replaced the word ‘hating’ with ‘having.’” - Lilith Magazine
Craig Silverman is the editor of RegretTheError.com and the author of Regret The Error: How Media Mistakes Pollute the Press and Imperil Free Speech. He is also the editorial director of OpenFile.ca and a columnist for the Toronto Star.
As a bonus, here’s a tweet Carvin sent last night in response to a question about how he finds credible people to follow:
Paul Bradshaw writes the Online Journalism Blog and is a leading digital media expert and teacher in Europe. He wrote a post that provides a variety of basic guidelines about online verification, and here’s the section about social media:
How long has the account existed? If it’s only existed since a relevant story broke (e.g. Jan Moir’s column; an earthquake where someone claims to be a witness) then it’s likely to be opportunistic.
Who did the person first ‘follow’ or ‘friend’? These should be personal contacts, or fit the type of person you’re dealing with. If their first follow is ReadWriteWeb, then it may be that you’re not actually dealing with a Daily Mail columnist.
Who first followed them? Likewise, it should be their friends and colleagues.
Who has spoken to them online? Ditto.
Who has spoken about them? Here you may find friends and colleagues, but also people who have rumbled them. But don’t take anyone else’s word for their existence unless you can verify them too.
Can you correlate this account with others? The Firefox extension Identify is a useful tool here: it suggests related social network accounts which you can then try to cross-reference. The Chrome extension Polaris Insights does something similar for companies.
For Twitter you might also try other tools including PeerIndex and Klout, both of which use algorithms to give extra information on the ‘human-ness’ and content of particular accounts. On Facebook there is the social commenting plugin which attempts to give a credibility score to commenters.
Finally, of course, you should try to speak to the person. Phone their office or their employer and confirm whether they do indeed have the account in question.
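Bradshaw’s first three questions can be framed as a single early-network check. The sketch below is purely illustrative: the field names are not a real API, and in practice you would gather an account’s creation date and earliest follows/followers by hand or through a platform’s API.

```python
from datetime import date

def early_network_check(account, trusted_contacts):
    """Sketch of Bradshaw's early-network checks: a genuine account
    usually predates the story and its first connections are personal
    contacts. All field names are illustrative."""
    return {
        # Did the account appear only after the story broke?
        "created_after_story": account["created"] > account["story_broke"],
        # Do the earliest connections include people you can verify?
        "follows_known_contacts":
            bool(set(account["first_follows"]) & trusted_contacts),
        "followed_by_known_contacts":
            bool(set(account["first_followers"]) & trusted_contacts),
    }

account = {
    "created": date(2011, 2, 16),       # the day after the story broke
    "story_broke": date(2011, 2, 15),
    "first_follows": {"readwriteweb", "mashable"},  # news brands, not friends
    "first_followers": {"spambot88"},
}
print(early_network_check(account, {"colleague_a", "editor_b"}))
```

Here every signal points the wrong way, which, per Bradshaw, is when you pick up the phone.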
Last August I spoke with Jaroslav Valuch, who was at the time the project manager for Ushahidi Haiti. Ushahidi is a remarkable project that uses mapping to help provide critical insight and information during a crisis. Its mapping platform is available to anyone here, and is now being used for things other than crisis situations. When an earthquake struck Haiti early last year, Ushahidi worked quickly to map reports of damage, calls for assistance, and other information that could be of use to emergency response teams and international aid organizations.
In the early days, the Haiti team relied a lot on reports coming in from Twitter and other online sources. The challenge was to determine which people and which reports were credible.
“Even though the information from Twitter is not particularly reliable—and things are being retweeted so it’s kind of messy—the basic idea is if you crowdsource the information and put it on one map you can really see the clusters of incidents,” Valuch said. “So even though one particular tweet is not that important, if you have similar reports from the media you can see where the incidents are clustering.”
The lesson here is that by clustering a mass of unverified reports you can start to see trends. One strong caution is that just because something is being repeated doesn’t mean it’s true. But it can be a valuable indicator, and at the very least it tells you that a piece of information is worth looking into. Also remember Carvin’s point that unconnected sources saying the same thing can be an indicator of accuracy.
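The clustering idea can be sketched in a few lines: bucket unverified geo-tagged reports into coarse grid cells and count them, so hotspots stand out even though no single report is trusted on its own. The coordinates and cell size below are illustrative, and Ushahidi’s actual platform is far more sophisticated than this toy example.

```python
from collections import Counter

def cluster_reports(reports, cell_deg=0.05):
    """Ushahidi-style aggregation sketch: bucket geo-tagged reports
    into grid cells of `cell_deg` degrees and rank cells by volume.
    `reports` is a list of (lat, lon) pairs from unverified sources."""
    cells = Counter((round(lat / cell_deg), round(lon / cell_deg))
                    for lat, lon in reports)
    # Cells with many independent reports are worth investigating first.
    return cells.most_common()

# Three reports near central Port-au-Prince, plus one outlier.
reports = [(18.5392, -72.3364), (18.5401, -72.3350),
           (18.5388, -72.3371), (18.9712, -72.2852)]
print(cluster_reports(reports))
```

The top-ranked cell is a lead for responders to check, not a verified incident, which matches Valuch’s caution about messy retweeted data.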
Here’s what I wrote in August:
The Ushahidi Haiti team discovered that by mapping the unverified reports, they were able to see if different sources were reporting similar things in similar areas. It was verification by aggregation. They would also attempt to verify tweets by seeing if they were retweeted by trusted sources, checking if the originating Twitter account was followed by people in Haiti, and looking to see if the user had enabled location data in their tweets.
During my conversation with Valuch, I gathered a few other tips for verifying reports on Twitter: