From the beginning the story seemed suspect, but that didn’t stop the New York Post’s report last month of a surgically enhanced, three-breasted woman from overtaking the internet. By the time the story was debunked, just a day later, the tabloid-friendly tale had already made the rounds, generating posts on BuzzFeed, The Week, The Telegraph, and the New York Daily News. Together, those stories were shared almost 190,000 times on Facebook, Twitter, and Google+. But stories correcting the record, like TMZ’s “3-Boobed Woman a Fake,” generated about a third as many shares.
That’s a problem. The internet makes it easy to spread misinformation–a bad article picked up, unverified, in dozens of blog posts, its reach extended by tweets and Facebook shares. But as easy as it is to spark a rumor, it’s difficult to squash one. In pre-digital days, a newspaper could run a correction in the next day’s issue. Now, by the time a story is debunked, it has already traveled, with no guarantee that readers will navigate back to the original source–or any source at all–to see a corrected version. Some research suggests that most of a story’s sharing happens within the first few hours of publication, before the crowdsourced factchecking churn determines its accuracy. By then, the damage is already done. And if there’s no natural way for a correction to make its way to a reader, how can readers trust anything they read on the Web?
A host of publications debunk misinformation–from FactCheck.org, a political factchecker run out of the University of Pennsylvania’s Annenberg School, to Caitlin Dewey’s Washington Post column “What Was Fake on The Internet This Week”–but the record-correcting articles aren’t always as readable, or as shareable, as the originals.
“Everyone knows the original sensational stories are much more interesting than the mundane corrections,” says David Mikkelson, part of the husband-and-wife team that runs the rumor factchecking website Snopes. While Mikkelson says his debunks draw more traffic when the originals are big, fewer people still read his stories than read the incorrect versions.
There isn’t a lot of data tracking how people read corrections on the Web. But what little exists–like a study Facebook published last year tracking conversations on posts that had a Snopes debunking article shared underneath–suggests that corrections don’t travel nearly as far as the initial hoax. In the Snopes/Facebook study, the debunking did what it was supposed to do–cut the speculative conversation short–but only because people deleted their initial share rather than resharing the factchecked version.
That’s part of what makes Emergent such a promising new project. Produced by Craig Silverman as part of a fellowship at the Tow Center for Digital Journalism at Columbia University, Emergent tracks suspect stories as they spread online. After debuting at the recent Online News Association conference, it received a wave of coverage touting the algorithms it uses to track misinformation in real time. But Emergent’s algorithm tracks other useful things too, including how often debunking stories are shared compared to their originals, and what updates (read: corrections) are made to the articles themselves–information that, according to Silverman, will help fill the research gap on how corrections spread.
“Digital readers are promiscuous,” he told me when I asked about the biggest obstacle to correcting online misinformation. “They’re clicking on links from people in their networks and all these news sites are handling misinformation in different ways.”
Take a story ultimately debunked by Silverman’s site: that in the wake of Ray Rice’s suspension, ESPN was hosting a special domestic violence panel–with no women on it. It originated with a strangely worded post by Ben Collins on Esquire, which was corrected with a long explanation of the misunderstanding at the top and a reworded headline.
Pretty straightforward. But Jezebel, which picked up the item from Esquire, amended its own headline only by adding “UPDATED,” otherwise keeping the post in its entirety, including its description of the non-incident as “a remarkable showing of the exact flavor of dumb meathead that ESPN is so adept at.” To find the correction, a reader would have to get through the entire story to where it’s appended at the end.
But that’s better than millennial whisperer site Mic, which didn’t amend its post at all. (Neither the author at Mic nor the one at Jezebel responded when I contacted them for comment.)
Philly.com’s more traditional correction left the original hed intact–“ESPN to save NFL’s image with all-male domestic abuse discussion”–and added a correction at the top of the story. But that might be as bad as no correction at all, since the vast majority of people who see a story on social media never click through–meaning they’re taking their information from the headline alone. For that reason, Silverman suggests that flagging corrections in the headline itself is key in the digital age.
But Silverman is hoping that Emergent can reach more nuanced conclusions. He’ll be collecting data until the end of November (the site will keep running afterward) and analyzing it for patterns, including which debunking articles spread the farthest. That way reporters will have a shot at making corrections go as viral as the original stories–a set of best practices that might be the first step toward a more reliable internet.
“Right now debunking, it’s basically like spoiling the fun,” says Silverman. “But these articles could be presented in ways that make them more shareable; presented in ways that make them more enjoyable; presented in ways that make them, frankly, less of an annoyance than they are now.”
He stopped for breath before continuing. “As I’m sitting here listing these things, I’m thinking, ‘Wow, there’s a lot of work to get done.’”
Alexis Sobel Fitts is a senior writer at CJR. Follow her on Twitter at @fittsofalexis.