“People are the most complex things that we study,” Menczer said. “We have forecast systems for the weather and we can study subatomic particles and we can study galaxies. Once we understand the physics of it, we have something to go by. And here, there is no physics.”

During our discussion, Menczer identified several elements that could form the basis of a system, though it’s of course a moving target. For now, he said, a misinformation detection and debunking system would combine the following (a rough sketch of how these signals might be stitched together appears after the list):

  • Network analysis to track how something is spreading and the characteristics of the network of people helping it spread.
  • Content/semantic analysis to examine elements of the message itself and see whether the wording, or its structure, is suspect. For example, researchers have found shared characteristics that help identify fake online reviews. Is the same true for types of misinformation or falsehoods?
  • Sentiment analysis to see, as Menczer said, “Is it the case that, say, negative things are more likely to spread than positive things, or angry things more than happy things?” Does sentiment correlate with hoaxes or spam or misinformation?
  • Temporal dynamics to evaluate spikes and patterns of memes over time. Menczer said his team and others are trying to identify the “temporal dynamics of attention, for example the lifetime of a meme, how quickly it rises in popularity, how it decays, and the actual shapes of these curves.”
  • Human computation/crowdsourcing. Since the human element is a complicating factor in misinformation, it also needs to be part of the solution. Having, for example, an easy way for people to flag something as suspect would be valuable if mass participation can be achieved. Menczer cited the spam button in Gmail as a system to replicate.
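
To see how those five signals might be stitched together, here is a deliberately simplified sketch in Python. Everything in it — the feature names, the weights, the scoring function — is invented for illustration; it is not Menczer’s system, and a real detector would learn its weights from labeled examples rather than hard-coding them.

```python
# Illustrative sketch only: combining the five signal families described
# above into one suspicion score. All names and weights are hypothetical.

from dataclasses import dataclass, asdict

@dataclass
class MemeSignals:
    network_anomaly: float     # 0..1: bot-like or coordinated spread patterns
    content_suspicion: float   # 0..1: wording resembles known hoaxes
    negative_sentiment: float  # 0..1: strength of anger/negativity
    burstiness: float          # 0..1: how spiky the attention curve is over time
    crowd_flags: float         # 0..1: fraction of viewers who flagged it

# Hypothetical weights; a real system would learn these from labeled data.
WEIGHTS = {
    "network_anomaly": 0.25,
    "content_suspicion": 0.25,
    "negative_sentiment": 0.10,
    "burstiness": 0.15,
    "crowd_flags": 0.25,
}

def suspicion_score(signals: MemeSignals) -> float:
    """Weighted combination of per-signal scores, in the range 0..1."""
    return sum(WEIGHTS[name] * value for name, value in asdict(signals).items())

if __name__ == "__main__":
    meme = MemeSignals(0.8, 0.6, 0.7, 0.9, 0.4)
    print(f"suspicion: {suspicion_score(meme):.2f}")  # roughly 0.65
```

The hard part, of course, is not the weighted sum but the extractors behind each score, each of which is a research problem in its own right.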

“What happens now is when you get a message in Gmail and label it as spam, and enough people do that, then Gmail will automatically say ‘Oh, yes, this is spam’ and everybody else will not see it,” he said. “That is a good example of a social system or a crowdsourced approach.”
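
The mechanics Menczer is describing are simple enough to sketch. The toy model below is not Gmail’s actual algorithm — the threshold and data structures are invented, and the real system also weighs things like flagger reputation and machine-learned signals — but it captures the crowdsourcing idea: once enough independent users flag a message, it is hidden for everyone.

```python
# Toy model of crowd flagging in the spirit of Gmail's spam button.
# The threshold is invented for illustration only.

from collections import defaultdict

FLAG_THRESHOLD = 50  # hypothetical cutoff

flags = defaultdict(set)  # message_id -> set of user_ids who flagged it

def flag_as_spam(message_id, user_id):
    """Record one user's flag; duplicate flags from the same user don't count."""
    flags[message_id].add(user_id)

def is_hidden(message_id):
    """Once enough independent users agree, hide the message for everyone."""
    return len(flags[message_id]) >= FLAG_THRESHOLD

# After 50 distinct users flag the same message, everyone else stops seeing it.
for i in range(50):
    flag_as_spam("msg-123", f"user-{i}")
assert is_hidden("msg-123")
```

The same pattern could, in principle, back a “flag as suspect” button for misinformation — which is exactly the mass-participation mechanism Menczer has in mind.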

Today we have organizations like PolitiFact applying a purely human approach to a very specific set of information: public statements by politicians and officials. Stopping false memes on a large scale will require a mixture of human and machine computational power and intelligence, working in real time.

“There’s a timescale at which things are propagating in social media that’s so short,” Menczer said. “We’re talking seconds and minutes rather than hours and days and that [purely human] approach just doesn’t work. So we have to have something automated and that’s what we’re trying to build.”

How humans and machines will ultimately interact is far from clear at this early stage. It’s encouraging, however, that Menczer said information dissemination and meme tracking are becoming popular areas for researchers.

“These [social networking] technologies are becoming popular and so that’s where the eyeballs are and that’s where there’s going to be financial and other kinds of incentives [for research] …,” he said. “As we realize that there’s a medium that can be abused, then researchers like us are trying to see what can be done to protect it or to make it somewhat reliable, because otherwise it just becomes noise and pollution and worthless.”

Are you a researcher or scientist with ideas about what a misinformation detection system should look like? I want to hear from you.

Correction of the Week

A story in Saturday’s Real Deal section suggested that a fun thing to do for Halloween is to write “poison” on a plastic jar or bottle and fill it with candy for the kids to eat. A picture that accompanied the story showed a skull and crossbones image similar to the symbol used to indicate something is poisonous. The Citizen understands the need to train children not to touch and never to eat or drink from bottles or jars with that symbol on them, and it was a lapse in judgment for us to have suggested otherwise. For expert poison advice 24 hours a day, anywhere in Ontario, call 1-800-268-9017, or visit the Ontario Poison Centre website.

Craig Silverman is the editor of RegretTheError.com and the author of Regret The Error: How Media Mistakes Pollute the Press and Imperil Free Speech. He is also the editorial director of OpenFile.ca and a columnist for the Toronto Star.