behind the news

Misinformation Propagation

Scientists work to combat false memes
November 4, 2011

Growing up in Rome, Filippo Menczer used to watch the local con artists offer gullible tourists a chance to buy the Coliseum. The scam worked often enough that it spread and other people began doing it, until a combination of police action and human intelligence defeated it. (Well, at least no one tried to sell me the Coliseum when I visited a few years ago.)

Decades later, Menczer is focused on a different kind of propagation: how information, including misinformation, spreads over social networks. He is a professor of informatics and computer science and the director of the Center for Complex Networks and Systems Research at the Indiana University School of Informatics and Computing. He’s also one of the principal investigators of Truthy, “a system to analyze and visualize the diffusion of information on Twitter.”

The Truthy team is examining how information and memes propagate on Twitter. Truthy achieved a measure of fame for spotting political astroturf during the 2010 mid-term elections. (Watch this great video report by The Wall Street Journal to learn more about the basics of the project.)

Its initial work identifying astroturf campaigns showed how Truthy and projects like it could deliver valuable intelligence to journalists. I recently wrote about how Storyful is in some ways offering an outsourced social media verification service for news organizations. Truthy’s focus on information diffusion has the potential to yield critical insight into the characteristics of misinformation and how it spreads. Perhaps one day it can help us detect misinformation before it spreads widely online, heading off errors, hoaxes, and other falsehoods.

“Our project is meant in general to look at how information is spread online, not just misinformation,” Menczer said. “But that is part of the picture: [if you] understand what a normal pattern is then it can help you also understand what are patterns that may indicate abuse or something that is not normal, or some misinformation or what we call astroturf or spam.”

One barrier to obtaining a clear picture of misinformation is the ever-morphing nature of information and the evolving ways people use social networks.


“It’s an arms race, absolutely,” Menczer told me, referring to the challenge of staying one step ahead of the scammers and liars.

Part of the difficulty is that misinformation can spread the same way good information does. News organizations now work to perfect how their stories are shared over social networks. A similar impulse drives people on the other side of the information dissemination equation: those trying to push out a hoax, spread spam, or infect users with an “I saw a real bad blog about you” phishing attack on Twitter. The same techniques that help make a good meme also provide a playbook for those seeking to game our attention and networks. An added complication, according to Menczer, is the challenge of tracking things back to their source.

“It’s very difficult to distinguish these kinds of patterns from something genuine, especially after a while,” Menczer said. “If people fall for it then they start retweeting as well and then the trace of its fakeness [becomes] buried in the initial moments. We’ve observed this as well, and so we are focused on trying to make this kind of detection very early on because after a while, if something takes off, whether it’s fake or not, it’s hard to tell.”

That means early detection is a must-have for any misinformation detection system.

“Once it’s exploded it’s very hard to beat back,” Menczer said. “… You might later be able to say ‘Oh, that was fake’, but very few people may see that.”

He said the hope is to eventually develop a platform that can be used by journalists and the public to track suspicious memes and information, determine the source, and help evaluate the accuracy. For now, he and the team of researchers in Indiana are focused on tracking the spread of information on Twitter. But the ideal platform would cover a variety of social networks and the web.

That sounds daunting, and Menczer said one of the biggest, messiest, and most complex parts of this problem is us. The human element.

“People are the most complex things that we study,” Menczer said. “We have forecast systems for the weather and we can study subatomic particles and we can study galaxies. Once we understand the physics of it, we have something to go by. And here, there is no physics.”

During our discussion, Menczer identified several elements that could form the basis of such a system, though it’s of course a moving target. For now, he said a misinformation detection and debunking system would combine the following (a rough sketch of how these pieces might fit together appears below the list):

  • Network analysis to track how something is spreading and the characteristics of the network of people helping it spread.
  • Content/semantic analysis to examine elements of the message itself and see whether the words, or the way they are structured, look suspect. For example, researchers have found certain shared characteristics that can help identify fake online reviews. Is the same true for types of misinformation or falsehoods?
  • Sentiment analysis to see, as Menczer said, “Is it the case that, say, negative things are more likely to spread than positive things, or angry things more than happy things?” Does sentiment correlate with hoaxes or spam or misinformation?
  • Temporal dynamics to evaluate spikes and patterns of memes over time. Menczer said his team and others are trying to identify the “temporal dynamics of attention, for example the lifetime of a meme, how quickly it rises in popularity, how it decays, and the actual shapes of these curves.”
  • Human computation/crowdsourcing. Since the human element is a complicating factor in misinformation, it also needs to be part of the solution. Having, for example, an easy way for people to flag something as suspect would be valuable if mass participation can be achieved. Menczer cited the spam button in Gmail as a system to replicate.

“What happens now is when you get [a] message in Gmail and label it as spam and enough people do that then Gmail will automatically say ‘Oh, yes, this is spam’ and everybody else will not see it,” he said. “That is a good example of a social system or a crowdsourced approach.”
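
To make those ingredients concrete, here is a minimal, purely illustrative sketch in Python of how the five signals might be combined into a single suspicion score for a meme. To be clear, this is not Truthy’s method: the Meme class, the signal functions, the phrase lists, and every threshold below are hypothetical stand-ins, chosen only to show how the pieces could fit together.

from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Meme:
    """A single piece of spreading content (hypothetical structure for illustration)."""
    text: str                                              # representative message text
    retweet_accounts: list = field(default_factory=list)   # accounts that pushed it
    timestamps: list = field(default_factory=list)         # posting times, epoch seconds
    user_flags: int = 0                                     # "this looks fake" reports


def network_signal(meme: Meme) -> float:
    # Network-analysis stand-in: astroturf often relies on a handful of accounts
    # retweeting each other heavily, so score how concentrated the activity is.
    if not meme.retweet_accounts:
        return 0.0
    return 1.0 - len(set(meme.retweet_accounts)) / len(meme.retweet_accounts)


def content_signal(meme: Meme) -> float:
    # Content/semantic stand-in: look for wording patterns that, like the telltale
    # features of fake reviews, tend to accompany deceptive messages.
    suspect_phrases = ("you won't believe", "share before they delete", "100% proof")
    return min(1.0, sum(p in meme.text.lower() for p in suspect_phrases) / 2)


def sentiment_signal(meme: Meme) -> float:
    # Sentiment stand-in: a toy negativity score based on word counts.
    negative_words = {"angry", "outrage", "disgusting", "scam", "lie"}
    words = [w.strip(".,!?").lower() for w in meme.text.split()]
    return min(1.0, 5 * sum(w in negative_words for w in words) / max(len(words), 1))


def temporal_signal(meme: Meme) -> float:
    # Temporal-dynamics stand-in: organic memes tend to build gradually, so a high
    # share of near-simultaneous posts hints at coordination.
    ts = sorted(meme.timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return sum(g < 5 for g in gaps) / len(gaps) if gaps else 0.0


def crowd_signal(meme: Meme, reports_needed: int = 10) -> float:
    # Crowdsourcing stand-in: like Gmail's spam button, the more readers flag it,
    # the stronger the signal.
    return min(1.0, meme.user_flags / reports_needed)


def suspicion_score(meme: Meme) -> float:
    # Naive combination: average the five signals. A real system would weight and
    # validate them against labeled examples.
    return mean([network_signal(meme), content_signal(meme), sentiment_signal(meme),
                 temporal_signal(meme), crowd_signal(meme)])


if __name__ == "__main__":
    meme = Meme(
        text="You won't believe this outrage! Share before they delete it!",
        retweet_accounts=["bot1", "bot2", "bot1", "bot2", "bot1"],
        timestamps=[0, 2, 3, 4, 6],
        user_flags=4,
    )
    print(f"suspicion score: {suspicion_score(meme):.2f}")

Even this toy version illustrates the combination Menczer describes: no single signal is decisive, but together they can push a suspect meme above a threshold worth a journalist’s attention.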

Today we have organizations like PolitiFact applying a purely human approach to a very specific set of information: public statements by politicians and officials. Stopping false memes on a large scale will require a mixture of human and machine computational power and intelligence, working in real time.

“There’s a timescale at which things are propagating in social media that’s so short,” Menczer said. “We’re talking seconds and minutes rather than hours and days and that [purely human] approach just doesn’t work. So we have to have something automated and that’s what we’re trying to build.”
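
To make that timescale concrete, here is another small, hypothetical sketch, again not Truthy’s actual pipeline, of how an automated system might watch a stream of posts and flag a meme the moment it starts bursting, within minutes rather than days. The BurstDetector class, the five-minute window, and the threshold are all assumptions for illustration.

from collections import defaultdict, deque
import time


class BurstDetector:
    """Flags memes whose mention rate spikes inside a short sliding window (illustrative only)."""

    def __init__(self, window_seconds: int = 300, burst_threshold: int = 50):
        self.window_seconds = window_seconds    # how far back to look (here, 5 minutes)
        self.burst_threshold = burst_threshold  # mentions within the window that count as a burst
        self.mentions = defaultdict(deque)      # meme -> timestamps of recent mentions

    def observe(self, meme: str, timestamp: float) -> bool:
        """Record one mention and return True if the meme is currently bursting."""
        window = self.mentions[meme]
        window.append(timestamp)
        # Drop mentions that have aged out of the window.
        while window and window[0] < timestamp - self.window_seconds:
            window.popleft()
        return len(window) >= self.burst_threshold


if __name__ == "__main__":
    detector = BurstDetector(window_seconds=300, burst_threshold=5)
    now = time.time()
    # Simulate five rapid mentions of the same hashtag within a few seconds.
    for i in range(5):
        bursting = detector.observe("#fakestory", now + i)
    print("burst detected:", bursting)

A burst alone says nothing about truth or falsehood; in practice it would simply tell a system like the one sketched above which memes to score first, while there is still time to intervene.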

How humans and machines will ultimately interact is far from clear at this early stage. It’s encouraging, however, that Menczer said information dissemination and meme tracking are becoming popular areas for researchers.

“These [social networking] technologies are becoming popular and so that’s where the eyeballs are and that’s where there’s going to be financial and other kinds of incentives [for research] …,” he said. “As we realize that there’s a medium that can be abused, then researchers like us are trying to see what can be done to protect it or to make it somewhat reliable, because otherwise it just becomes noise and pollution and worthless.”

Are you a researcher or scientist with ideas about what a misinformation detection system should look like? I want to hear from you.

Correction of the Week

A story in Saturday’s Real Deal section suggested that a fun thing to do for Halloween is to write “poison” on a plastic jar or bottle and fill it with candy for the kids to eat. A picture that accompanied the story showed a skull and crossbones image similar to the symbol used to indicate something is poisonous. The Citizen understands the need to train children not to touch and never to eat or drink from bottles or jars with that symbol on it, and it was a lapse in judgment for us to have suggested otherwise. For expert poison advice 24 hours a day, anywhere in Ontario, call 1-800-268-9017, or visit the Ontario Poison Centre website.

The Citizen wishes everyone a safe and enjoyable Halloween. — Ottawa Citizen

Craig Silverman is currently BuzzFeed's media editor, and formerly a fellow at the Tow Center for Digital Journalism.