Tow Center

WhatsApp has a fake news problem—that can be fixed without breaking encryption

August 23, 2018
The WhatsApp icon on a smartphone. Photo: Álvaro Ibáñez via Flickr

Fake news spreads widely and unchecked on WhatsApp. The platform’s executives and representatives claim that because WhatsApp is an end-to-end encrypted network, there’s little they can do to stop this. But we believe that even though WhatsApp cannot read the content of messages, it has access to, and uses, their metadata. We argue that, by combining metadata with human content moderation, WhatsApp could stop the spread of fake news, remove misinformation from its network, and even punish bad actors. This approach would prove far more effective than the company’s current efforts at slowing down the spread of all information.

WhatsApp, owned by Facebook, is one of the most popular instant messaging apps globally, with over a billion people using it each month. It is widely used across Asia, Africa, Latin America, and Europe. But WhatsApp has also increasingly been documented as a leading factor in the propagation of fake news that has fueled lynchings, political propaganda, and sectarian clashes. At the same time, because it is an end-to-end encrypted platform, WhatsApp cannot access or see the content being shared by its users, and so there is a general view that there is very little it can do. Even the company’s own engineers have repeated this view in their interactions with journalists. (Disclosure: Himanshu Gupta, one of the authors of this piece, worked for Tencent’s messaging and payment app WeChat from 2013 to 2015.)

The severity of the fake news crisis and its impact is forcing many countries to react strongly. India is considering regulations that would require social media platforms and instant messaging apps such as WhatsApp to manage fake news on their platforms themselves, trace the origin of malicious messages, and be held legally responsible for abetting falsehoods distributed through their services.

We believe that WhatsApp can stop the spread of faked videos, images, and text on its platform and achieve the above aims entirely without breaking end-to-end encryption, using the metadata of files attached to messages, which we posit WhatsApp already has access to, coupled with content moderation.

WhatsApp, facing criticism over fake news and lynchings, has recently added a few product tweaks to slow down media sharing and empower group admins to stop the spread of misinformation. But the company’s approach remains limited in scope: it is neither tracking nor moderating the actual content, and it relies only on certain sharing behaviors as assumed proxies for the spread of fake news. Our perspective broadly aligns with the contemporary interdisciplinary view that advocates a combination of user input and interventions by the platforms themselves to thwart the spread of fake news.

Because WhatsApp so rarely communicates with researchers or the press—the company ignored multiple requests for comment on this and earlier drafts of this article—our proposal is based on our use of the product and WhatsApp’s own publicly available disclosures. But other experts in this field support our arguments.



“Based on publicly observable indicators such as WhatsApp’s ability to cache popular media files and serve them to new users without requiring a re-upload, it seems likely that WhatsApp has the ability to uniquely identify at least some end-to-end encrypted messages, even if they cannot actually peer into their contents, based on various pieces of metadata,” said Vinay Kesari, an independent lawyer based in Bengaluru specializing in technology law and policy. “This is not inconsistent with their assurances of not being able to ‘read’ user messages. It does, however, seem to open up the possibility of being able to track a message (or at least media) once it has been “reported,” stop its spread, and perhaps even trace the source in some cases.”

WhatsApp has a different problem than Twitter or Facebook

Facebook and Twitter are semi-public or public communication networks, with popular public accounts drawing millions of followers. Due to the algorithmic nature of their newsfeeds and additional features such as “trends,” if a “power user” shares a fake story, any follower can offer a corrective by commenting. That comment, once it starts getting traction (for example, “likes” or “retweets”), can quickly become a “top comment,” ensuring that everyone on the platform can see the counterargument.

Additionally, both Facebook and Twitter show the timestamp of the original content, along with the handle of the person or page posting it, allowing readers the opportunity to understand the context. Further, users’ posts to Facebook and Twitter aren’t encrypted, so these companies can potentially remove any content on their platforms once it is reported by users or flagged by their algorithms.

WhatsApp, on the other hand, is a private communication network made up of one-to-one connections and small groups; the platform limits each chat group to 256 users. It thus belongs to what The Atlantic writer Alexis Madrigal called “dark social”: closed systems of relationships invisible to the general public. WhatsApp’s network is highly social, but not measurable. There is no easy way to identify and contact power users, the people who belong to many WhatsApp groups and post content frequently.

Unlike on Facebook or Twitter, it is not possible to look at a WhatsApp message and identify when it was written or originally posted, or by whom. This allows content on WhatsApp to exist devoid of context or time: a piece of fake content can resurface after years, with repeated devastating effects. And since groups on WhatsApp can’t exceed 256 members, even if a user challenges the veracity of a piece in a group conversation, the rebuttal won’t reach WhatsApp users who are not part of that specific group. So it may seem very difficult to stop fake news from spreading on WhatsApp.

WhatsApp’s efforts at stopping fake news fall short

WhatsApp has been adding some minor tweaks to its products to check the spread of fake news. In a beta release in 2017, WhatsApp began telling users that a particular message had been forwarded multiple times. More recently, WhatsApp began labeling forwarded messages as “forwarded,” so that a recipient can recognize that the sender isn’t the original author of the message. In both cases, WhatsApp is likely cross-checking metadata to identify forwarded messages, for reasons expanded on below. Additionally, WhatsApp has limited its forwarding option to allow forwarding to only five chats at a time, and the company has removed the “quick forward” button next to media messages. The last change has been made specifically for India, where the propagation of hoaxes on WhatsApp is being blamed for lynch mobs killing over 20 people in recent months.

However, none of these minor tweaks is enough in itself. Since “forwards” constitute a significant proportion of the messages a WhatsApp user receives, simply indicating whether a message is forwarded isn’t enough for a recipient to differentiate fake news from genuine news. And while adding “friction” to the forwarding process is good, a fake news message can still travel on the network unchecked indefinitely.


In summary, none of the measures taken by WhatsApp would eliminate the “fake” pieces of content from its network. Therefore, we propose an alternative approach here, one that includes content moderation. Our premise is that a piece of misinformation that has gone viral merits removal after due fact-checking, rather than blindly slowing down the forwarding of all media on the platform.

Metadata to the rescue

We believe there is sufficient evidence to demonstrate that although WhatsApp can’t read the contents of messages, it reads and stores part of the metadata of every message sent on its platform, and can use this capability to check the spread of fake news. This assertion is based on the following evidence:

First, even though WhatsApp claims to delete all messages from its servers after delivery, in an affidavit filed by the company in September 2016 before the Delhi High Court, the company suggests otherwise:

To improve performance and deliver media messages more efficiently, such as when many people are sharing a popular photo or video, WhatsApp may retain that content on its servers for a longer period of time.

In other words, WhatsApp does retain popular files on its servers in order to deliver faster file transfers, resulting in a better user experience and saving users’ internet bandwidth.

Another experiment, reported in detail in Asia Times, confirms that WhatsApp stores data on its servers long after the original chat participants have downloaded it or deleted it from their devices.

Second, the implementation of faster file transfers can be inferred from the details explained in WhatsApp’s encryption security paper. As one can observe in practice, once a user has downloaded any media attachment (video, image, or document) received in a WhatsApp chat from a friend, the user can then “forward” the file instantly to any other contact on WhatsApp. No “file upload to server” takes place, which would have consumed internet bandwidth and taken time; instead, the recipient gets the attachment almost instantly. These message attachments are critical to the spread of fake news, as mislabeled or altered videos and images are distributed as “evidence” of fictional crimes that inspire mobs or cause riots.

WhatsApp’s encryption security paper states that WhatsApp uniquely identifies each attachment with a cryptographic hash (a short string derived from the file that is unique to its contents), and whenever a downloaded attachment is “forwarded,” WhatsApp checks whether a file with the same cryptographic hash already exists on its server. If the answer is yes, WhatsApp does not upload the file from the user’s phone; instead it sends the copy stored on its server directly to the final recipient. This implementation, while improving the user experience by speeding up file transfers and saving the end user’s internet bandwidth, also demonstrates that WhatsApp can point to specific files residing on its servers despite the end-to-end encryption. Hence, it has the capability to track a specific piece of content on its platform even if, because of end-to-end encryption, it does not know what the actual content of that message is.
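To make the mechanism concrete, here is a minimal sketch, in Python, of hash-based deduplication of the general kind the encryption paper describes. It assumes a server-side store keyed by a content hash; the function names and the use of SHA-256 are our illustrative choices, not a description of WhatsApp’s actual implementation.

```python
import hashlib

# A hypothetical server-side blob store keyed by content hash.
# In a scheme like WhatsApp's, the stored bytes would already be
# encrypted on the sender's device; the server sees only ciphertext.
blob_store = {}  # hex digest -> attachment bytes

def content_hash(data: bytes) -> str:
    """Return a cryptographic digest that uniquely identifies the file."""
    return hashlib.sha256(data).hexdigest()

def upload_if_new(attachment: bytes) -> str:
    """Upload an attachment only if no file with the same hash is stored yet."""
    digest = content_hash(attachment)
    if digest not in blob_store:
        blob_store[digest] = attachment  # bandwidth is spent only once
    return digest  # a forwarded message need only carry this reference

def forward(digest: str) -> bytes:
    """Serve the cached copy to a new recipient without a re-upload."""
    return blob_store[digest]
```

The point of the sketch is that a server can recognize and re-serve a file by its hash without ever needing to decrypt it.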

Additionally, we believe that WhatsApp tracks and stores messages and metadata not just for attachments, as mentioned in its encryption security paper, but for text messages as well. In a beta build from a year ago, WhatsApp displayed whether a particular message had been forwarded multiple times in the past. The feature, which would most plausibly work by matching a cryptographic hash, worked for text messages too. This suggests that WhatsApp stores all messages and their respective metadata on its servers.

Third, WhatsApp’s own privacy policy suggests that it can read the metadata:

WhatsApp may retain date and time stamp information associated with successfully delivered messages and the mobile phone numbers involved in the messages, as well as any other information which WhatsApp is legally compelled to collect.

Lastly, WhatsApp changed its terms of service in August 2016 to say that it would share phone numbers and metadata attributes such as “last seen” with Facebook (but not chat messages, since they are end-to-end encrypted). In response to a TechCrunch enquiry, Facebook said the sharing of data would lead to “better friend suggestions” and “more relevant ads” for WhatsApp users who also use Facebook. Kashmir Hill of Gizmodo wrote that Facebook may be using the metadata from WhatsApp to improve its “People You May Know” feature:

In 2014, it [Facebook] bought WhatsApp, which would theoretically give it direct insight into who messages who. Facebook says it doesn’t currently use information from WhatsApp for People You May Know, though a close read of its privacy policy shows that it’s given itself the right to do so.

Therefore, even if WhatsApp can’t actually read the contents of a message, it can access the unique cryptographic hash of that message (which it uses to enable instant forwarding), the time the message was sent, and other metadata. It can also potentially determine who sent a particular file to whom. In short, it can track a message’s journey on its platform (and thereby, fake news) and identify the originator of that message.

If WhatsApp can precisely identify a particular message’s metadata, it can tag that message as “fake news” after appropriate content moderation. It can be argued that WhatsApp could also, with some tweaks to its algorithms, identify the original sender of a fake news image, video, or text, and potentially stop that content from spreading further on its network.
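For illustration, the sketch below shows how, if WhatsApp does retain per-message routing metadata (which account forwarded which content hash to whom, and when), the originator of a piece of content could be traced and its spread counted. The record structure and field names are hypothetical assumptions about what such metadata could look like, not WhatsApp’s schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class ForwardEvent:
    """One hypothetical routing record; field names are illustrative only."""
    content_hash: str   # hash of the attachment, not its contents
    sender: str         # account that forwarded the message
    recipient: str      # account or group that received it
    sent_at: datetime

def trace_origin(log: List[ForwardEvent], digest: str) -> Optional[ForwardEvent]:
    """Return the earliest recorded share of a given attachment hash."""
    events = [e for e in log if e.content_hash == digest]
    return min(events, key=lambda e: e.sent_at) if events else None

def spread_of(log: List[ForwardEvent], digest: str) -> int:
    """Count how many times the attachment has been passed along."""
    return sum(1 for e in log if e.content_hash == digest)
```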

Fixing fake news with content moderation

To identify whether a message is fake or not, we suggest that WhatsApp rely on its user community to forward suspect messages to its content moderation system. For the purpose of fake news moderation, WhatsApp could create a new business account for itself and call it “Fake news moderator.” A business account is a new kind of WhatsApp account that has access to WhatsApp’s business APIs; it allows a business with multiple customer service agents to serve a single customer in a single chat, and even to use chatbots for auto-replies, as against a normal WhatsApp account, which can be used by only one human user at a time. Users could “report” any suspect fake news forwards, images, or videos they’ve received by tapping and holding the message and selecting a newly created “report this message” option, which would “forward” that piece of content to WhatsApp’s fake news verifier account.

WhatsApp’s content moderators could then check the veracity of the content, tag it as “fake” or “genuine,” and reply to the person who reported it with their analysis, similar to how Facebook and Twitter handle flagged content. Where WhatsApp deems content “fake,” it can match the cryptographic hash of the fake videos or images (or match the content itself in the case of text messages) and either block that specific content from being forwarded further on its platform, or show a warning next to it inside WhatsApp chats that the content is likely “fake,” suggesting that users verify it outside of WhatsApp. With machine learning and artificial intelligence, this content moderation process can be scaled so that the same file or message doesn’t need to be checked manually again and again.
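A minimal sketch of that pipeline, keyed on the same content hash used for deduplication, might look like the following. The verdict states, function names, and blocking behavior are our assumptions about how such a system could be wired, not existing WhatsApp features.

```python
from enum import Enum

class Verdict(Enum):
    PENDING = "pending"
    GENUINE = "genuine"
    FAKE = "fake"

# Hypothetical moderation ledger keyed by the same content hash
# used for attachment deduplication.
moderation_ledger = {}  # hex digest -> Verdict

def report_message(digest: str) -> None:
    """A user taps 'report this message'; the hash joins the review queue."""
    moderation_ledger.setdefault(digest, Verdict.PENDING)

def record_verdict(digest: str, verdict: Verdict) -> None:
    """A human moderator (or a cached earlier decision) tags the content."""
    moderation_ledger[digest] = verdict

def on_forward_attempt(digest: str) -> str:
    """Decide what happens when someone tries to forward the attachment."""
    if moderation_ledger.get(digest) is Verdict.FAKE:
        return "block"   # or deliver with a 'likely fake' warning attached
    return "deliver"
```

In this sketch the server acts only on the hash of an item a user has chosen to report, never on message contents it can read, which is what keeps the approach compatible with end-to-end encryption.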

“Given that there is growing pressure on WhatsApp and similar encrypted communications platforms to prevent the spread of malicious misinformation, this would be a useful option to explore, provided it is possible to carefully circumscribe it to prevent misuse by oppressive states or other actors, and does not introduce any significant technical vulnerabilities,” Kesari said. “If such a solution can be implemented, it could help prevent disruptive regulations from being imposed by countries such as India in the areas of content regulation or encryption, which could result in the security of these messaging platforms being irretrievably compromised.”

Since WhatsApp has over a billion monthly active users, reports from even a million users could overwhelm its content moderation teams. WhatsApp can use metadata-based signals, such as the number of times a message has been shared or its velocity of propagation on WhatsApp’s network, to determine how quickly it should review a message.
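One plausible way to triage, sketched below, is to score reported items by how fast they are spreading; the specific signals and weights here are hypothetical.

```python
from datetime import datetime, timedelta

def review_priority(report_count: int, forward_count: int,
                    first_seen: datetime, now: datetime) -> float:
    """Rough score for how urgently a reported item should be reviewed.

    The signals and weights are illustrative; the real choices would be
    WhatsApp's to make.
    """
    hours_live = max((now - first_seen) / timedelta(hours=1), 1.0)
    velocity = forward_count / hours_live   # forwards per hour
    return velocity * (1 + report_count)    # boost heavily reported items
```

Under such a heuristic, an item forwarded thousands of times within a few hours of first appearing would jump ahead of older, slower-moving content in the moderation queue.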

Facebook, which owns WhatsApp, already performs fake news moderation on its own platform in the wake of several investigations into its role in US election meddling. It could use its existing infrastructure of 20,000 human content moderators to manage content moderation on WhatsApp too.

With great platform power comes great responsibility for managing content and privacy

WhatsApp could easily deny the plausibility of the mechanism we suggest by claiming that the company isn’t storing sufficient metadata to stop fake news. As argued above, we believe that this is not the case, even if WhatsApp’s apparent purpose for using the metadata until now has been to improve the user experience with faster file transfers, and to enable its parent company Facebook to suggest friends and show more relevant ads. In a recent paper analyzing WhatsApp’s encryption, Nidhi Rastogi, a computer security expert, and James Hendler, an originator of the Semantic Web, found that metadata such as phone numbers, timestamps, connection frequency, and location remain available to WhatsApp despite end-to-end encryption.

While WhatsApp may insist that it does not want to get into content moderation, or that building traceability of fake news messages would undermine end-to-end encryption, we suggest that the authorities ask it to do so going forward, given the increasing severity of the fake news problem. Our suggested approach ensures that WhatsApp manages the fake news problem itself while still respecting privacy, since the “reporting” of a message to WhatsApp is always user-initiated. We believe this is a better way to tackle fake news on an encrypted platform than building backdoors for governments.

Fake news often spreads like a deadly virus on WhatsApp. Given the threat it poses to the well-being of democracies, governments should work with platform providers such as WhatsApp when necessary to ensure that fake news is kept in check and stopped from spreading further, while ensuring that bad actors face consequences for their actions.

Both end-to-end encryption and mechanisms to thwart fake news can work together, and it is vital that platforms like WhatsApp become part of this crucial project.

Himanshu Gupta is Head of Growth at Thumbworks Technologies, a financial technology startup in India; Harsh Taneja is Assistant Professor in the College of Media (Advertising) at the University of Illinois Urbana-Champaign. Himanshu led India marketing and strategy for WeChat, Tencent’s hit messaging app in Asia, from 2013 to 2015. He has a keen interest in understanding the factors behind the rise and fall of digital platforms. Harsh’s research focuses on how social, commercial, and technological factors together shape digital media use. His recent work has examined global web usage and fake news audiences in the US. Himanshu and Harsh share a common interest in how emerging media technologies and societies interact, and in developing perspectives that combine insights from both industry and academia. They have previously collaborated on articles warning about the potential consequences of the rollout of Free Basics, a walled-garden approach to the internet managed by Facebook in markets with low internet penetration.