Alleged WhatsApp vulnerability risks users' privacy, prompting Meta employees to raise concerns for Palestinians, report says
"This undisclosed WhatsApp vulnerability lets Governments see who you message", 22 May 2024
In March, WhatsApp’s security team issued an internal warning to their colleagues: Despite the software’s powerful encryption, users remained vulnerable to a dangerous form of government surveillance. According to the previously unreported threat assessment obtained by The Intercept, the contents of conversations among the app’s 2 billion users remain secure. But government agencies, the engineers wrote, were “bypassing our encryption” to figure out which users communicate with each other, the membership of private groups, and perhaps even their locations.
The vulnerability is based on “traffic analysis,” a decades-old network-monitoring technique, and relies on surveying internet traffic at a massive national scale. The document makes clear that WhatsApp isn’t the only messaging platform susceptible. But it makes the case that WhatsApp’s owner, Meta, must quickly decide whether to prioritize the functionality of its chat app or the safety of a small but vulnerable segment of its users.
“WhatsApp should mitigate the ongoing exploitation of traffic analysis vulnerabilities that make it possible for nation states to determine who is talking to who,” the assessment urged. “Our at-risk users need robust and viable protections against traffic analysis.”
Against the backdrop of the ongoing war on Gaza, the threat warning raised a disturbing possibility for some Meta employees. WhatsApp personnel have speculated that Israel might be exploiting this vulnerability as part of its program to monitor Palestinians, at a time when digital surveillance is helping decide who to kill across the Gaza Strip, four employees told The Intercept.
“WhatsApp has no backdoors and we have no evidence of vulnerabilities in how WhatsApp works,” said Meta spokesperson Christina LoNigro.
Even though the contents of WhatsApp communications are unreadable, the assessment shows how governments can use their access to internet infrastructure to monitor when and where encrypted communications are occurring, like observing a mail carrier ferrying a sealed envelope. This view into national internet traffic is enough to make powerful inferences about which individuals are conversing with each other, even if the subjects of their conversations remain a mystery. “Even assuming WhatsApp’s encryption is unbreakable,” the assessment reads, “ongoing ‘collect and correlate’ attacks would still break our intended privacy model.”
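To illustrate the general idea behind a "collect and correlate" attack, the toy Python sketch below matches the timing of encrypted traffic bursts seen on different users' connections to guess who is messaging whom. It is purely illustrative: the data, thresholds, and function names are invented, it does not reflect WhatsApp's protocol, Meta's internal analysis, or any state actor's actual tooling, and real traffic analysis operates on far noisier data at national scale.

```python
# Illustrative sketch only: a toy "collect and correlate" analysis on hypothetical
# metadata. All names and data here are invented for demonstration purposes.
from itertools import product

# Hypothetical observations: for each monitored user, timestamps (in seconds) at
# which an encrypted message-sized burst was seen on their internet connection.
observed_bursts = {
    "user_a": [10.0, 42.5, 97.2, 130.1],
    "user_b": [10.4, 43.0, 97.8, 130.6],   # closely trails user_a: likely a conversation
    "user_c": [5.0, 60.0, 200.0],          # unrelated traffic pattern
}

def correlation_score(sender_times, receiver_times, window=1.0):
    """Fraction of the sender's bursts that are followed by a burst on the
    receiver's link within `window` seconds. Higher scores suggest the two
    endpoints are exchanging messages, even though the contents stay encrypted."""
    if not sender_times:
        return 0.0
    matched = sum(
        any(0 <= r - s <= window for r in receiver_times) for s in sender_times
    )
    return matched / len(sender_times)

# Score every ordered pair of monitored users and flag the strongest correlations.
for a, b in product(observed_bursts, repeat=2):
    if a == b:
        continue
    score = correlation_score(observed_bursts[a], observed_bursts[b])
    if score >= 0.75:
        print(f"{a} -> {b}: correlation {score:.2f} (possible conversation)")
```

Run on the sample data above, the sketch flags only the user_a to user_b pair, which captures the core concern in the assessment: an observer who never decrypts a single message can still map out who talks to whom from timing metadata alone.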
The WhatsApp threat assessment does not describe specific instances in which it knows this method has been deployed by state actors. But it cites extensive reporting by the New York Times and Amnesty International showing how countries around the world spy on dissidents' use of encrypted chat apps, including WhatsApp, using these same techniques.
It wasn’t until the April publication of an exposé about Israel’s data-centric approach to war that the WhatsApp threat assessment became a point of tension inside Meta.
A joint report by +972 Magazine and Local Call revealed last month that Israel’s army uses a software system called Lavender to automatically greenlight Palestinians in Gaza for assassination.
The report indicated WhatsApp usage is among the multitude of personal characteristics and digital behaviors the Israeli military uses to mark Palestinians for death, citing a book on AI targeting written by the current commander of Unit 8200, Israel’s equivalent of the NSA.
The Israeli military did not respond to a request for comment, but told The Guardian last month that it “does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist.”
It was only after the publication of the Lavender exposé and subsequent writing on the topic that a wider swath of Meta staff discovered the March WhatsApp threat assessment, said the four company sources, who spoke on the condition of anonymity, fearing retaliation by their employer. Reading how governments might be able to extract personally identifying metadata from WhatsApp’s encrypted conversations triggered deep concern that this same vulnerability could feed into Lavender or other Israeli military targeting systems.
Efforts to press Meta from within to divulge what it knows about the vulnerability and any potential use by Israel have been fruitless, the sources said, in line with what they describe as a broader pattern of internal censorship against expressions of sympathy or solidarity with Palestinians since the war began.
Meta employees concerned by the possibility their product is putting innocent people in Israeli military crosshairs, among other concerns related to the war, have organized under a campaign they’re calling Metamates 4 Ceasefire. The group has published an open letter signed by more than 80 named staff members. One of its demands is “an end to censorship — stop deleting employee’s words internally.”
Meta spokesperson Andy Stone told The Intercept any workplace discussion of the war is subject to the company’s general workplace conduct rules, and denied such speech has been singled out.
...
Asked what steps the company has taken to shore up the app against traffic analysis, Meta’s spokesperson told The Intercept, “We have a proven track record addressing issues we identify and have worked to hold bad actors accountable. We have the best engineers in the world proactively looking to further harden our systems against any future threats and we will continue to do so.”
The WhatsApp threat assessment notes that beefing up security comes at a cost for an app that prides itself on mass appeal. It will be difficult to better protect users against correlation attacks without making the app worse in other ways, the document explains. For a publicly traded giant like Meta, protecting at-risk users will collide with the company’s profit-driven mandate of making its software as accessible and widely used as possible.