

Article

1 February 2026

Author:
Gerrit De Vynck, Washington Post

Israel/OPT: Google 'breached' AI policies by helping Israeli military contractor analyse drone footage, former employee alleges in complaint

Allegations

"Google helped Israeli military contractor with AI, whistleblower alleges", 1 February 2026

Google breached its own policies that barred use of artificial intelligence for weapons or surveillance in 2024 by helping an Israeli military contractor analyze drone video footage, a former Google employee alleged in a confidential federal whistleblower complaint...

In July 2024, Google’s cloud-computing division received a customer support request from a person using an Israel Defense Forces email address...The name on the customer support request matches a publicly listed employee of Israeli tech firm CloudEx, which the complaint to the SEC alleges is an IDF contractor.

The request from the IDF email address asked for help making Google’s Gemini more reliable at identifying objects such as drones, armored vehicles and soldiers in aerial video footage...Staff in Google’s cloud unit responded by making suggestions and doing internal tests, the documents said.

At the time, Google’s public “AI principles” stated the company would not deploy AI technology in relation to weapons, or to surveillance “violating internationally accepted norms.” The whistleblower complaint alleges that the IDF contractor’s use contradicted both policies.

The complaint to the SEC alleges that Google broke securities laws: by contradicting its own publicly stated policies, which had also been included in federal filings, the company misled investors and regulators.

“Many of my projects at Google have gone through their internal AI ethics review process,” the former employee who filed the complaint said in a statement to The Post...“That process is robust and as employees we are regularly reminded of how important the company’s AI Principles are. But when it came to Israel and Gaza, the opposite was true. … I filed with the SEC because I felt the company needed to be held accountable for this double standard.”

A Google spokesperson contested the whistleblower’s allegations and said the company did not violate its AI principles because the account’s usage of its AI services was too small to be “meaningful.”

“We answered a general use question, as we would for any customer, with standard, help desk information, and did not provide any further technical assistance,” a statement provided by the spokesperson said. “The ticket originated from an account with less than a couple hundred dollars of monthly spend on AI products, which makes any meaningful usage of AI impossible.”

Google documentation for its “cloud video intelligence” service says that tracking objects in video is free for the first 1,000 minutes and then costs 15 cents per minute.
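As a rough illustration of the figures quoted above, the sketch below applies the documented object-tracking pricing (first 1,000 minutes free, then 15 cents per minute) to the spokesperson's "less than a couple hundred dollars of monthly spend" claim. The $200 budget and the assumption that the entire spend went to object tracking are illustrative only and do not come from the complaint or from Google.

```python
# Back-of-the-envelope estimate: how many minutes of object tracking a given
# monthly spend could cover, using the pricing cited in the article.
# Assumptions (not from the complaint): the whole spend goes to object
# tracking, and $200 stands in for "less than a couple hundred dollars".

FREE_MINUTES = 1_000       # first 1,000 minutes free, per Google's documentation
RATE_PER_MINUTE = 0.15     # $0.15 per minute thereafter

def tracking_cost(minutes: float) -> float:
    """Cost in USD for a given number of object-tracking minutes."""
    billable = max(0.0, minutes - FREE_MINUTES)
    return billable * RATE_PER_MINUTE

def minutes_for_budget(budget: float) -> float:
    """Maximum minutes of object tracking a monthly budget could buy."""
    return FREE_MINUTES + budget / RATE_PER_MINUTE

if __name__ == "__main__":
    budget = 200.0  # hypothetical upper bound on the account's monthly spend
    max_minutes = minutes_for_budget(budget)
    print(f"${budget:.0f} covers up to {max_minutes:,.0f} minutes "
          f"(~{max_minutes / 60:.0f} hours) of object tracking")
    print(f"1,500 minutes would cost ${tracking_cost(1_500):.2f}")
```

Under those assumptions, a sub-$200 monthly spend corresponds to at most roughly 2,300 minutes (about 39 hours) of tracked footage per month.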

A spokesperson for the SEC declined to comment...

Representatives for the IDF and CloudEx did not respond to requests for comment...

The complaint to the SEC claims that the use of Gemini described in the internal Google documents was related to Israel’s operations in Gaza without citing specific evidence...