
Article

2 Apr 2025

Author:
Mevlut Ozkan, Anadolu Agency

Google, Amazon, & Microsoft allegedly complicit in war crimes amid Israel's war in Gaza

Tags: Allegations

"Israel’s AI use in Gaza potentially normalizes civilian killings, obscures blame, exposes Big Tech complicity: Expert", 2 April 2025

Israel’s use of artificial intelligence (AI) in its ongoing assault on the Gaza Strip – aided by tech giants such as Google, Microsoft, and Amazon – is fueling concerns over the normalization of mass civilian casualties and raising serious questions about the complicity of these firms in potential war crimes, according to a leading AI expert.

Multiple reports have confirmed that Israel has deployed AI models such as Lavender, Gospel, and Where’s Daddy? to conduct mass surveillance, identify targets, and direct strikes against tens of thousands of individuals in Gaza – often in their own homes – all with minimal human oversight.

Rights groups and experts say these systems have played a critical role in Israel’s incessant and apparently indiscriminate attacks, which have laid waste to massive swaths of the besieged enclave and killed more than 50,000 Palestinians, mostly women and children.

“With the explicit use of AI models that we know lack precision accuracy, we are only going to see the normalization of mass civilian casualties, as we have kind of seen with Gaza,” Heidy Khlaaf, a former systems safety engineer at OpenAI, told Anadolu.

...

She stressed that Israel is using AI systems at “almost every stage” of its military operations – from intelligence collection and planning to final target selection.

The AI models, she explained, are trained on a variety of data sources, including satellite imagery, intercepted communications, drone surveillance, and the tracking of individuals or groups.

...

However, she emphasized that these predictions “do not necessarily reflect reality.”

Khlaaf pointed to recent revelations that commercial large language models (LLMs) like Google’s Gemini and OpenAI’s GPT-4 were used by the Israeli military to translate and transcribe intercepted Palestinian communications, automatically adding individuals to target lists “purely based on keywords.”

She noted that various investigations have confirmed that one of the Israeli military’s operational strategies involves generating large numbers of targets through AI without verifying their accuracy.

...

Automation without accountability

Khlaaf further emphasized that the increasing use of AI in war is setting a dangerous precedent, where accountability is obscured.

“AI is setting this precedent that normalizes inaccurate targeting practices, and because of the sheer scale and complexity of these models, it then becomes impossible to trace their decisions that can hold any individual or military accountable,” she asserted.

Even the so-called “human in the loop” safeguard, often promoted as a fail-safe against AI errors, appears insufficient in the case of the IDF, she added.

Investigations revealed that the humans overseeing Israel’s AI-generated targets operated under “very loose guidance,” casting doubt on whether efforts were even made to minimize civilian casualties, according to Khlaaf.

She warned that the current trajectory could enable militaries to shield themselves from war crime allegations by blaming AI for erroneous targeting.

...

‘Amazon, Google and Microsoft explicitly working with IDF’

Khlaaf confirmed that major US-based tech firms are directly involved in supplying AI and cloud computing capabilities to the Israeli military.

...

Microsoft’s involvement also deepened after October 2023, as Israel relied more on its cloud computing services, AI models, and technical support, she said.

Other companies, including Palantir, have also been linked to Israeli military operations, although details of their roles remain sparse, she added.

Crucially, Khlaaf argued that these partnerships went beyond the sale of general-purpose AI tools.

...

“Amazon, Google and Microsoft are explicitly working with the IDF to develop or allow them to use their technologies for intelligence and targeting, despite being aware of the risks of AI’s low accuracy rates, their failure modes, and how the IDF intends to use their systems for targeting.”

The implications suggest that tech companies were “complicit and directly enabling” Israeli actions, including those that “would be categorized or ruled as unlawful or that amount to war crimes,” Khlaaf said.

“If it has been determined that the IDF is committing specific war crimes, and the tech companies have guided them in committing those war crimes, then yes, that makes them very much complicit,” she added.

‘An enormous gap’

Khlaaf warned that the world is witnessing “the full embrace of automated targeting without due process or accountability,” a phenomenon backed by increasing investments from Israel, the US Department of Defense, and the EU.

“Our legal and technical frameworks are not prepared for this type of AI-based warfare,” she said.

Although existing international law, such as Article 36 of the 1977 Additional Protocol I to the Geneva Conventions, mandates legal reviews of new weapons, there are currently no binding international regulations specific to AI-driven military technologies.

Additionally, while the US maintains export controls on specific AI-enabling technologies such as GPUs and certain datasets, there is no “wholesale ban on AI military technology specifically,” she noted.

“There’s an enormous gap there that hasn’t really been addressed as of yet,” Khlaaf said.
