Article

26 Apr 2023

Author:
Matthew Gault, VICE

Palantir claims applying generative AI to warfare is "ethical" without addressing problems of LLMs

"Palantir Demos AI to Fight Wars But Says It Will Be Totally Ethical Don’t Worry About It" 26 April 2023

The company says its Artificial Intelligence Platform will integrate AI into military decision-making in a legal and ethical way. Palantir, the company co-founded by billionaire Peter Thiel, is launching the Palantir Artificial Intelligence Platform (AIP), software meant to run large language models like GPT-4 and its alternatives on private networks. In one of its pitch videos, Palantir demonstrates how a military might use AIP to fight a war. In the video, an operator uses a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communications.

In Palantir’s scenario, a “military operator responsible for monitoring activity within eastern Europe” receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be...

...Then the operator asks the robots what to do about it. “The operator uses AIP to generate three possible courses of action to target this enemy equipment,” the video said. “Next they use AIP to automatically send these options up the chain of command.” The options include attacking the tank with an F-16, long-range artillery, or Javelin missiles. According to the video, the AI will even let everyone know whether nearby troops have enough Javelins to conduct the mission, and will automate the jamming systems...

...While there is a “human in the loop” in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions. Drone warfare has already abstracted warfare, making it easier for people to kill from vast distances with the push of a button. The consequences of those systems are well documented. In Palantir’s vision of the military’s future, more systems would be automated and abstracted...What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. “LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way,” the pitch said...

...What the AIP pitch does not do is walk through how Palantir plans to deal with the various pernicious problems of LLMs, or what the consequences of those problems might be in a military context. AIP does not appear to offer solutions beyond the “frameworks” and “guardrails” it promises will make the use of military AI “ethical” and “legal.”
