Palantir response to the use of generative AI in conflict contexts
In response to our request for more information about Palantir's human rights due diligence practices surrounding the application of generative artificial intelligence (AI) to conflict settings, a Palantir spokesperson cited the company's Approach to AI Ethics:
...We assert an ethics of technology that applies to the full contexts of its use. These contexts each implicate their own situated set of domain-specific demands, functional expectations, and ethical obligations. This framing compels us to put AI in its appropriate place: as a tool among other tools of varying sophistication and inexorably embedded in a world of tangible actions and consequences.
The implications of this assertion are especially evident in our extensive engagement on questions surrounding the use of AI in military applications. In one blog post, we outline considerations undergirding our belief that “providers of technology involved in non-lethal and especially lethal use of force bear a responsibility to understand and confront the relevant ethical concerns and considerations surrounding the application of their products” and that “[t]his responsibility becomes all the more important the deeper technology becomes embedded in some of the most consequential decision-making processes.”
Palantir's full response is included in the PDF above.