

Story

2 May 2023

The application of generative AI to warfare raises human rights concerns

Palantir YouTube Demo

Since the launch of ChatGPT in November 2022, generative artificial intelligence (AI) tools have been applied across a variety of industries. The defense sector is no exception.

Defense companies are beginning to apply generative AI to their use of autonomous weapons systems, without clear explanations as to how salient human rights risks will be effectively mitigated. This could lead to situations where biased or inaccurate responses to generative AI queries are relied upon to make life-or-death decisions in times of conflict, without much clarity surrounding accountability or access to remediation. And what happens when autonomous weapons systems malfunction, are hacked or fall into the wrong hands?

As explained by the UN Working Group on Business and Human Rights, heightened due diligence is required for businesses operating in conflict-affected areas, and technology companies must consider a wide range of salient human rights risks in this context. The articles below highlight the concerns raised by civil society about the development and deployment of military and defense products powered by generative AI, including the need for greater transparency surrounding how these AI models are trained, how mistakes are corrected and how human rights violations during times of conflict will be prevented.

For example, Palantir states that the use of "large language models (LLMs) and algorithms must be controlled in the highly regulated and sensitive context" of war to ensure that they are used in a "legal and ethical way", but does not explain further how the company will work to address salient human rights risks including the right to life, the right to privacy and the right to information (namely, mitigating errors based on misinformation). These salient risks apply to the soldiers who are fighting on the ground, civilians caught up in the conflict and vulnerable groups that are being displaced.

The president of the International Committee of the Red Cross (ICRC) announced the following in April 2023:

"We are witnessing the rapid development of autonomous weapon systems, including those controlled by artificial intelligence, together with military interest in loosening the constraints on where – or against what – those weapons will strike. These developments led the International Committee of the Red Cross to call on governments to establish new international constraints that are clear and binding."

Palantir Technologies responded to our request for comment stating that "...[W]e outline considerations undergirding our belief that “providers of technology involved in non-lethal and especially lethal use of force bear a responsibility to understand and confront the relevant ethical concerns and considerations surrounding the application of their products” and that “[t]his responsibility becomes all the more important the deeper technology becomes embedded in some of the most consequential decision-making processes...” Click here to read the company's full response.

Company responses

Palantir Technologies – View response

Timeline