

Story

2 May 2023

The application of generative AI to warfare raises human rights concerns

Palantir YouTube Demo

Since the launch of ChatGPT in November 2022, generative artificial intelligence (AI) tools have been applied across a variety of industries. The defense sector is no exception.

Defense companies are beginning to apply generative AI to their use of autonomous weapons systems, without clear explanations as to how salient human rights risks will be effectively mitigated. This could lead to situations where biased or inaccurate responses to generative AI queries are relied upon to make life-or-death decisions in times of conflict, without much clarity surrounding accountability or access to remediation. And what happens when autonomous weapons systems malfunction, are hacked or fall into the wrong hands?

As the UN Working Group on Business and Human Rights has explained, heightened due diligence is required of businesses operating in conflict-affected areas, and there is a range of salient human rights risks that technology companies must consider in this context. The articles below highlight concerns raised by civil society about the development and deployment of military and defense products powered by generative AI, including the need for greater transparency around how these AI models are trained, how mistakes are corrected, and how human rights violations in times of conflict will be prevented.

For example, Palantir states that the use of "large language models (LLMs) and algorithms must be controlled in the highly regulated and sensitive context" of war to ensure that they are used in a "legal and ethical way", but does not explain further how the company will work to address salient human rights risks including the right to life, the right to privacy and the right to information (namely, mitigating errors based on misinformation). These salient risks apply to the soldiers who are fighting on the ground, civilians caught up in the conflict and vulnerable groups that are being displaced.

In April 2023, the president of the International Committee of the Red Cross (ICRC) stated:

"We are witnessing the rapid development of autonomous weapon systems, including those controlled by artificial intelligence, together with military interest in loosening the constraints on where – or against what – those weapons will strike. These developments led the International Committee of the Red Cross to call on governments to establish new international constraints that are clear and binding."

Palantir Technologies responded to our request for comment, stating: "...[W]e outline considerations undergirding our belief that 'providers of technology involved in non-lethal and especially lethal use of force bear a responsibility to understand and confront the relevant ethical concerns and considerations surrounding the application of their products' and that '[t]his responsibility becomes all the more important the deeper technology becomes embedded in some of the most consequential decision-making processes...'" The company's full response is available below.

Company responses

Palantir Technologies — View response

Timeline
