

Article

26 April 2023

Author:
Matthew Gault, VICE

Palantir claims applying generative AI to warfare is "ethical" without addressing problems of LLMs

"Palantir Demos AI to Fight Wars But Says It Will Be Totally Ethical Don’t Worry About It" 26 April 2023

The company says its Artificial Intelligence Platform will integrate AI into military decision-making in a legal and ethical way. Palantir, the company of billionaire Peter Thiel, is launching Palantir Artificial Intelligence Platform (AIP), software meant to run large language models like GPT-4 and alternatives on private networks. In one of its pitch videos, Palantir demos how a military might use AIP to fight a war. In the video, the operator uses a ChatGPT-style chatbot to order drone reconnaissance, generate several plans of attack, and organize the jamming of enemy communications.

In Palantir’s scenario, a “military operator responsible for monitoring activity within eastern Europe” receives an alert from AIP that an enemy is amassing military equipment near friendly forces. The operator then asks the chatbot to show them more details, gets a little more information, and then asks the AI to guess what the units might be...

...Then the operator asks the robots what to do about it. “The operator uses AIP to generate three possible courses of action to target this enemy equipment,” the video said. “Next they use AIP to automatically send these options up the chain of command.” The options include attacking the tank with an F-16, long range artillery, or Javelin missiles. According to the video, the AI will even let everyone know if nearby troops have enough Javelins to conduct the mission and automate the jamming systems...

...While there is a “human in the loop” in the AIP demo, they seem to do little more than ask the chatbot what to do and then approve its actions. Drone warfare has already abstracted warfare, making it easier for people to kill across vast distances with the push of a button. The consequences of those systems are well documented. In Palantir’s vision of the military’s future, more systems would be automated and abstracted...What Palantir is offering is the illusion of safety and control for the Pentagon as it begins to adopt AI. “LLMs and algorithms must be controlled in this highly regulated and sensitive context to ensure that they are used in a legal and ethical way,” the pitch said...

...What AIP does not do is walk through how it plans to deal with the various pernicious problems of LLMs and what the consequences might be in a military context. AIP does not appear to offer solutions to those problems beyond “frameworks” and “guardrails” it promises will make the use of military AI “ethical” and “legal.”

