

Article

3 June 2023

Author:
The Guardian staff

US colonel causes stir over potential challenges when using generative AI in conflict

"US colonel retracts comments on simulated drone attack ‘thought experiment’", 3 June 2023

A US air force colonel “misspoke” when he said at a Royal Aeronautical Society conference last month that a drone killed its operator in a simulated test because the pilot was attempting to override its mission, according to the society.

The confusion had started with the circulation of a blogpost from the society, in which it described a presentation by Col Tucker “Cinco” Hamilton, the chief of AI test and operations with the US air force and an experimental fighter test pilot, at the Future Combat Air and Space Capabilities Summit in London in May.

According to the blogpost, Hamilton had told the crowd that in a simulation to test a drone powered by artificial intelligence and trained and incentivized to kill its targets, an operator instructed the drone in some cases not to kill its targets and the drone had responded by killing the operator.

The comments sparked deep concern over the use of AI in weaponry and extensive conversations online. But the US air force on Thursday evening denied the test was conducted. The Royal Aeronautical Society responded in a statement on Friday that Hamilton had retracted his comments and had clarified that the “rogue AI drone simulation” was a hypothetical “thought experiment”...

...While the simulation Hamilton spoke of did not actually happen, Hamilton contends the “thought experiment” is still a worthwhile one to consider when navigating whether and how to use AI in weapons. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he said in a statement clarifying his original comments.

In a statement to Insider, the US air force spokesperson Ann Stefanek said the colonel’s comments were taken out of context. “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said.
