

Article

18 July 2018

Author:
Devin Coldewey, TechCrunch

Google introduces 'AI principles' that prohibit its use in weapons & human rights abuses

"Google's new 'AI principlies' forbid its use in weapons and human rights violations", June 7 2018

Google has published a set of (...) “AI principles” explaining the ways it will and won’t deploy its considerable clout in the domain. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” wrote CEO Sundar Pichai. The principles follow several months of low-level controversy surrounding Project Maven, a contract with the U.S. military that involved image analysis on drone footage. Some employees had opposed the work and even quit in protest, but (...) the issue was a microcosm for anxiety regarding AI at large and how it can and should be employed. The principles themselves are as follows:

  • Be socially beneficial
  • Avoid creating or reinforcing unfair bias
  • Be built and tested safely
  • Be accountable to people
  • Incorporate privacy design principles
  • Uphold high standards of scientific excellence
  • Be made available for uses that accord with these principles

Pichai also outlines what [Google] won't do. Specifically, [it] will not pursue or deploy AI in the following areas:

  • Technologies that cause or are likely to cause overall harm. (Subject to risk/benefit analysis.)
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.

