

Article

18 July 2018

Author:
Devin Coldewey, TechCrunch

Google introduces 'AI principles' that prohibit its use in weapons & human rights abuses

"Google's new 'AI principlies' forbid its use in weapons and human rights violations", June 7 2018

Google has published a set of (...) “AI principles” explaining the ways it will and won’t deploy its considerable clout in the domain. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” wrote CEO Sundar Pichai.

The principles follow several months of low-level controversy surrounding Project Maven, a contract with the U.S. military that involved image analysis on drone footage. Some employees had opposed the work and even quit in protest, but (...) the issue was a microcosm for anxiety regarding AI at large and how it can and should be employed.

The principles themselves are as follows:

  • Be socially beneficial.
  • Avoid creating or reinforcing unfair bias.
  • Be built and tested safely.
  • Be accountable to people.
  • Incorporate privacy design principles.
  • Uphold high standards of scientific excellence.
  • Be made available for uses that accord with these principles...

Pichai also outlines what [Google] won't do. Specifically, [it] will not pursue or deploy AI in the following areas:

  • Technologies that cause or are likely to cause overall harm. (Subject to risk/benefit analysis.)
  • Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  • Technologies that gather or use information for surveillance violating internationally accepted norms.
  • Technologies whose purpose contravenes widely accepted principles of international law and human rights.
