Article

21 Nov 2019

Author:
Mark Latonero, Wired

Commentary: AI for good can cause human rights harms without appropriate safeguards

"Opinion: AI for good is often bad," 18 Nov 2019

While AI for good programs often warrant genuine excitement, they should also invite increased scrutiny. Good intentions are not enough when it comes to deploying AI for those in greatest need... [they] can mask root causes and the risks of experimenting with AI on vulnerable people without appropriate safeguards... The last thing society needs is for engineers in enclaves like Silicon Valley to deploy AI tools for global problems they know little about... Researchers have found that facial recognition software, in particular, is often biased against people of color, especially those who are women. This has led to calls for a global moratorium on facial recognition and [led] cities like San Francisco to effectively ban it. AI systems built on limited training data create inaccurate predictive models that lead to unfair outcomes.

... [C]ompanies and their partners need to move from good intentions to accountable actions that mitigate risk... involv[ing] local people closest to the problem in the design process and conduct[ing] independent human rights assessments to determine if a project should move forward. [Also refers to Alphabet, Facebook, Google, Huawei, Intel, Palantir]