Facial analysis technology often recreates racial & gender bias, says expert

Article
28 January 2019

Commentary: Amazon should halt use of facial recognition technology for policing & govt. surveillance

Author: Joy Buolamwini, Medium

In this article... I... address... criticisms... made by those with an interest in keeping the use, abuse, and technical immaturity of AI systems in the dark... AI services the company provides to law enforcement and other customers can be abused regardless of accuracy... Among the most concerning uses of facial analysis technology are the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts... Because this powerful technology is being rapidly developed and adopted without oversight, the Algorithmic Justice League and the Center on Privacy & Technology launched the Safe Face Pledge. The pledge prohibits lethal use of any kind of facial analysis technology, including facial recognition, and aims to mitigate abuses.

As an expert on bias in facial analysis technology, I advise Amazon to

1) immediately halt the use of facial recognition and any other kinds of facial analysis technology in high-stakes contexts like policing and government surveillance

2) submit company models currently in use by customers to the National Institute of Standards and Technology benchmark

Read the full post here

Article
28 January 2019

Commentary: Thoughts on recent research paper and associated article on Amazon Rekognition

Author: Dr. Matt Wood, AWS Machine Learning Blog

A research paper and associated article published yesterday made claims about the accuracy of Amazon Rekognition... this research paper and article are misleading and draw false conclusions... The research paper seeks to “expose performance vulnerabilities in commercial facial recognition products,” but uses facial analysis as a proxy... [F]acial analysis and facial recognition are two separate tools; it is not possible to use facial analysis to match faces in the same way as you would in facial recognition... The research paper states that Amazon Rekognition provides low quality facial analysis results. This does not reflect our own extensive testing... The research paper implies that Amazon Rekognition is not improving, and that AWS is not interested in discussing issues around facial recognition. This is false. We are now on our fourth significant version update of Amazon Rekognition.

... We know that facial recognition technology, when used irresponsibly, has risks... It’s also why we clearly recommend in our documentation that facial recognition results should only be used in law enforcement when the results have confidence levels of at least 99%, and even then, only as one artifact of many in a human-driven decision. But we remain optimistic about the good this technology will provide in society.
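
To make the analysis-versus-recognition distinction concrete, here is a minimal sketch using the AWS SDK for Python (boto3): DetectFaces performs facial analysis (estimating attributes of a face), while CompareFaces performs facial recognition (measuring similarity between faces). The image filenames are placeholders, and the snippet is illustrative only, not a description of how any customer deploys the service.

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    # Facial ANALYSIS: DetectFaces estimates attributes (age range, perceived
    # gender, emotions) for faces in an image; it does not identify anyone.
    with open("photo.jpg", "rb") as f:  # placeholder image file
        analysis = rekognition.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],
        )
    for face in analysis["FaceDetails"]:
        print(face["AgeRange"], face["Gender"]["Value"])

    # Facial RECOGNITION: CompareFaces measures similarity between a source
    # face and faces in a target image -- a different operation with
    # different inputs and outputs.
    with open("source.jpg", "rb") as src, open("target.jpg", "rb") as tgt:
        recognition = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
        )
    for match in recognition["FaceMatches"]:
        print(match["Similarity"])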

Read the full post here

Article
27 July 2018

Amazon recommends 99% or higher confidence match when using facial recognition for law enforcement

Author: Dr. Matt Wood, Amazon blog

"Thoughts on machine learning accuracy," 27 Jul 2018

This blog shares some brief thoughts on machine learning accuracy and bias...  Using Rekognition, the ACLU built a face database using 25,000 publicly available arrest photos and then performed facial similarity searches on that database using public photos of all current members of Congress. They found 28 incorrect matches out of 535... Some thoughts on their claims:

  • The default confidence threshold for facial recognition APIs in Rekognition is 80%, which is good for a broad set of general use cases... but it’s not the right setting for public safety use cases... We recommend 99% for use cases where highly accurate face similarity matches are important...
  • In real-world public safety and law enforcement scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment..., where it can help find lost children, fight against human trafficking, or prevent crimes.

There’s a difference between using machine learning to identify a food object and using machine learning to determine whether a face match should warrant considering any law enforcement action. The latter is serious business and requires much higher confidence levels. We continue to recommend that customers do not use less than 99% confidence levels for law enforcement matches, and then to only use the matches as one input across others that make sense for each agency.
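
As an illustration of how that recommendation maps onto the API, the sketch below uses boto3’s SearchFacesByImage call against a hypothetical face collection with FaceMatchThreshold set to 99; the collection name and probe image are placeholders, and this is a sketch of the configuration being discussed rather than any agency’s actual setup.

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    COLLECTION_ID = "example-face-collection"  # hypothetical collection name

    with open("probe.jpg", "rb") as f:  # placeholder probe photo
        response = rekognition.search_faces_by_image(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": f.read()},
            # Left unset, the service applies its default threshold (80, per
            # the post above); AWS recommends 99 for law enforcement use cases.
            FaceMatchThreshold=99,
            MaxFaces=5,
        )

    # Per AWS's guidance, any match returned here is only one input for a
    # human reviewer, not a basis for action on its own.
    for match in response["FaceMatches"]:
        print(match["Face"]["FaceId"], round(match["Similarity"], 2))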

Read the full post here

Article
9 July 2018

Commentary: When the robot doesn't see dark skin

Author: Joy Buolamwini, The New York Times

When I was a college student using A.I.-powered facial detection software for a coding project, the robot I programmed couldn’t detect my dark-skinned face. I had to borrow my white roommate’s face to finish the assignment... My experience is a reminder that artificial intelligence, often heralded for its potential to change the world, can actually reinforce bias and exclusion... A.I. systems are shaped by the priorities and prejudices — conscious and unconscious — of the people who design them, a phenomenon that I refer to as “the coded gaze.” Research has shown that automated systems that are used to inform decisions about sentencing produce results that are biased against black people and that those used for selecting the targets of online advertising can discriminate based on race and gender.

... Canada has a federal statute governing the use of biometric data in the private sector. Companies like Facebook and Amazon must obtain informed consent to collect citizens’ unique face information. In the European Union, Article 9 of the General Data Protection Regulation requires express affirmative consent for collection of biometrics from E.U. citizens. Everyday people should support lawmakers, activists and public-interest technologists in demanding transparency, equity and accountability in the use of artificial intelligence that governs our lives.

Read the full post here

Company response
9 July 2018

HireVue response re hiring & algorithmic bias

Author: HireVue

The HireVue team has always been deeply committed to an ethical, rigorous, and ongoing process of testing for and preventing bias in HireVue Assessments models (or algorithms). We are aware that whenever AI algorithms are created, there is a potential for bias to be inherited from humans. This is a vitally important issue and technology vendors must meticulously work to prevent and test for bias before an AI-driven technology is ever put to use...

When HireVue creates an assessment model or algorithm, a primary focus of the development and testing process is testing for bias in input data that will be used during development of the algorithm or model. The HireVue team carefully tests for potential bias against specific groups before, during, and after the development of a model. No model is deployed until such testing has been done and any factors contributing to bias have been removed. Testing continues to be performed as part of an ongoing process of prevention. HireVue data scientists have created an industry-leading process in this emerging area of AI-driven technology, and have presented that process and other best practices to their colleagues at international conferences on artificial intelligence.
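
HireVue does not publish its testing code, but a generic illustration of one widely used check of this kind, the four-fifths adverse-impact ratio, looks roughly like the sketch below (hypothetical data and function names; not HireVue’s actual process): selection rates are computed per demographic group and any group whose rate falls below 80% of the highest group’s rate is flagged for review.

    from collections import defaultdict

    def adverse_impact_ratios(records, reference=None):
        """records: iterable of (group_label, selected_bool) pairs."""
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in records:
            totals[group] += 1
            selected[group] += int(was_selected)
        rates = {g: selected[g] / totals[g] for g in totals}
        baseline = max(rates.values()) if reference is None else rates[reference]
        return {g: rate / baseline for g, rate in rates.items()}

    # Hypothetical assessment outcomes: (demographic group, passed screen?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    for group, ratio in adverse_impact_ratios(sample).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"group {group}: impact ratio {ratio:.2f} ({flag})")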

Download the full document here