Facial analysis technology often recreates racial & gender bias, says researcher



Article
9 July 2018

Commentary: When the robot doesn't see dark skin

Author: Joy Buolamwini, The New York Times

When I was a college student using A.I.-powered facial detection software for a coding project, the robot I programmed couldn’t detect my dark-skinned face. I had to borrow my white roommate’s face to finish the assignment... My experience is a reminder that artificial intelligence, often heralded for its potential to change the world, can actually reinforce bias and exclusion... A.I. systems are shaped by the priorities and prejudices — conscious and unconscious — of the people who design them, a phenomenon that I refer to as “the coded gaze.” Research has shown that automated systems that are used to inform decisions about sentencing produce results that are biased against black people and that those used for selecting the targets of online advertising can discriminate based on race and gender.

... Canada has a federal statute governing the use of biometric data in the private sector. Companies like Facebook and Amazon must obtain informed consent to collect citizens’ unique face information. In the European Union, Article 9 of the General Data Protection Regulation requires express affirmative consent for collection of biometrics from E.U. citizens. Everyday people should support lawmakers, activists and public-interest technologists in demanding transparency, equity and accountability in the use of artificial intelligence that governs our lives.

Read the full post here

Company response
9 July 2018

HireVue response re hiring & algorithmic bias

Author: HireVue

The HireVue team has always been deeply committed to an ethical, rigorous, and ongoing process of testing for and preventing bias in HireVue Assessments models (or algorithms). We are aware that whenever AI algorithms are created, there is a potential for bias to be inherited from humans. This is a vitally important issue and technology vendors must meticulously work to prevent and test for bias before an AI-driven technology is ever put to use... 

When HireVue creates an assessment model or algorithm, a primary focus of the development and testing process is testing for bias in input data that will be used during development of the algorithm or model. The HireVue team carefully tests for potential bias against specific groups before, during, and after the development of a model. No model is deployed until such testing has been done and any factors contributing to bias have been removed. Testing continues to be performed as part of an ongoing process of prevention. HireVue data scientists have created an industry-leading process in this emerging area of AI-driven technology, and have presented that process and other best practices to their colleagues at international conferences on artificial intelligence.
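The response describes testing for bias before, during, and after model development without specifying the statistics involved. One widely used pre-deployment check in employment selection is the "four-fifths rule" (adverse impact ratio), under which a group whose selection rate falls below 80% of the highest-rated group's is flagged for review. The sketch below is a generic Python illustration of that check only; it is not HireVue's actual process, and the function name, group labels, threshold handling, and data are hypothetical.

    # Minimal sketch of a four-fifths-rule (adverse impact ratio) check.
    # Illustrative only: group labels, data, and names are assumptions.

    from collections import defaultdict

    def adverse_impact_ratios(outcomes, groups, reference_group):
        """Compute each group's selection rate relative to a reference group.

        outcomes:        list of 0/1 model decisions (1 = candidate advanced)
        groups:          list of group labels, parallel to outcomes
        reference_group: group whose selection rate serves as the baseline
        """
        selected = defaultdict(int)
        total = defaultdict(int)
        for outcome, group in zip(outcomes, groups):
            total[group] += 1
            selected[group] += outcome

        rates = {g: selected[g] / total[g] for g in total}
        baseline = rates[reference_group]
        # A ratio below 0.8 is the conventional red flag for adverse impact.
        return {g: rates[g] / baseline for g in rates}

    # Example with made-up data: flag any group whose selection rate is
    # below four-fifths (0.8) of the reference group's rate.
    outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    for group, ratio in adverse_impact_ratios(outcomes, groups, "A").items():
        status = "OK" if ratio >= 0.8 else "potential adverse impact"
        print(f"group {group}: ratio {ratio:.2f} ({status})")

In this toy example, group B's selection rate is 0.75 of group A's, so it would be flagged for further investigation; a real audit pipeline would apply such checks to model inputs and outputs at each development stage, as the response describes.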

Download the full document here