New study reveals racial bias in facial recognition software
Author: Steve Lohr, The New York Times, Published on: 15 February 2018
"Facial recognition is accurate, if you're a white guy," 9 February 2018
Facial recognition technology is improving... [b]ut the darker the skin, the more errors arise — up to nearly 35 percent for images of darker-skinned women, according to a new study... These disparate results... show how some of the biases in the real world can seep into artificial intelligence... The new study also raises broader questions of fairness and accountability in artificial intelligence... [F]acial recognition software is being deployed by companies in various ways, including to help target product pitches based on social media profile pictures. But companies are also experimenting with face identification and other A.I. technology as an ingredient in automated decisions with higher stakes, like hiring and lending.
... In her newly published paper, which will be presented at a conference this month, [young African-American computer scientist] Ms. Buolamwini studied the performance of three leading face recognition systems — by Microsoft, IBM and Megvii of China — by measuring how accurately they classified the gender of people with different skin tones. She found them all wanting... IBM said in a statement to her that the company had steadily improved its facial analysis software and was “deeply committed” to “unbiased” and “transparent” services. This month, the company said, it will roll out an improved service with a nearly 10-fold increase in accuracy on darker-skinned women. Microsoft said that it had “already taken steps to improve the accuracy of our facial recognition technology” and that it was investing in research “to recognize, understand and remove bias.”... Megvii, whose Face++ software is widely used for identification in online payment and ride-sharing services in China, did not reply to several requests for comment, Ms. Buolamwini said. [also refers to Google]
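The evaluation described above amounts to disaggregating a classifier's error rate by demographic group rather than reporting a single aggregate accuracy. The following is a minimal illustrative sketch of that kind of audit, not the study's actual code; the group labels and sample data are hypothetical.

```python
# Sketch: computing a gender classifier's error rate per group,
# the kind of disaggregated evaluation the study describes.
# All data below is hypothetical, for illustration only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples.
    Returns a dict mapping each group to its error rate."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, true_label, predicted in records:
        totals[group] += 1
        if predicted != true_label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical predictions from a face-analysis system:
sample = [
    ("lighter-skinned male",   "male",   "male"),
    ("lighter-skinned male",   "male",   "male"),
    ("darker-skinned female",  "female", "male"),    # misclassification
    ("darker-skinned female",  "female", "female"),
]

rates = error_rates_by_group(sample)
# A single overall accuracy (here 75%) would hide that all errors
# fall on one group — which is exactly what the study surfaced.
```

The point of the disaggregation is visible even in this toy example: the aggregate error rate looks modest, while one subgroup's error rate is far higher.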