Commentary: Racial literacy needed to avoid racial bias in AI technology
""Color-blindness" is a bad approach to solving bias in algorithms", 3 April 2019
To forge an ethical AI, we need to include racial literacy... In the tech world, that means considering race in the initial phase of product development and recognizing the way the broader social world seeps into technological design, infrastructure, and implementation to unintentionally reproduce racism. While some argue that the highest ethical standard in technology is to be color blind, neither research nor experience bears this out.
... [I]t’s not just people perpetuating racial bias. The algorithms that are at the center of AI reproduce existing inequalities, too...

[T]he tech industry has made attempts at addressing bias. This has mostly been through implicit bias trainings... but after two decades, the promise of implicit bias as a solution to racial bias has not paid off... [I]f people at all levels in the tech industry were to ask basic racial-literacy questions, then these unanticipated outcomes might be more predictable...

We need racial literacy for deciphering propaganda online, too... Increasing racial literacy will certainly help with what one former Facebook executive called the “black people problem.” “The widespread underrepresentation of faces of color in tech is already alarming,” says Mark S. Luckie, who recently left the social-media company, but not before issuing a public memo on the lack of attention to racial issues there. Luckie contends that Facebook is failing black employees and black users alike: black people are overrepresented among the platform’s users, yet make up only 4% of the company’s workforce.