Commentary: Both ethics and legislation are needed to address risks associated with AI
Author: Daniel Susser, Slate, Published on: 30 April 2019
"Ethics alone can't fix big tech," 17 April 2019
Researchers, policymakers, and activists are trying to figure out how to ensure that [AI] systems reflect and respect important human values... Such questions are at the heart of what is often called “A.I. ethics”... Technology companies are rushing to prove their ethics bona fides: Microsoft announced “AI Principles” to guide internal research and development, Salesforce hired a “chief ethical and humane use officer,” and Google rolled out—and then, facing intense criticism, dissolved—an ethics advisory board... Kate Crawford, co-founder of NYU’s AI Now Institute, argues that the fundamental problem with these approaches is their reliance on corporate self-policing and suggests moving toward external oversight instead. University of Washington professor Anna Lauren Hoffmann agrees but points out that there are plenty of people inside the big tech companies organizing to pressure their employers to build technology for good. She argues we ought to work to empower them.
... Ethics can provide blueprints for good tech, but it can’t implement them... Unlike ethics, law and policy are backed by the coercive force of the state. Taken together, this means we need new laws to place hard constraints on how A.I. is used and policy to drive more flexible external oversight... The purpose of ethics boards—as well as chief ethics officers, internal “AI principles,” and so on—should be to raise awareness... drive self-criticism and... serve as a conscience for the tech industry.