Shareholders & civil society groups urge Amazon to halt sale of facial recognition software to law enforcement agencies

13 August 2019

USA: Whole Foods employees demand Amazon cease business deals providing ICE with controversial software and technical support

"Whole Foods employees demand Amazon break all ties with ICE and Palantir", 12 August 2019

Pressure is mounting against Amazon’s continued involvement with US Immigration and Customs Enforcement (ICE), with Whole Foods employees now demanding the company cease ties with the agency through its business dealings with the controversial government contractor Palantir...

...Whole Foods employees say they will continue to combat the company by leaking information and attempting to undermine policies and business dealings that lead to the deportation of undocumented people and other rights abuses...

...Although Amazon does not provide services directly to ICE, it does provide cloud computing resources to companies like Palantir that do provide ICE with software and other forms of technical support. Amazon has also tried selling its controversial facial recognition platform to ICE, as noted in a letter penned by Amazon employees last year demanding that CEO Jeff Bezos stop selling and attempting to sell the platform to law enforcement.

Amazon has...provided cloud-based database support to numerous agencies related to immigration enforcement, according to a report...that detailed the tech companies connected to the Trump administration’s immigration crackdown...

...[The Verge] reached out to Amazon for comment and will update this story when we hear back.

Read the full post here

22 May 2019

Shareholders vote against proposals seeking to halt Amazon's sale of its facial recognition technology to govt. agencies

Author: Leo Kelion, BBC News

"Amazon heads off facial recognition rebellion", 22 May 2019

Shareholders seeking to halt Amazon's sale of its facial recognition technology to US police forces have been defeated in two votes that sought to pressure the company into a rethink. Civil rights campaigners had said it was "perhaps the most dangerous surveillance technology ever developed". But investors rejected the proposals at the company's annual general meeting...The first vote had proposed that the company should stop offering its Rekognition system to government agencies. The second had called on it to commission an independent study into whether the tech threatened people's civil rights...Amazon has yet to comment...[A study] said that Rekognition had a 0% error rate at classifying lighter-skinned males as such within a test, but a 31.4% error rate at categorising darker-skinned females. Amazon has disputed the findings saying that the researchers had used "an outdated version" of its tool and that its own checks had found "no difference" in gender-classification across ethnicities... opposition to Rekognition has also been voiced by civil liberties groups and hundreds of Amazon's own workers...But one of the directors from Amazon Web Services - the division responsible - had told the BBC that it should be up to politicians to decide if restrictions should be put in place.

Read the full post here

21 May 2019

Amazon faces investor pressure over facial recognition

Author: Natasha Singer, The New York Times

Facial recognition software is coming under increasing scrutiny from civil liberties groups and lawmakers... [Amazon] [s]hareholders have introduced two proposals on facial recognition for a vote. One asks the company to prohibit sales of its facial recognition system, called Amazon Rekognition, to government agencies, unless its board concludes that the technology does not facilitate human rights violations. The other asks the company to commission an independent report examining the extent to which Rekognition may threaten civil, human and privacy rights, and the company’s finances... The proposals are nonbinding, meaning they do not require the company to take action, even if they receive a majority vote. 

... The Amazon shareholder proposals also highlight the rise of activism among investors in the country’s top tech companies. Last year, investors successfully pressured Apple to create stronger parental controls for iPhones... In the coming weeks, shareholders of Facebook, Twitter and Alphabet will vote on issues related to election interference, hate speech, disinformation and creating censored services for China... In a letter to the Securities and Exchange Commission... [Amazon] said that it was not aware of any reported misuse of Rekognition by law enforcement customers. It also argued that the technology did not present a financial risk... “The proposals raise only conjecture and speculation about possible risks that might arise” from clients misusing the technology, lawyers for Amazon wrote in the letter.

Read the full post here

29 April 2019

Shareholders to present slate of proposals at Amazon annual meeting focused on human rights & environmental issues

Author: Interfaith Center on Corporate Responsibility

"Shareholders to present slate of proposals at Amazon annual meeting," 25 April 2019

[I]nvestors... announced that they will have a total of nine proposals on the proxy ballot at Amazon’s... annual meeting... on a variety of environmental, social and governance concerns.... Jared Fernandez of Green Century Capital Management [said]... “While Amazon is now neck-and-neck with Apple as the most valuable public company in the world, its lack of attention to a number of broad environmental, social and governance risks poses legitimate questions about the continued success and resiliency of the company.” 

...investors say the resolutions provide ample evidence that Amazon does not have the appropriate risk mitigation structures in place... One of the concerns articulated in the group of proposals relates to the risks of human/civil rights abuses resulting from the sale of Rekognition, Amazon’s facial recognition technology, as well as the company’s failure to appropriately monitor the sale of offensive, racist products through its e-commerce platform... Investors sent a joint letter to Amazon in November 2018 which underscored concerns and requested meaningful dialogue with management. The letter was endorsed by 114 investors representing over $2.6 trillion in AUM... the letter stated: "In our experience, Amazon has purposefully avoided constructive and substantive dialogue with its shareholders, often necessitating the filing of shareholder resolutions..." Also striking was Amazon’s decision to petition the SEC to omit many of the proposals from its proxy.

Read the full post here

4 April 2019

A win for shareholders in effort to halt sales of Amazon's allegedly racially biased surveillance technology

Author: Open MIC

"A win for shareholders in effort to halt sales of Amazon's racially biased surveillance technology"

[T]he Securities and Exchange Commission (SEC) ruled late yesterday that Amazon must give shareholders an opportunity to consider and vote on two separate shareholder resolutions that address major business risks posed by the sale of Amazon's facial recognition technology to government agencies. The SEC’s ruling comes amidst mounting criticism of the Amazon technology, “Rekognition,” as racially biased... The two shareholder resolutions, which were filed with Amazon in December, focus on the business risks to the company from sales of Rekognition. One resolution asks Amazon to halt sales of Rekognition to government unless the board “concludes the technology does not pose actual or potential civil and human rights risk;” the other resolution requests the board commission an independent study of Rekognition regarding the extent to which the technology may “endanger, threaten, or violate” privacy or civil rights. The SEC’s decision means the shareholder resolutions will be voted on at the Company’s annual meeting... The shareholder resolution echoes concerns of over 70 civil rights and civil liberties groups, hundreds of Amazon’s own employees, and 150,000 people who signed a petition — all seeking to end sales of Rekognition to government agencies.

Read the full post here

3 April 2019

A.I. experts question Amazon's facial recognition technology

Author: Cade Metz & Natasha Singer, The New York Times

At least 25 prominent artificial-intelligence researchers, including experts at Google, Facebook, Microsoft and a recent winner of the prestigious Turing Award, have signed a letter calling on Amazon to stop selling its facial-recognition technology to law enforcement agencies because it is biased against women and people of color... Some researchers — and even some companies — are arguing the technology cannot be properly controlled without government regulation... “There are no laws or required standards to ensure that Rekognition is used in a manner that does not infringe on civil liberties,” the A.I. researchers wrote. “We call on Amazon to stop selling Rekognition to law enforcement.”

... An Amazon spokeswoman responded, saying that the company had updated its Rekognition service since the M.I.T. researchers completed their study and that it had found no differences in error rates by gender and race when running similar tests... Microsoft, by contrast, improved the accuracy of its facial recognition last year after an earlier M.I.T. study reported that its system was better at identifying the gender of lighter-skinned men in a photo database than darker-skinned women... Amazon has said that it has not received any reports of Rekognition misuse by law enforcement, and that the company’s acceptable use policy prohibits customers from using its services in ways that violate laws.

Read the full post here

28 January 2019

Commentary: Amazon should halt use of facial recognition technology for policing & govt. surveillance

Author: Joy Buolamwini, Medium

In this article... I... address.... criticisms.... made by those with interest in keeping the use, abuse, and technical immaturity of AI systems in the dark... AI services the company provides to law enforcement and other customers can be abused regardless of accuracy... Among the most concerning uses of facial analysis technology involve the bolstering of mass surveillance, the weaponization of AI, and harmful discrimination in law enforcement contexts...  Because this powerful technology is being rapidly developed and adopted without oversight, the Algorithmic Justice League and the Center on Privacy & Technology launched the Safe Face Pledge. The pledge prohibits lethal use of any kind of facial analysis technology including facial recognition and aims to mitigate abuses. 

As an expert on bias in facial analysis technology, I advise Amazon to

1) immediately halt the use of facial recognition and any other kinds of facial analysis technology in high-stakes contexts like policing and government surveillance

2) submit company models currently in use by customers to the National Institute of Standards and Technology benchmark

Read the full post here

28 January 2019

Commentary: Thoughts on recent research paper and associated article on Amazon Rekognition

Author: Dr. Matt Wood, AWS Machine Learning Blog

A research paper and associated article published yesterday made claims about the accuracy of Amazon Rekognition... this research paper and article are misleading and draw false conclusions... The research paper seeks to “expose performance vulnerabilities in commercial facial recognition products,” but uses facial analysis as a proxy... [F]acial analysis and facial recognition are two separate tools; it is not possible to use facial analysis to match faces in the same way as you would in facial recognition... The research paper states that Amazon Rekognition provides low quality facial analysis results. This does not reflect our own extensive testing... The research paper implies that Amazon Rekognition is not improving, and that AWS is not interested in discussing issues around facial recognition. This is false. We are now on our fourth significant version update of Amazon Rekognition.

... We know that facial recognition technology, when used irresponsibly, has risks... It’s also why we clearly recommend in our documentation that facial recognition results should only be used in law enforcement when the results have confidence levels of at least 99%, and even then, only as one artifact of many in a human-driven decision. But, we remain optimistic about the good this technology will provide in society.

Read the full post here

3 December 2018

Commentary: Facial recognition can be positive but companies need to ensure human rights are upheld

Author: Amira Dhalla, Medium

"Amazon's facial recognition technology scares me. You should be scared too."

When used appropriately, the broader society can... benefit from... positive uses of facial recognition... [F]acial recognition can also be harmful as it infringes on individuals’ human rights by gathering and using their data without consent... Government can also use these tools to surveil people... [which] poses a threat to our most vulnerable communities and unjustly targets particular individuals such as people of color... [T]here are currently no rules to govern how this tool is to be used by private or public sectors.

... Amazon has focused on the positive impact of Rekognition but what makes it different from other technology companies is the fact that since the product was conceived, they have been marketing and selling it to law enforcement as a way to track and identify criminal suspects... ACLU formed a coalition of civil rights groups calling on Amazon to stop selling the program to law enforcement... The future of facial recognition can be positive but we need the organizations creating these tools to lead the discussion on how we can create these tools while upholding the rights of all citizens... Amazon has a responsibility to do what it can to ensure that innovative tools are used in ethical ways... We need to pressure Amazon to protect the rights to privacy and freedom of citizens in the United States by suspending the sales of Rekognition to law enforcement and instead starting a dialogue on how we use tools like facial recognition in humane and secure ways.

Read the full post here

27 July 2018

Amazon recommends 99% or higher confidence match when using facial recognition for law enforcement

Author: Dr. Matt Wood, Amazon blog

"Thoughts on machine learning accuracy," 27 July 2018

This blog shares some brief thoughts on machine learning accuracy and bias...  Using Rekognition, the ACLU built a face database using 25,000 publicly available arrest photos and then performed facial similarity searches on that database using public photos of all current members of Congress. They found 28 incorrect matches out of 535... Some thoughts on their claims:

  • The default confidence threshold for facial recognition APIs in Rekognition is 80%, which is good for a broad set of general use cases... but it’s not the right setting for public safety use cases... We recommend 99% for use cases where highly accurate face similarity matches are important...
  • In real-world public safety and law enforcement scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment..., where it can help find lost children, fight against human trafficking, or prevent crimes. 

There’s a difference between using machine learning to identify a food object and using machine learning to determine whether a face match should warrant considering any law enforcement action. The latter is serious business and requires much higher confidence levels. We continue to recommend that customers do not use less than 99% confidence levels for law enforcement matches, and then to only use the matches as one input across others that make sense for each agency.
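To illustrate the thresholding being described, here is a minimal sketch in Python. The response shape mirrors the public AWS Rekognition CompareFaces API (a `FaceMatches` list whose entries carry a `Similarity` score); the sample data and names like `candidate-a` are illustrative, not real output.

```python
# Sketch: applying the 99% confidence recommendation to CompareFaces-style
# results. Sample data is illustrative, not real Rekognition output.

DEFAULT_THRESHOLD = 80.0           # Rekognition's default for general use cases
LAW_ENFORCEMENT_THRESHOLD = 99.0   # Amazon's recommended minimum for policing

def filter_matches(face_matches, threshold=LAW_ENFORCEMENT_THRESHOLD):
    """Keep only candidate faces at or above the similarity threshold."""
    return [m for m in face_matches if m["Similarity"] >= threshold]

# Illustrative fragment shaped like a CompareFaces response.
sample_response = {
    "FaceMatches": [
        {"Similarity": 99.4, "Face": {"FaceId": "candidate-a"}},
        {"Similarity": 85.2, "Face": {"FaceId": "candidate-b"}},
    ]
}

high_confidence = filter_matches(sample_response["FaceMatches"])
# candidate-b clears the 80% default but not the 99% recommendation,
# so only candidate-a would remain as one input for human review.
```

Under this sketch, a match that passes the general-purpose default is still excluded from law-enforcement use, which is the distinction the post draws between the two settings.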

Read the full post here