USA: Investors file resolutions with companies at risk for human rights violations due to govt. contracts related to immigration
In December 2018, investor members of the Interfaith Center on Corporate Responsibility launched a corporate campaign focused on a group of six companies across the private prison, e-commerce, banking and defense sectors, deemed at risk for human rights violations as a result of government contracts that support President Trump’s “zero-tolerance” immigration policies. Investors have filed shareholder resolutions at the following companies:
- CoreCivic (allegations of forced labor and failure to provide medical assistance to detainees)
- GEO Group (issues relating to safety, detainee rights, and medical care)
- We invited CoreCivic to respond; response provided
- We invited GEO Group to respond; it did not
- We previously invited CoreCivic and GEO Group to respond to allegations that they were profiting from the detention of immigrant and asylum-seeking families and the separation of families at the US-Mexico border. More information and responses from both companies are available here.
- Amazon (sale of facial recognition technology to government agencies including ICE and state law enforcement)
- We invited Amazon to respond; response provided. More information about allegations related to its facial recognition technology is available here.
- SunTrust (funding for MVM, Inc. and Comprehensive Health Services, which are directly contracted with U.S. government agencies carrying out the “zero tolerance” immigration policy)
- Wells Fargo (providing revolving credit and term loans to GEO; letters of credit on CoreCivic’s behalf; and underwriting bonds for both GEO and CoreCivic)
- We invited SunTrust to respond; it did not
- We invited Wells Fargo to respond; response provided
- We previously invited SunTrust and Wells Fargo to respond to allegations that bank and investor financing for CoreCivic & GEO Group enables these two companies to profit from the Trump Administration's harsh immigration policies. Wells Fargo responded; SunTrust declined to respond. More information is available here.
- We previously invited MVM, Inc. and Comprehensive Health Services to respond to allegations of profiting from the detention of immigrant and asylum-seeking families. Both companies responded. More information is available here.
- Northrop Grumman (racial bias, privacy and surveillance via Homeland Advanced Recognition Technology (HART) database developed for the Department of Homeland Security)
- We invited Northrop Grumman to respond; response provided
All components of this story
Author: Amazon shareholders
...[S]hareholders are concerned Amazon’s facial recognition technology (“Rekognition”) poses risk to civil and human rights and shareholder value... Shareholders have little evidence our Company is effectively restricting the use of Rekognition to protect privacy and civil rights... [S]hareholders request that the Board of Directors prohibit sales of facial recognition technology to government agencies unless the Board concludes, after an evaluation using independent evidence, that the technology does not cause or contribute to actual or potential violations of civil and human rights.
Author: Northrop Grumman shareholders
While Northrop Grumman adopted a Human Rights Policy in 2013, it does not disclose how the policy is operationalized to reduce the risks that the company may cause or contribute to adverse human rights impacts. Investors are unable to assess how Northrop Grumman embeds respect for human rights into the process for vetting and implementing contracts with the U.S. Government or foreign governments, or the effectiveness of any systems which may be in place to prevent or mitigate human rights risks... Shareholders request that the Board of Directors prepare a report, at reasonable cost and omitting proprietary information, on Northrop Grumman’s management systems and processes to implement its Human Rights Policy.
Author: Wells Fargo shareholders
[S]hareholders of Wells Fargo & Company... urge the Board of Directors (the “Board”) to report to shareholders by December 31, 2019 on how WFC is identifying and addressing human rights risks to WFC related to the Trump Administration’s aggressive immigration enforcement policy, which aims to prosecute all persons who enter or attempt to enter the United States (U.S.), including the detention without parole of asylum-seekers and the separation of minor children from parents accused of entering the U.S. illegally... WFC has come under fire for its relationships with GEO Group and CoreCivic, private prison companies that contract with ICE and benefit from more aggressive immigration enforcement... WFC has played an important role in financing GEO and CoreCivic’s businesses: WFC is co-syndication agent for the bank group providing revolving credit and term loans to GEO; has issued letters of credit on CoreCivic’s behalf; and has underwritten bonds for both GEO and CoreCivic.
Author: Wells Fargo
We are engaging with the SEIU about the proposal and remain committed to respecting human rights throughout our operations and our products and services. For further comment, we recommend reaching out directly to SEIU.
Author: Dr. Matt Wood, Amazon blog
"Thoughts on machine learning accuracy," 27 Jul 2018
This blog shares some brief thoughts on machine learning accuracy and bias... Using Rekognition, the ACLU built a face database using 25,000 publicly available arrest photos and then performed facial similarity searches on that database using public photos of all current members of Congress. They found 28 incorrect matches out of 535... Some thoughts on their claims:
- The default confidence threshold for facial recognition APIs in Rekognition is 80%, which is good for a broad set of general use cases... but it’s not the right setting for public safety use cases... We recommend 99% for use cases where highly accurate face similarity matches are important...
- In real-world public safety and law enforcement scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment..., where it can help find lost children, fight against human trafficking, or prevent crimes.
There’s a difference between using machine learning to identify a food object and using machine learning to determine whether a face match should warrant considering any law enforcement action. The latter is serious business and requires much higher confidence levels. We continue to recommend that customers do not use less than 99% confidence levels for law enforcement matches, and then to only use the matches as one input across others that make sense for each agency.
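To make the threshold guidance above concrete, here is an illustrative sketch (not part of Amazon's blog post) of how a caller would pass the recommended 99% threshold instead of relying on the 80% default. The parameter names follow the real boto3 `search_faces_by_image` API; the collection ID, image bytes, and helper function are placeholders for illustration.

```python
def build_search_request(collection_id, image_bytes, threshold=99):
    """Return kwargs for rekognition.search_faces_by_image with an explicit
    FaceMatchThreshold. If the caller omits the parameter entirely, the
    service applies its default of 80, which Amazon says is too low for
    law enforcement use cases."""
    return {
        "CollectionId": collection_id,       # placeholder collection name
        "Image": {"Bytes": image_bytes},     # raw image bytes to search with
        "FaceMatchThreshold": threshold,     # 99 per Amazon's recommendation
        "MaxFaces": 5,                       # cap candidate matches for human review
    }

# A real call would then look like:
#   import boto3
#   client = boto3.client("rekognition")
#   resp = client.search_faces_by_image(**build_search_request("my-collection", img))
```

The point of the sketch is that the threshold is an opt-in parameter: a developer who never sets `FaceMatchThreshold` silently gets the looser 80% default, which is the gap the blog post is addressing.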
Amazon general manager of artificial intelligence highlights positive uses of Rekognition & acceptable use policy
Author: Dr. Matt Wood, Amazon Web Services Machine Learning Blog
"Some quick thoughts on the public discussion regarding facial recognition and Amazon Rekognition this past week," 1 June 2018
Amazon Rekognition... makes use of new technologies – such as deep learning – and puts them in the hands of developers in an easy-to-use, low-cost way... [W]e have seen customers use the image and video analysis capabilities of Amazon Rekognition in ways that materially benefit both society (e.g. preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families...), and organizations (enhancing security through multi-factor authentication)... Amazon Web Services (AWS) is not the only provider of services like these, and we remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement. There has been no reported law enforcement abuse of Amazon Rekognition. We also have an Acceptable Use Policy (“AUP”) that prohibits the use of our services for “[a]ny activities that are illegal, that violate the rights of others, or that may be harmful to others.” This includes violating anybody’s Constitutional rights relating to the 4th, 5th, and 14th Amendments – essentially any kind of illegal discrimination or violation of due process or privacy right. Customers in violation of our AUP are prevented from using our services.
...There have always been and will always be risks with new technology capabilities... AWS takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future.