Technology and Human Rights: Artificial Intelligence

There is a possible future in which artificial intelligence drives inequality, inadvertently divides communities, and is even actively used to deny human rights. But there is an alternative future in which the ability of AI to propose solutions to increasingly complex problems is the source of great economic growth, shared prosperity, and the fulfilment of all human rights. This is not a spectator sport. Ultimately it will be the choices of businesses, governments, and individuals that determine which path humanity takes.


Olly Buston, CEO, Future Advocacy

The field of artificial intelligence (AI), in which machines perform tasks that would require intelligence if done by humans, is evolving rapidly and is poised to grow significantly over the coming decade. Proponents believe that the further development of AI will create new opportunities in health, education, and transportation, generate wealth and strengthen economies, and help solve pressing social issues. Ongoing initiatives are exploring the use of machine learning in human rights investigations, in efforts to increase energy efficiency and reduce pollution, and in addressing food insecurity, to name a few examples.

On the other hand, replacing human intelligence with machines could fundamentally change the nature of work, resulting in mass job losses and increasing income inequality. Algorithm-based decision-making by companies could also perpetuate human bias and produce discriminatory outcomes, as it already has in some cases. The significant expansion of data collection and analysis may likewise concentrate power in the companies that own this data and threaten the right to privacy.

The rapid growth of AI also raises important questions about whether our current policies, legal systems, business due diligence practices, and methods to protect rights are fit for purpose. This section will feature the latest research and various perspectives about the implications of AI for human rights as this field continues to evolve.


Related stories and components

Story
19 May 2020

USA: Amazon allegedly paid nearly $10 million to blacklisted Dahua Technology for thermal imaging cameras to monitor employee temperatures

See full story

Article
28 April 2020

Ranking Digital Rights calls for input on draft 2020 Corporate Accountability Index methodology

Author: Jan Rydzak, Ranking Digital Rights

"RDR opens public consultation on draft methodology for 2020 RDR Index," 15 April 2020...

Read more

Article
28 April 2020

Unchecked use of computer vision by police carries high risks of discrimination

Author: Nicolas Kayser-Bril, AlgorithmWatch

Eleven local police forces in Europe use computer vision to automatically analyse images from surveillance cameras. The risks of discrimination run high, but authorities ignore them... This approach requires that software developers feed large amounts of...

Read more

Article
23 April 2020

Author: Clarisse Treilles, ZDNet

"Companies commit to inclusive artificial intelligence," 21 April 2020...

Read more

Story
21 April 2020

Robust human rights due diligence needed to address human rights risks & impacts related to data-driven business conduct, according to new study

The German Institute for Human Rights and Institute for Business Ethics at the University of St. Gallen released a study in April 2020 that explores business and human rights in the data economy. The study maps challenges for human rights protection...

See full story

Article
17 April 2020

Commentary: Increased surveillance is not the answer to stop the spread of COVID-19 in refugee camps

Author: Petra Molnar & Diego Naranjo, The New York Times

"Surveillance won't stop the coronavirus," 15 April 2020...

Read more

Article
13 April 2020

Author: Vuiz

"Algorithms and automated processing: Council of Europe guidelines," 8 April 2020...

Read more

Story
13 April 2020

Tech companies' surveillance-based business models raise human rights concerns & threaten democracy, says Ranking Digital Rights' report

See full story

Article
7 April 2020

AlgorithmWatch identifies racial bias in Google Vision Cloud algorithm; Google apologises

Author: Nicolas Kayser-Bril, AlgorithmWatch

"Google apologizes after its Vision AI produced racist results," 7 April 2020...

Read more

Article
23 March 2020

Govt. use of surveillance tools to trace movements of coronavirus patients raises privacy concerns

Author: Natasha Singer & Choe Sang-Hun, The New York Times

"As Coronavirus surveillance escalates, personal privacy plummets," 23 March 2020...

Read more