UN High Commissioner for Human Rights calls for attentive governance of AI risks, focusing on people's rights
"Türk calls for attentive governance of artificial intelligence risks, focusing on people’s rights", 30 November 2023
The emergence of generative AI presents a paradox of progress. On one hand, it could revolutionize the way we live, work, and solve some of our most complex challenges. On the other, it poses profound risks that could undermine human dignity and rights. This makes it crucial to ensure that human rights are embedded at the core of the entire lifecycle of AI technologies, with a concerted effort by Governments and corporations to establish effective risk management frameworks and operational guardrails.
I am increasingly alarmed about the capacity of digital technologies to reshape societies and influence global politics. ...It is essential that we stand as an unassailable pillar against disinformation and manipulation.
There must be a comprehensive evaluation of the multiple fields in which AI could have transformative impact – including potential threats to non-discrimination, political participation, access to public services, and the erosion of civil liberties. This is why I am pleased to see the release today of the B-Tech 'Taxonomy of Generative AI Human Rights Harms', which can contribute to broader understanding of current and emerging risks.
Above all, generative AI needs governance. And that governance must be based on human rights. It must also advance responsible business conduct, and accountability for harms that corporations contribute to.
The UN Guiding Principles on Business and Human Rights, and the OECD Guidelines for Responsible Business Conduct – both of which are widely in use – offer robust guardrails for States and corporations, and set the stage for responsible development of AI. But the UN Guiding Principles and OECD Guidelines alone will not be sufficient to address the challenges posed by AI. Potential misuse of AI technologies by States, or by criminal gangs, requires a range of legal, regulatory, and multilateral frameworks. All of these need to be anchored in international human rights norms, including the standards that have already established the human rights responsibilities of businesses and investors.
Currently, we are seeing wide recognition of the need for AI governance – but the multiple policy initiatives underway are largely inconsistent, and they frequently fail to give human rights the appropriate emphasis. This risks producing a fragmented regulatory landscape, with varying definitions of ethical conduct and acceptable risk.
A structured initiative like the B-Tech Generative AI Project can provide a clearer understanding of AI's potential human rights impacts, and clarity about the action needed from States and companies – lighting the road to more coherent governance.
Generative AI is not a local or national phenomenon. It will have an impact on everyone – and it demands a global, collaborative approach. We need to make sure that protecting people's rights is at the centre of that approach. This requires not just dialogue, but action – action that draws upon the collective wisdom and guidance of established frameworks.