

Artificial Intelligence (AI) Accountability

Latest research and perspectives on the implications of AI for human rights

There is a possible future in which artificial intelligence drives inequality, inadvertently divides communities, and is even actively used to deny human rights. But there is an alternative future in which the ability of AI to propose solutions to increasingly complex problems is the source of great economic growth, shared prosperity, and the fulfilment of all human rights. This is not a spectator sport. Ultimately it will be the choices of businesses, governments, and individuals which determine which path humanity takes.
Olly Buston, CEO, Future Advocacy

We address the human rights impacts of artificial intelligence by doing what is most urgently needed: linking documented harms directly to the companies and investors responsible, creating pathways to accountability, and helping to build a vision of human rights-centred AI technology.

While AI offers enormous opportunity for progress, our global database of abuse allegations reveals consistent patterns of harm associated with it, from discriminatory surveillance, environmental impacts, and gender-based violence, to exploitative labour practices across global AI supply chains. We trace these impacts across complex corporate structures to identify responsibility and make this evidence accessible to those positioned to act, including investors, regulators, civil society, and companies.

This work is grounded in a clear expectation: human rights must be a precondition for AI development and deployment, not an afterthought. This requires companies to conduct robust human rights due diligence across their full value chains, engage meaningfully with affected communities, and demonstrate accountability where harm occurs. Human rights-centred AI will require stronger regulation, decisive investor action, and proactive shifts by companies towards responsible conduct.

We contribute to this shift by supporting evidence-based action and centring the experiences of people most affected, particularly in the Global South. Our database shows that AI harms are not isolated or inevitable; they are the result of corporate choices. Different choices can prevent harm and ensure that AI serves the public good, rather than undermining it.

Are pensioners funding the militarisation of AI?

Our research with Empower, Open MIC and Heartland Initiative found at least 182 private and public pension funds have invested in companies developing high-risk AI systems. What does this mean for investors and pensioners?