Opinion: 'Why we need critical analysis and a human rights-centred approach when using AI tools in due diligence'
The recent UN Working Group on Business and Human Rights report on AI procurement and deployment highlights the urgent need for companies to conduct human rights due diligence (HRDD) on artificial intelligence [...]. This responsibility extends not only to AI systems related to core operations, services and business relationships, but also to AI-powered tools deployed to support HRDD efforts. These tools can present a range of risks depending on the context in which they’re used, including automating potentially harmful decision-making and misinterpreting or overlooking human rights risks. This is why it is essential to take a human rights-centred approach to their procurement and use, and to ensure that their inputs and outputs are critically assessed.
The rise of AI for human rights and environmental due diligence (HREDD)
Regulatory pressures, including due diligence laws and forced labour bans, are driving companies to embed AI-powered tools within their due diligence processes. Governments and civil society are also increasingly using these tools to support enforcement, risk monitoring and investigations [...].
Adopting critical thinking and a human rights-centred approach
AI-powered tools can be crucial enablers for effective due diligence processes when deployed with thoughtful planning, critical thinking and a human rights-centred approach. They must complement – not replace – essential processes like stakeholder engagement for qualitative data, expert judgement and contextual analysis. Involving trained human rights practitioners will help companies benefit from AI while more effectively preventing and mitigating risks, such as overlooked labour risks in the AI supply chain and automated discriminatory outcomes driven by biased datasets.
Critical questions companies can ask
Companies can start by asking the following critical questions to prevent or mitigate adverse effects from deploying AI-powered tools for HREDD:
Why are we using this tool? How well does it fit our needs?
Companies may find it useful to clarify the purpose of using the tool, the value it may bring, and the issues it’s meant to address. [...]
What is the underlying methodology? What data does the tool rely on?
AI models rely heavily on the quality of training data. If the data is incomplete, biased or out of date, the results can be flawed – and potentially harmful. Companies can make better-informed decisions by gaining a clear understanding of the tool’s methodology, [...] associated human rights risks and critical blind spots.
How are we interpreting the outputs? What will we do with them?
Responses to the findings of AI tools must be compatible with human rights. Treating outputs as starting points for further human analysis – not final judgements – and combining them with thorough review, data cross-checking and stakeholder engagement will lead to more reliable outcomes. [...]
Looking forward
Practitioners, civil society and regulators should be wary of the potential cumulative impact of large-scale use of AI-powered tools in HREDD. Without safeguards, these tools risk incentivising disengagement over meaningful engagement and mitigation, tempting companies to ‘clean’ their supply chains rather than using leverage to address flagged issues. To avoid this, companies should ground their approach in the UNGPs, which provide a globally recognised, risk-based framework for adopting a rights-respecting approach to the use of such tools throughout their lifecycle.
To harness AI-powered tools’ potential to support HREDD without compromising human rights, companies must apply critical thinking, maintain human oversight and ensure a UNGPs-aligned approach. [...]