
Growing concerns over pitfalls of digital & AI tool use in due diligence

Given the recent proliferation of digital and often AI-powered technology meant to support companies' human rights and environmental due diligence, an emerging body of research and commentary is examining such tools, including critical assessment of their adverse effects and how these could be addressed.

Concerns range from the marketing and use of certain software as a one-stop shop for due diligence, to the risk of tools prioritising sleek dashboards over rightsholder experiences and promoting formalistic (tier 1) compliance rather than quality engagement on salient human rights and environmental risks in value chains.

A blog piece by DigiChain researchers featured in our Mandatory Due Diligence blog series asks:

"[H]ow are technologies integrated into companies’ implementation strategies, how will they affect the outcomes of their sustainability due diligence, how are tools financed and marketed, how can they avoid fuelling checkbox compliance and support rather than undermine risk-based, context-specific and worker- and community-driven approaches [...]?"

In the timeline below, we collect further examples of analysis, guidance and commentary critically assessing digital and AI-powered tools in due diligence.

Timeline