
Press release

13 Dec 2023

Venture capital firms funding generative artificial intelligence ignoring duty to protect human rights

Surveys of the 10 largest venture capital funds and the two largest start-up accelerators investing in Generative AI companies revealed that hardly any were taking steps to safeguard human rights.

Leading venture capital (VC) firms are failing in their responsibility to respect human rights, especially in relation to new Generative AI technologies, Amnesty International USA (AIUSA) and the Business & Human Rights Resource Centre warned in research released today (13 December 2023). These firms have refused to implement basic human rights due diligence processes to ensure the companies and technologies they fund are rights-respecting, as set out in the UN Guiding Principles on Business and Human Rights (UNGPs). This is particularly concerning given the potentially transformative impacts Generative AI technologies could have on our economies, politics and societies.

Michael Kleinman, Director of AIUSA’s Silicon Valley Initiative, said: “Generative AI is poised to become a transformative technology that could potentially touch everything in our lives. While this emerging technology presents new opportunities, it also poses incredible risks, which, if left unchecked, could undermine our human rights. Venture capital is investing heavily in this field, and we need to ensure that this money is being deployed in a responsible, rights-respecting way.”

Late on Friday 8 December, EU negotiators reached political agreement on the AI Act, paving the way for legal oversight of the technology. The law is considered the world’s most comprehensive on AI so far and will affect companies globally – meaning venture capital firms need to rapidly reconsider their approach. High-risk AI systems, spanning various sectors, must undergo mandatory fundamental rights impact assessments. The European Parliament stated that systems posing “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law” are considered high-risk, including AI systems that can influence election outcomes and voter behaviour. The Act also grants citizens the right to file complaints and to receive explanations for AI-powered decisions that have affected their rights.

Meredith Veit, Tech & Human Rights Researcher, Business & Human Rights Resource Centre, said: “The fundamental rights impact assessment obligation within the new EU AI Act is very welcome, particularly considering its impact on the deployment of undercooked generative AI systems, but with the finer details yet to be finalised it is essential that a human rights-based approach shine through in the more specific requirements of the regulation – for both public and private actors. That way investors can make informed decisions considering salient human rights and material risks. And while the EU is advancing important mandatory corporate due diligence legislation in the form of the Corporate Sustainability Due Diligence Directive (CSDDD), which we hope can fill some of the AI Act’s loopholes, it cannot be relied upon to hold all actors within the tech ecosystem to account. Startups developing potentially harmful AI systems, for example, need to be scrutinised through the EU AI Act, since they are not within the scope of the CSDDD.”

Our research

To assess the extent to which leading VC firms conduct human rights due diligence on their investments in companies developing Generative AI, Amnesty International USA and the Business & Human Rights Resource Centre surveyed the 10 largest venture capital funds that invested in Generative AI companies, and the two largest start-up accelerators most actively investing in Generative AI.

The 10 VC firms surveyed, all based in the US, were Insight Partners, Tiger Global Management, Sequoia Capital, Andreessen Horowitz, Lightspeed Venture Partners, New Enterprise Associates, Bessemer Venture Partners, General Catalyst Partners, Founders Fund and Technology Crossover Ventures. The two start-up accelerators surveyed were Techstars and Y Combinator.

This analysis revealed that the majority of leading VC firms and start-up accelerators are ignoring their responsibility to respect human rights when investing in Generative AI start-ups:

  • Only three of the 12 firms mention a public commitment to considering responsible technology in their investments;
  • Only one of the 12 firms mentions an explicit commitment to human rights;
  • Only one of the 12 firms states it conducts due diligence for human rights-related issues when deciding to invest in companies; and
  • Only one of the 12 firms currently supports its portfolio companies on responsible technology issues.

The report calls for VC firms to adhere to the UNGPs, which stipulate that both investors and investee companies must take proactive and ongoing steps to identify and respond to Generative AI’s potential or actual human rights impacts. This entails undertaking human rights due diligence to identify, prevent, mitigate and account for how they address their human rights impacts.

Kleinman added, “Generative AI can be hugely beneficial, but it can also facilitate physical harm, psychological harm, reputational harm and social stigmatisation, economic instability, loss of autonomy or opportunities, and further entrench systemic discrimination against individuals and communities. This especially applies to Generative AI’s use in high-risk contexts such as conflict zones, border crossings, or when imposed on vulnerable persons. In the current global environment the risks couldn’t be more critical.

“Venture capital firms have an urgent responsibility to take proactive and ongoing steps to identify and respond to Generative AI’s potential or actual human rights impacts.”

Veit concluded, “It is, of course, possible to see the great potential of new technologies when they are designed using a human-centric approach. Unfortunately, the story of Generative AI thus far has largely been one of maximising profits at the expense of people, especially marginalised groups. But it isn’t too late for investors, companies, governments and rights-holders to take back control over how we want this technology to be designed, developed and deployed. There are certain decisions that we should not allow Generative AI to make for us.”

// ENDS

Notes to editors:

  • Business & Human Rights Resource Centre is an international NGO that tracks the human rights impacts of companies across the globe.
  • Amnesty International is a Nobel Peace Prize-winning global movement of more than 10 million people who campaign for a world where human rights are enjoyed by all. The organisation investigates and exposes abuses, educates and mobilises the public, and works to protect people wherever justice, freedom, truth and dignity are denied.
  • Embargoed copies of the report are available on request.

Media contact: Priyanka Mogul, Media Officer, Business & Human Rights Resource Centre, +44 (0) 7592156010, [email protected]