Big Tech remains silent about applying a human rights approach to generative AI investment decisions

There is growing unease, amongst civil society, governments and responsible investors, over the lack of adequate human rights due diligence throughout the creation of generative artificial intelligence (AI) tools, from inception to deployment and operation. These AI tools have the potential to amplify existing biases and discrimination, infringe on privacy rights, and even pose risks to democracy through the spread of misinformation and manipulation. Without proper oversight and accountability mechanisms in place, the unchecked advancement of generative AI could lead to significant societal harm and undermine fundamental human rights principles.
As set out in the United Nations Guiding Principles on Business and Human Rights, both companies and investors have a responsibility to respect human rights wherever they operate in the world and throughout their operations. Given the concerns raised by civil society about generative AI, and considering the responsibility of investors to respect human rights, the Business & Human Rights Resource Centre contacted Amazon, Google and Microsoft to inquire about their human rights due diligence processes for investment in generative AI.
In December 2023, the Financial Times reported that Amazon, Google, and Microsoft are outspending venture capital firms in generative AI startup investment. In April 2024, the United Kingdom's Competition and Markets Authority announced that it would be scrutinising big tech’s role in AI startups. Microsoft has committed to invest US$13 billion in OpenAI, while Google and Amazon have invested US$2 billion and US$4 billion in Anthropic, respectively. Microsoft also recently announced a US$1.5 billion investment in the United Arab Emirates’ top AI firm, G42, while stating that the deal is intended to “combine world-class technology with world-leading standards for safe, trusted, and responsible AI, in close coordination with the governments of both the UAE and the United States."
Last year, Amazon, Anthropic, Google, Microsoft and OpenAI all pledged to abide by the Biden Administration’s voluntary commitments for the safe, secure, and transparent development of AI technology. Microsoft, Google and Amazon each also have human rights policies, as well as responsible AI policies. None of these responsible AI policies clearly explains how human rights due diligence is applied to investment decisions.
The Business & Human Rights Resource Centre sent the companies a series of questions about their investment procedures and gave them three weeks to respond, with the possibility of an extension if more time was needed. We also contacted Thrive Capital, a venture capital firm identified in the Financial Times article as continuing to invest steadily in generative AI. None of the contacted companies or firms replied to our information request, which runs contrary to their stated commitments to transparency and stakeholder engagement.
This research is a follow-up to work carried out last year in partnership with Amnesty International. For more information about the lack of human rights due diligence by venture capital firms, read our Silicon Shadows report.