Increasing investor demand for corporate accountability from tech companies
By enabling mobile banking, remote learning, healthcare services, greater citizen participation, and the coordination of democratic movements through social media platforms, tech companies wield great power to advance human rights and sustainable development. However, investors are also attuned to a parallel reality: one where the unchecked power of technology companies contributes to an erosion of respect for the digital rights of users and society at large.
Companies have a responsibility to respect human rights under the UN Guiding Principles on Business and Human Rights (UNGPs), and investors continue to urge companies to prioritise human rights protections in their policies, practices, and business models. The Business and Human Rights in Technology Project, which considers how to apply the UNGPs to digital technologies, has explicitly raised the issue of human rights risks in tech business models. While technology companies are some of the most profitable companies in the world, responsible investors are clear in their expectations: because investors prioritise sustainable financial returns, companies should focus on long-term value creation that benefits all relevant stakeholders of the business, including employees, users, communities, and society.
Looking ahead to 2023, investors want to see real change in the tech sector. Below are the key issues, and the associated investor strategies, we expect to feature in discussions on how Big Tech respects and protects the rights to freedom of expression, privacy, and non-discrimination online, collectively known as digital rights:
- Tackling Big Tech’s advertising revenue business model
Tech companies (especially digital platform companies) have built their business model on surveillance-based, targeted online advertising enabled by algorithmic systems, with advertising revenues accounting for the majority of these companies' profits. These systems use pervasive online tracking and behavioural profiling as the basis for targeted advertising, creating a global system of commercial surveillance that companies monetise to yield record-breaking revenues. However, targeted advertising has been criticised for the harms it causes to users, societies, and the wider economy. In addition, Big Tech's reliance on a single source of revenue, advertising, is a material risk to investors, one compounded by legislative developments in Europe and the United States poised to severely restrict targeted ads.
- Escalating engagement at Big Tech’s annual shareholder meetings
Despite persistent outreach by responsible investors, Big Tech companies are largely failing to take action to limit the human rights harms caused by their business models. Public disclosure addressing the digital rights concerns of stakeholders remains limited. Investors point to examples where companies declined to meet with shareholders, or where big questions around human rights issues remain unanswered. For example, a group of investors led by Mercy Investments and supported by NEI Investments has re-filed a shareholder proposal to be voted on at Meta’s upcoming 2023 Annual General Meeting; last year this proposal received majority support from non-inside shareholders. The proposal urges the board of directors to publish an independent third-party Human Rights Impact Assessment (HRIA) examining the actual and potential human rights impacts of Facebook’s targeted advertising policies and practices throughout its business operations. A robust HRIA would enable the company to better identify and prevent human rights abuses. This is just one of many proposals addressing the lack of accountability by Big Tech for their adverse impacts on people.
- Support for digital rights regulations
Investors have called for rights-respecting internet regulation aimed at creating a safer digital space, protecting the fundamental rights of users, and establishing a level playing field for business. A plethora of tech regulations is being debated across the globe. Such legislation includes the Algorithmic Justice & Online Transparency Act and the Algorithmic Accountability Act in the US, and the Artificial Intelligence Act in the EU. We are observing a trend of investors supporting the growing global consensus among civil society experts, academics, and policymakers on the need for strong regulation on digital rights issues. Much of this regulation requires human rights due diligence on the part of Big Tech companies – a tool for evaluating human rights risk that has garnered support from responsible investors.
- Access to remedy for affected rightsholders
Where tech companies have caused or contributed to harm, they should take action to provide remedy for victims. This is complicated by the global reach of Big Tech's services, which extend beyond geographic boundaries, while victims are often limited to domestic protections and avenues for remedy. Every digital platform should have channels through which victims can seek remedy when the platform causes or contributes to harm, yet Big Tech companies are still failing to prioritise remedy mechanisms. Investors are pushing companies to engage with rightsholders affected by business operations, and a key path for this engagement is a focus on the provision of remedy to victims.
Lydia Kuykendal, Director of Shareholder Advocacy at Mercy Investment Services, and Michela Gregory, Director – Environmental, Social and Governance (ESG) Services at NEI Investments, both members of the Investor Alliance for Human Rights; together with Anita Dorett, Director at the Investor Alliance for Human Rights