

26 Jul 2022

Christine Chow, ICGN Board Member

Could surveillance technologies make us boiling frogs?



Whether we like it or not, society will be increasingly exposed to the risk of hidden manipulation through surveillance technologies. Without consent, surveillance can violate our basic human rights to liberty and to freedom of opinion and expression. With informed consent, however, the story could be completely different: healthcare apps, for example, can, when used appropriately, improve the timeliness, accessibility and affordability of health advice.

Without more information about the surveillance capability built into the technologies we use, it would be difficult to decide what type and what level of surveillance is acceptable. And there are salient risks for investors who perpetuate the normalisation of surveillance practices through the support and use of technologies used for these purposes.

This blog will highlight two specific use cases – surveillance at the workplace and in gaming.

Surveillance at the workplace

The use of people analytics accelerated during COVID-19. When the pandemic's lockdowns and social distancing measures forced workplace interactions to be conducted remotely, communication became more digital and electronic. While these formats allow for the convenience of recording and re-watching meetings and webinars, they also allow those recordings to be analysed by artificial intelligence (AI) algorithms. Companies that use eyeball and gesture tracking technologies to measure attention spans and sentiment may suggest the data will be used to strengthen corporate culture – but how does it really work, and can the scores be contested or validated by those being measured?

A recent UK government inquiry into AI at work led to the publication in November 2021 of the report The New Frontier: Artificial Intelligence at Work. The report finds AI offers invaluable opportunities to create new work and improve the quality of work – if it is designed and deployed with this as an objective. However, this potential is not currently being realised. Instead, a growing body of evidence points to significant negative impacts on the conditions and quality of work across the country. Pervasive monitoring and target-setting technologies are associated with pronounced negative impacts on mental and physical well-being as workers experience the extreme pressure of constant, real-time micro-management and automated assessment. A core source of anxiety is a pronounced sense of unfairness and lack of agency around automated decisions that determine access to, or fundamental aspects of, work. The challenges identified lie at the intersection of data protection, labour and equality laws.

The Workforce Disclosure Initiative (WDI) of ShareAction introduced a new question in its 2021 workforce survey, asking companies to ‘describe any workforce surveillance measures used to monitor workers, and how the company ensures this does not have a disproportionate impact on workers’ right to privacy.’ The results are shared in the ‘Investors’ Expectations on Ethical AI in Human Capital Management’ white paper. On the plus side, it found companies did not use the most intrusive forms of surveillance, such as home video surveillance and screen recording. However, too few companies are involving workers in their surveillance measures. Without informed consent, workplace surveillance does not meet investors’ expectations in situations where basic human rights need to be protected.

Metaverse and gaming for leisure

According to Fortune Business Insights, the global gaming market is projected to grow from US$229bn in 2021 to US$546bn in 2028. The metaverse, powered by AI, is expected both to make games more engaging through AR and VR and to enable a more personalised experience – and monetisation.

Meanwhile, the World Health Organisation (WHO) added ‘gaming disorder’ to its medical reference book, the International Classification of Diseases. The American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-5) lists ‘internet gaming disorder’ as a proposed condition. Gaming disorder, or addiction, is therefore considered a possible mental health issue, requiring the attention of those concerned about the right to health, or SDG3: Good health and well-being.

A survey conducted in December 2021 via the Reddit and Amazon platforms found that 77% of respondents believe the metaverse could harm modern society through addiction to a simulated reality, privacy concerns and mental health issues. Yet the survey also highlighted potential benefits that should not be ruled out, such as new business opportunities, increased creativity and imagination, and the introduction and improvement of experiences without taking extreme risks.

Similar to surveillance in the workplace, gamers should be informed of the type of surveillance that has been put in place to track them and potentially influence their behaviour. Gamers should have full access to their diagnostics if eye tracking or other motion detection technologies are used in assessing their ‘health status’, and such applications should be based on informed consent.


The title of this article references the story of the boiling frog, a metaphor warning of the danger of failing to notice gradual change until the consequences arrive. In both our work and personal lives, surveillance is becoming more intrusive at a gradual pace, and we can be blinded by the benefits that come with it without adequately considering the risks. Equally, we should be careful not to discount those benefits without considering the nuances of the associated technology.

There are positive and negative consequences of AI surveillance technology, and investors should consider the impact these may have on their investment outcomes. Investors should engage with companies to ensure the accountability and transparency of AI surveillance technology: without appropriate disclosure and explanation, the subjects being ‘measured’, such as employees and gamers, could become victims of rapidly advancing technology, suffering a loss of agency.



I would like to thank Karin Halliday, ESG Investment Specialist – Australia, and Pascal Knowles for their comments on this blog.
