The UK's 'online harms' white paper misses the social media threat to freedom of thought
9/7/19 - Daniel Aguirre, Roehampton University and Susie Alegre, Doughty Street Chambers
Data-harvesting internet companies need state regulation to protect human rights, argue Daniel Aguirre and Susie Alegre.
The consultation on the UK government's white paper on 'online harms' closed on July 1st. But while the white paper recognises how the internet can be used to undermine democratic values, its approach treats the symptoms without tackling the underlying cause.
The white paper proposes a legal duty of care owed by businesses to their users, overseen by an independent regulator. However, it does not address arguably the greatest online harm of all: the violation of our freedom of thought, and the failure to regulate a business model built on harvesting personal data.
The UK's approach to regulating the internet does not treat it as a business and human rights issue, despite the UK's commitments to the United Nations Guiding Principles on Business and Human Rights (UNGPs) and their implementation through the UK's National Action Plan and its policy document 'Just Business'.
The UNGPs explain how the state's duty to protect human rights overlaps with the business responsibility to respect those rights in a globalised world. Their general principles emphasise that international human rights law creates positive obligations for states to protect against human rights abuses by third parties, including businesses.
But in their practical implementation through national action plans, states have focussed on encouraging business to respect human rights recognised in national law (especially in their operations abroad) without considering whether the national law reflects agreed international standards.
The result is that the baseline for implementing the UNGPs becomes the national legal standards, rather than the relevant international human rights law.
'Surveillance capitalism' harvests our behaviour online without our consent or understanding and uses it to predict and influence our future decisions. Every one of our actions, reflecting our personal thoughts, is now used to compile unprecedented personal and group profiles of our wants, needs, desires, emotions, and dreams.
This capture of surplus behaviour (unintentionally shared information) creates a lucrative market in which to sell advertising space online.
The power of machine learning means that this data is used not just to predict our behaviour but increasingly to manipulate it for profit and political gain, with clear human rights implications.
Unlike the rights to privacy and freedom of expression, the right to freedom of thought is protected absolutely under international human rights law. This means that there can be no justification for interfering with our freedom of thought, whether for profit, to make us better citizens, or to prevent us from behaving badly.
It protects what goes on inside our minds, not our behaviour or our speech, and it includes three elements: the right to keep our thoughts private, the right not to have our thoughts manipulated, and the right not to be penalised for our thoughts.
Algorithmic processing of our data risks interfering with all three aspects of this right, with potentially devastating consequences both for individuals and for democratic societies.
As the Cambridge Analytica scandal revealed, without regulation, the same algorithms designed to identify and influence consumer choices can be used to manipulate political opinions and voter behaviour, undermining democracy and violating freedom of thought.
Yet the online harms white paper neglects that fundamental problem with this business model, focusing instead primarily on content. This approach does nothing to fulfil the duty to protect our thoughts and opinions from extraction, manipulation and exploitation by business.
The state's duty to protect these rights should prompt a review of legal and regulatory frameworks so that they guarantee our rights according to the relevant international human rights standards.
A serious effort to protect our freedom of thought would require a confrontation with some of the most powerful companies in the world. The online harms white paper mounts no such confrontation, leaving a huge gap in the regulatory framework around the right to freedom of thought.
The UK's approach to protecting us from online harm is instead couched in business-friendly terms, with human rights issues constantly balanced against promoting the UK as a centre for a thriving digital economy and tech company investment.
The UK fears divestment, particularly in a post-Brexit world, and believes strict regulation would scare off business. But protecting the right to freedom of thought should not be a balancing exercise.
It is clearly easier for the UK to regulate the negative activities of a few businesses online than to confront the systemic problems. As a result, its proposed policy and law focus narrowly on the UNGPs' business responsibility to respect some rights without fulfilling the state's duty to protect all human rights.
Regulating to protect our minds from surveillance capitalism might threaten the very business model of the internet itself. Alternative models that allow us to pay our way out of data scraping would only exacerbate inequalities in our society and make freedom of thought a luxury commodity.
But a failure to regulate this business and human rights issue poses a fundamental risk to our personal autonomy and the future of democratic societies, and that is an online harm from which no amount of education and awareness-raising can save us.
Daniel Aguirre is Senior Lecturer at Roehampton University.
Susie Alegre is Associate Barrister at Doughty Street Chambers.