

Opinion

24 September 2024

Author:
Druthi Suresh, Intern, Business & Human Rights Resource Centre

The Case for Mandatory Regulation: Jurisprudence Showcases the Need to Move On from Self-Regulation in Tech


The past decade has seen the evolution of regulatory norms applicable to the tech sector alongside a rising number of human rights violations implicating these companies. The rapid growth of tech firms has often outpaced the development of laws to police them, resulting in a dangerous reliance on self-regulation, courts, and after-the-fact measures as governments struggle to keep up with the pace and harms of the industry.

Today’s tech landscape is consequently defined by a pervasive lack of accountability. Operating on a global scale, tech companies often evade national laws and regulations. When companies are left to police themselves, profit frequently seems to trump rights implications, leading to numerous scandals involving alleged privacy violations, censorship, discriminatory practices, and a wide range of other human rights harms.

Courts and law enforcement agencies are increasingly taking note of companies’ failures to address misinformation, their enabling of surveillance and labour exploitation, and their infringement of personal privacy – and the consequent harms for society.

For instance, in the US, the New Mexico Attorney General filed a lawsuit against Meta’s CEO, Mark Zuckerberg, to address the exploitation of children on the company’s platforms. The case arose from an undercover investigation by the AG’s office, which exposed alarming practices where Meta’s platforms allegedly directed sexually explicit content to underage users, enabled adults to solicit minors, recommended children join unmoderated groups linked to commercial sex, and even allowed the sale of child pornography. Companies have stated that, of their own accord, they are deploying content moderation tools and other measures to curb these abuses. However, harmful content frequently reappears. This underscores the grave consequences of leaving tech companies, especially social media platforms and big tech, to self-regulate.

Another perturbing example is Booking.com, which allegedly profits from listing rental properties in illegal Israeli settlements, despite these settlements being recognised as unlawful under international law. This practice suggests disregard for international human rights standards and legal obligations. The recent criminal complaint filed in the Netherlands by a number of civil society organisations argues that, by profiting from severe breaches of international humanitarian law, the company is introducing proceeds of crime into the Dutch financial system, effectively engaging in money laundering and violating Article 1(4) of the Dutch International Crimes Act. This, too, highlights how the lack of robust regulation and tech oversight makes such profits possible at the expense of rights.

Some tech companies have attempted to address the lack of clear regulation by establishing their own oversight boards. While these initiatives represent a step toward greater accountability, they cannot replace state-mandated measures. Such boards often lack independence and enforcement power, and they are frequently criticised for being reactive, addressing issues only after they escalate into public scandals. Meta, for instance, rejected on procedural grounds its Oversight Board's recommendation to suspend the Facebook account of Cambodia's former Prime Minister Hun Sen, who had been accused of threatening opponents with violence. That decision, taken despite the real-world consequences of such content, highlights the limitations of self-regulating bodies in holding powerful figures accountable.

Insufficient regulatory oversight can also fuel conflict, as the case against Cognyte Software Ltd, an Israeli company accused of selling spyware to Myanmar’s military ahead of the 2021 coup, reveals. Despite Israel’s official stance of halting defence technology transfers to Myanmar, Cognyte proceeded, enabling widespread surveillance of human rights defenders and contributing to the repression of dissent.

As these harms begin to surface more frequently in courts across the world, there appears to be a growing global awareness of the role of regulation in mitigating and preventing them. The European Union (EU) has taken a leading role, enacting the General Data Protection Regulation (GDPR), the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Artificial Intelligence Act (AIA). These regulations have sought to establish mechanisms and responsibilities based on business and human rights principles to hold tech companies accountable. In Brazil, similar regulations have been enacted, with authorities actively enforcing them to fight abuses facilitated or caused by tech companies.

While these measures are not without flaws, they represent powerful steps in the right direction.

But more is needed, and a roadmap exists for this effort. Labour, antitrust, and consumer laws, as well as jurisprudence, have played crucial roles in regulating other sectors to protect privacy, ensure decent labour conditions, and safeguard other fundamental rights. It is unthinkable to allow other industries, like mining, finance, and healthcare, to rely solely on self-regulation. The tech sector too should be subject to concrete regulatory frameworks which require compliance on issues ranging from consumer protection to ethical business practices.

The world could benefit from adequate oversight of the tech industry, with regulation grounded principally in business and human rights principles – in other words, in mandatory human rights and environmental due diligence. Where an industry moves as quickly as the tech sector does, the onus should be on companies themselves to identify and mitigate the core human rights risks of their products – and they must be held accountable for failing to identify and address these risks. Rather than governments struggling to keep up by regulating harms ex post facto, due diligence initiatives such as the EU’s Corporate Sustainability Due Diligence Directive (CSDDD), in combination with other, issue-specific tech laws, offer significant promise for a better-regulated industry.

As technology continues to play an increasingly central role in society and in conflicts, human rights risks also grow. Clear, enforceable laws and policies that hold tech companies accountable for their actions, in part by demanding they identify the risks of their products and services for humanity, are needed to protect human rights and ensure that tech companies operate in a manner that aligns with the broader public interest, rather than merely prioritising their bottom line.