
Opinion pieces

1 December 2020

Author:
Ashley Nancy Reynolds,
Business & Human Rights Resource Centre

Human rights due diligence within the tech sector: Developments and challenges

This blog is part of our series on Beyond Social Auditing.

In August 2017, the Myanmar military launched a campaign of ethnic cleansing against the Rohingya, resulting in more than 10,000 deaths, widespread sexual violence, and 730,000 refugees.

More than a year after the violence began, Facebook released the findings of a human rights impact assessment that determined the social media platform had been used to “foment division and incite offline violence” in Myanmar, as UN investigators and human rights groups had argued. BSR (Business for Social Responsibility) detailed how Facebook employees failed to stop misinformation, hate speech and inflammatory posts that were part of a systematic campaign to target the Rohingya.

This is far from the only time technology and social media companies have been involved in human rights abuses: from exacerbating racial discrimination and facilitating ICE raids to contributing to the arrest of peaceful protesters in Russia and spreading hate speech against LGBT+ people in the MENA region.

While the United Nations has recognized the role of new technologies in the realization of economic, social and cultural rights, it also acknowledges digital technologies can further exacerbate inequality, restrictions on freedom of expression and discrimination, and urges tech companies to conduct due diligence and impact assessment.

While it is clear technology companies can have far-reaching and devastating impacts, it is far more challenging for human rights due diligence efforts to capture those impacts fully, despite emerging attempts to do so. Will these efforts be able to account for the myriad and widespread effects that technology and digital activities have on human rights?

Challenges to a Tech-Sector HRIA

“In the past couple of years, it has become evident that tech companies must evolve and improve their due diligence practices in order to adequately respect human rights,” explained Emil Lindblad Kernell, Adviser on Human Rights and Business at the Danish Institute for Human Rights. “Companies must ‘know’ their impacts and ‘show’ they have processes in place to address them. Human rights impact assessments can be a key due diligence tool in those efforts. The digital transition of societies all over is at full speed and the negative impacts will follow unless adequate measures are in place. There is little time to waste.”

Human rights impact assessments (HRIAs) identify actual and potential human rights implications of business projects and activities. For instance, a company operating a factory might find that while it has positive impacts on the right to work and the right to housing, it also has severe negative impacts on the right to water, the right to form and join unions, and the right to an adequate standard of living. Although a number of methodologies and guides on HRIA are currently available, very few specifically target the digital sphere and its unique characteristics.

Assessing the impacts of technology (particularly digital products and services such as social media platforms) presents a number of challenges. First, technology evolves more quickly than the law, making exact obligations difficult to pin down. “Revenge porn” and “upskirt” photos were widespread long before any legal developments made them a crime. This raises questions of responsibility and culpability not only for the uploader, but for the platform hosting and potentially spreading the content.

Application and enforcement of laws is also extremely difficult, given the global nature of the internet and the challenge of holding users across borders accountable for violations. Effectively monitoring for hate speech, for instance, would require a large number of staff proficient in many languages. Different cultures may also have different understandings of rights such as free speech. In these cases, which rights, and which understandings, are tech companies expected to adhere to? Finally, it can be difficult to predict how a new technology will be used or what effects it will have, especially if that technology has little precedent.

A Way Forward

But progress is being made. A few days ago, the Danish Institute for Human Rights released its guidance on human rights impact assessments of digital activities. It has also released a guidance document on addressing digital technologies in National Action Plans on Business and Human Rights.

According to the Institute, in order to be able to conduct in-depth HRIAs in the tech sector, assessments must be adequately scoped to a particular country/regional context, product, and/or user base. Rightsholder engagement is essential and adequate resources and time must be dedicated to engaging with those who might be impacted by the technology in question.

Other organizations weighing in include the Global Network Initiative (GNI), which recently hosted a panel on human rights due diligence in ICT. JustPeace Labs published a report on conflict sensitivity for the tech industry, identifying risks including the weaponization of social media, facial recognition and state surveillance, and AI-driven warfare.

Consulting firms such as Article One and BSR have conducted several assessments for tech-sector companies, including Google, Yahoo, Facebook, and Intel. Following an assessment of its salient human rights issues, Microsoft commissioned an HRIA of its artificial intelligence technology. While some of these assessments are publicly available, many are fully or partially confidential.

“The tech industry has spent the last few years grappling with the unintended consequences of its products and services,” said Chloe Poynton, Co-Founder of Article One. “The industry’s embrace of HRIAs shows the value of the global human rights framework, which allows the management of borderless technology to be grounded in internationally recognized norms. At the same time, HRIAs provide the ability for tech companies to understand the unique experiences of rightsholders in specific locations and ensure those insights are brought back to engineers and corporate policy teams to better mitigate risks on an ongoing basis. Our hope is that more technology companies embrace the UNGP framework and work to prioritize the experience of rightsholders, ensuring the products they develop contribute to a better, more open world.”

Beyond Social Auditing

Opinion pieces

French case law confirms necessity to reassess the weight given to audits in business and human rights court cases

Laura Bourgeois, Litigation and advocacy officer at Sherpa & Clara Grimaud, Legal intern at Sherpa, 26 March 2024

Opinion pieces

Is the Auditing and Certification Industry Fit for Human Rights Due Diligence?

Hannah Shaikh, Canadian lawyer and LLM Candidate at NYU School of Law, and Claudia Müller-Hoff, German lawyer and Senior Legal Advisor at ECCHR’s Business and Human Rights Program, 25 August 2021

View Full Series