
Commentary

25 July 2023

Author:
Phil Bloomer, Executive Director, BHRRC

Taming the monster: Artificial intelligence & the duty of care

Last November, ChatGPT was released into our world with less safety regulation than a new model of toaster, and seemingly less concern about the harm it might create. Since then, a raft of generative artificial intelligence (AI) apps has been rushed to market.

There is little doubt generative AI can bring enormous benefits to our societies – ranging from new medicines to scientific research. But, like social media apps two decades ago, AI technology is currently being released into a Wild West market with no effective regulation to direct its use to social and public benefit, nor to prevent its enormous potential for harm. Governments, companies, investors, unions and civil society are all raising the alarm. The dangers are real and wide-ranging: from extreme disinformation to mass surveillance, uncontrolled job losses, child sexual abuse imagery, gender violence and discrimination, and ballooning fraud.

Amid the fanfare of AI’s hyperbole, and future valuations in trillions of dollars, legislators are scrambling to design laws that show their electors that our democracies have the power to direct new technologies to the common good. But this comes amid dark warnings the ‘genie is out of the bottle’ and regulation is futile, as it can never catch up with exponential technological development.

This is dangerous nonsense from vested interests and ideologues. We have in our hands powerful legal and regulatory tools that require companies to demonstrate a ‘duty of care’ in designing and producing their goods so they are safe for release and use. These laws demand that companies assess the risks of their products and demonstrate clear efforts to mitigate them, before and after the product is released onto the market (rather than the Sisyphean task of regulating for every eventuality after release). Toaster models are tested; house and skyscraper designs are assessed; new models of car and lorry must meet exacting safety standards: in democratic societies, this regulatory approach in the physical realm usually works well – including where design advances quickly. The same method is available with regard to tech companies and their human rights impacts in the digital realm. And with fast-moving technology, this approach future-proofs our societies’ regulations and the rights of people: it is the companies that launch and profit from these technologies that must assess the human rights risks of new digital designs and ensure they are safe, or face heavy penalties.


The European Union is currently in the final stages of approving perhaps the most powerful and relevant legislation: the Corporate Sustainability Due Diligence Directive (CSDDD). In essence, this does not seek to regulate for every harm companies might create for workers, communities, consumers or society. Rather, it demands that companies assess the likely and severe human rights and environmental risks and impacts their business model generates across their full value chain. They must then take reasonable steps to prevent risks, or end and remedy the harm. If they fail in this duty of care, they face civil liability risks and costly administrative punishments. This approach now needs to be applied robustly to the digital realm.

Meta, Google, Microsoft, Apple, and the many smaller tech companies would have to immediately change the calculus of risk in their boardrooms around the development and release of generative AI, and their other technologies. Otherwise, those harmed by irresponsible release (and they will be many) can demand justice and remediation, and administrative authorities will take enforcement action. This should also be extended to criminal liability, given the scale of harm that can be created, and the need to focus company directors’ minds as much on the dangers as on the potential profits of rash early release. These duties should extend to investors, as they act as crucial gatekeepers in deciding what comes to market.

Responsible tech companies will welcome a duty of care approach. Upstream and continued investment in their human rights and environmental due diligence will soon become far less costly than the price of liability for reckless product releases. And the law creates both a level playing field, preventing reckless firms from undercutting responsible companies, and legal certainty. Rising public concern about the power and irresponsibility of tech giants is driving active pursuit of legal accountability. Courts, regulators, and politicians are answering this call in greater numbers. Just recently, US regulators announced a sweeping probe into the human harm that ChatGPT may be generating.

The European Union, and its member states which will legislate the Directive, must quickly strengthen the CSDDD’s approach to the tech sector. This should include designating the sector as high-risk, which would bring medium-sized companies within scope alongside the giants. It should also extend the duty of care across companies’ full value chain – including the impact of their products and services. We also urgently need a multilateral approach from the tech powerhouses of the EU, USA, China, Brazil and India to agree coherent legislation for the public good, based on the international human rights standards to which they are all party.

Due diligence legislation is powerful, but it is not a silver bullet for safeguarding human rights in the face of fast-paced tech expansion. Other initiatives, such as the EU's proposed AI Act, have the potential to build on the powerful foundation of the CSDDD.

Exponential technological advances await us. Whoever controls these technologies will gain enormous power and wealth. Will that be a tiny elite of tech executives, or our democratic societies? Collectively, we still have the agency to tame the monster to deliver more caring, equitable and informed societies. Insisting now on tech’s duty of care is an immediate way to help create that future for us and for generations that follow.

By Phil Bloomer, Executive Director, Business & Human Rights Resource Centre
