Human Rights are Universal, Not Optional: Don’t Undermine the EU AI Act with a Faulty Code of Practice - op-ed
The EU AI Act, which came into force on August 1, 2024, initiated a “co-regulatory” process involving a working group of close to 1,000 stakeholders from AI companies, academia, and civil society organizations. This working group is now in the final stages of drafting the General Purpose AI Code of Practice, effectively a detailed instruction manual for how AI developers can comply with the key portions of the AI Act that set out rules for general-purpose AI models. Developers who follow the manual are afforded a “presumption of compliance” with the Act, though they may instead choose to demonstrate compliance in other ways.
…We are writing here together because we are gravely concerned that the penultimate draft of the Code of Practice is failing to protect human rights. This draft relies on faulty logic that dramatically limits the ways in which AI developers would need to mitigate human rights risks from their AI models…
From its first draft, the Code has taken a two-tier approach, distinguishing between two categories of risk. In the current draft, however, the second risk category has been downgraded from merely “additional” to “optional.”…