

Article

14 Nov 2025

Author:
Sangeun Lee, Hankyung

S. Korea: Draft decree for AI Basic Act sparks backlash over limited scope lacking a human rights risk perspective

“Debate intensifies ahead of AI Basic Act implementation: ‘Effectively unregulated’ vs ‘Detrimental to competitiveness’”, 12 November 2025

After the South Korean government announced the draft enforcement decree of the Basic Act on the Promotion and Trustworthy Development of Artificial Intelligence (AI Basic Act), civil society groups raised concerns that the regulation is “virtually non-existent,” while industry voices pushed back, claiming regulation would “hinder AI competitiveness.” The Ministry of Science and ICT stated on 12 November that the draft decree is open for public comment for 40 days, until 22 December, ahead of the Act’s full implementation on 22 January 2026.

Criticism of the enforcement decree centres on three main issues. The first is the definition of “high-risk AI”, to which safety management obligations apply, which many argue is excessively narrow. The AI Basic Act defines high-risk AI as systems used in ten key sectors, including energy, drinking water, and healthcare, that may have a “significant impact on or pose risks to the life, physical safety, or fundamental rights of individuals.” The Act allows “additional sectors” to be designated through the enforcement decree.

However, the government did not expand the sectors in the draft decree, and the criteria for “significant impact” are limited to a few specific cases.

A notable example excluded from the scope of high-risk regulation is so-called “surveillance AI.” In August, Hyundai Steel sparked controversy by deploying robotic patrol dogs at its Dangjin plant, which were criticised as tools to monitor workers. Yet the system is unlikely to fall within the scope of high-risk AI, and thus may be exempt from obligations such as implementing risk management measures.

According to the government’s decree and accompanying guidelines, surveillance and control technologies that do not pose direct harm to life or physical safety are generally excluded from the high-risk category.

The second major issue is the absence of legal obligations for actors other than AI developers and service providers.

For instance, hospitals using AI in diagnosis, companies deploying it in hiring decisions, and banks applying it in loan assessments are not even required to provide explanations regarding their use of AI. Under the government’s decree and notices, these actors are classified merely as “users,” rather than “user-operators” who would otherwise be subject to legal duties.

The final point of contention is the deferral of penalty enforcement. The AI Basic Act permits fines to be imposed, following investigations and corrective orders, where providers of high-risk or generative AI fail to meet their safety obligations. However, the government has stated it will “prioritise promotion over regulation” and intends to delay enforcement of fines by at least one year.

Oh Byung-il, director of the Digital Justice Network, noted, “The EU AI Act outright bans AI systems with high human rights risks, such as facial recognition in public spaces, exploitation of human vulnerabilities, and emotion recognition in workplaces and schools. Yet Korea’s AI Basic Act includes no prohibitions whatsoever.”

(…)