South Korea Enacts Global Milestone Of AI Safety Laws Including Covering Mental Health Impacts

In today’s column, I examine South Korea’s newly established set of AI laws, which entered into force on January 22, 2026, and are generally referred to as the AI Basic Act.

The longer name is the Basic Act on the Development of Artificial Intelligence and the Establishment of a Foundation for Trustworthiness (that’s the roughly equivalent English translation). This fuller name is a relatively apt description of what the AI laws are intended to do. You might be surprised to know that this is the first set of comprehensive AI laws to be adopted as a country-wide regulatory framework by a single major country. It is a notable milestone.

In some ways, the AI Basic Act is similar to the EU AI Act, but in other ways it is starkly different. Much of the regulatory attention has to do with governing AI safety and especially dealing with the advent of generative AI and large language models (LLMs). There are provisions associated with deepfakes and the use of AI to spread misinformation. There are also provisions concerning AI and mental health, though the coverage on mental health is modest or sparse in comparison to state-level laws in the United States that specifically aim at AI mental health consequences (such as in Illinois, Nevada, Utah, etc.).

My emphasis will be to broadly identify the major contours of South Korea’s AI Basic Act, and additionally home in on the pertinent mental health provisions.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August 2025 accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

The Current Situation Legally

Some states have already opted to enact new laws governing AI that provide mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here, for the law in Utah, see the link here, and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.

Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn’t a federal law devoted to these controversial AI matters per se. I have laid out an outline of what a comprehensive law on AI and mental health ought to contain, or at least give due consideration to; see my analyses at the link here and the link here.

The situation currently is that only a handful of states have enacted new laws regarding AI and mental health, but most states have not yet done so. Many of those states are toying with doing so. Additionally, there are state laws being enacted that have to do with child safety when using AI, aspects of AI companionship, extreme sycophancy by AI, etc., all of which, though they aren’t necessarily deemed mental health laws per se, certainly pertain to mental health. Meanwhile, Congress has also ventured into this sphere, with much broader aims covering AI across all kinds of uses, but nothing has reached a formative stage.

That’s the lay of the land right now.

South Korea Enacts AI Basic Act

It is abundantly useful to explore how countries outside the U.S. are devising AI laws. Some elements might be useful in crafting U.S. laws; other elements might be categorically unsuitable. I’ve previously analyzed a wide range of national and international AI laws, including laws in China (see the link here), the European Union AI Act, the United Nations recommendations dealing with international AI governance, and the like.

I am going to use an English translation of South Korea’s AI Basic Act for this analysis.

First, the overarching stated purpose of the AI Basic Act is as follows:

  • “This Act aims to establish a new framework for artificial intelligence (AI) in the Republic of Korea by prescribing the fundamental matters necessary to support the sound development of AI and to establish a foundation for trustworthiness in AI society, thereby protecting the Rights and Interests of the People and their dignity, contributing to improving the quality of life of the people, and enhancing national competitiveness.”

This indication is on par with that of other sets of AI laws, namely that the usual stated purpose is to protect humankind when it comes to what AI does or is made to do. A phrase in the AI field is that the goal of modern-era AI should be to devise human-centered AI. The gist is that AI should align with human values and support humanity, rather than undercutting humans.

There is a sovereignty consideration associated with most national AI regulations. For example, in this case, the stated purpose mentions that AI should enhance the national competitiveness of South Korea. Here’s the big picture. Many believe that the world is amid a massive AI race to see which country or countries will achieve truly advanced AI and presumably outcompete others accordingly. For my in-depth assessment of the AI race and corresponding sovereignty implications, see the link here, the link here, and the link here.

Upkeep Timeframe, Oversight, AI Stratification

The AI Basic Act establishes a special National AI Committee that will oversee the law and aid in resolving issues:

  • “To deliberate and resolve matters concerning major policies for the promotion of the AI industry and the establishment of a foundation for trustworthiness, a National AI Committee shall be established under the President, and the National AI Committee shall deliberate and resolve matters concerning the establishment of the AI Basic Plan, the promotion of AI utilization, the regulation of High-Impact AI, and other related matters.”

Some AI laws opt to predefine a means or mechanism to aid in interpreting and updating the laws. This would seem to make sense since AI is rapidly changing; meanwhile, static laws often rapidly become out-of-date. The downside of identifying and utilizing such a means or mechanism is that the law can become an uncertain target. Those who eye the law might be wary that changes will be made and that, as a result, they might suddenly find themselves no longer in compliance. It’s a tradeoff.

Every three years, an AI Basic Plan is to be established and implemented, providing an ongoing means of reviewing and refreshing how the law is carried out:

  • “The Minister of Science and ICT shall establish and implement the AI Basic Plan every three years to promote AI technology and the AI industry and to strengthen national competitiveness, and the Plan shall be deliberated and resolved by the National AI Committee and include matters concerning the basic directions of AI policy, the cultivation of professional talent, and the establishment of a foundation for trustworthiness.”

Similar in some respects to the EU AI Act, there is an attempt to distinguish AI impact levels and thus stratify what the new AI laws apply to in particular cases:

  • “The Act stipulates the duty to ensure AI transparency and the duty to ensure AI safety, and sets forth the duties of business operators related to High-Impact AI where such systems may significantly affect, or pose risks to, life, physical safety, and basic rights.”

In this set of AI laws, there is only a distinction of “High-Impact AI” and no additional delineations, such as stratifying into low, medium, and high levels. Some contend that having only a high level is insufficient and forces AI that is actually medium-level to be shoved into the high category where it doesn’t belong. Others worry that a medium-level AI that ought to fall within the high-impact category manages to slip out and avoid the high-level restrictions. Defining AI levels is a difficult choice when devising AI laws and varies quite a bit across jurisdictions and geographies.

Also, note that the definition in the AI Basic Act of what constitutes High-Impact AI is quite convoluted and likely to allow for a wide range of legal debate when it comes to specific instances that are claimed to be within the scope of High-Impact AI. It is going to be a legal loosey-goosey conundrum.

Legal Duties As Per The New Law

The AI Basic Act lays out three primary legal duties:

  • (1) “AI technology and the AI industry shall develop in a direction that enhances safety and trustworthiness, thereby improving the quality of life of the people.”
  • (2) “An Affected Person shall have the right to receive, to the extent technically and reasonably possible, a clear and meaningful explanation regarding the key criteria and principles used in deriving the final outcome of AI.”
  • (3) “The State and local governments shall respect the creative autonomy of AI Business Operators and shall strive to foster a safe environment for the use of AI.”

The remainder of the new law is somewhat lengthy and has a wide array of provisions, including but not limited to:

  • Formulation of an AI Basic Plan
  • Formulation of the National AI Committee
  • Formulation of an AI Policy Center
  • Formulation of an AI Safety Institute
  • Governmental support for R&D on various AI projects
  • Governmental support for establishing AI standards
  • Establishment of policies for AI “learning data”
  • Promotion of the introduction and expansion of AI in enterprises and public institutions
  • Promotion of the participation of SMEs
  • Promotion of AI startups
  • Promotion of AI convergence between the AI industry and other industries
  • Governmental support for international cooperation on AI
  • Aid in establishing AI clusters for AI R&D
  • Establishment and operation of AI “demonstration infrastructure” to support demos, testing, verification, and certification of AI
  • Promotion of policies for AI Data Centers
  • Establishment of the Korea AI Promotion Association
  • Establishment and publication of AI Ethical Principles
  • Criminal Penalties
  • Administrative Fines
  • Etc.

By the internal numbering, there are 43 indicated Articles and an addendum containing an additional three Articles. The addendum provides an important indication about the timing of when the AI Basic Act comes into active status: “This Act shall enter into force one year after the date of its promulgation. However, the provision of Article 2(4)(d) concerning digital medical devices shall enter into force on January 24, 2026.”

The Overall Gestalt

The AI Basic Act is a proverbial good news and bad news affair.

On the good news side of things, it contains just about everything, including perhaps the kitchen sink. If you were trying to put together a set of AI laws and wanted to come up with a lengthy laundry list, the AI Basic Act has much of what you would likely conceive of. The bad news is that it is all pretty much broadly stated and lacks pinned-down specifics.

Anytime a law is non-specific, it is surely asking for trouble. The eye of the beholder determines what is in scope and what is out of scope. Those devising and fielding AI are potentially going to find themselves in a guessing game of whether they are within the generalized assertions or beyond them. I noted earlier that the meaning of High-Impact is already a foggy aspect and a bewildering morass.

Another example of what might constitute a violation of the AI Basic Act involves whether an output from generative AI is suitably labeled by the AI maker. This falls under Article 31, “Obligation to Ensure AI Transparency,” and is stipulated in clause #2 as follows:

  • “Where providing GenAI or products or services utilizing GenAI, AI business operators shall clearly indicate to users that the outputs are generated by GenAI.”

Getting AI makers to ensure that their outputs are labeled as being produced by AI is considered a valuable proposition, since people would then know that content posted somewhere online came from AI. But this is problematic because a label can easily be stripped off. Also, what exactly must the label say to comply with this clause? Can the label say “Made by AI,” or perhaps use a tricky acronym such as “AIMT” (AI Made This)?

The clause is unclear on this.
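
To make the ambiguity concrete, consider a minimal illustrative sketch of how an AI provider might attach a disclosure label to generated text. To be clear, the Act does not prescribe any particular labeling mechanism, wording, or metadata scheme; the function, label strings, and metadata structure below are hypothetical assumptions for illustration only.

```python
# Purely illustrative sketch: the AI Basic Act does not prescribe a label
# format, so the wording, function, and metadata scheme below are
# hypothetical assumptions, not anything drawn from the law itself.
from dataclasses import dataclass

# Two candidate disclosure strings; it is unclear which, if either,
# would satisfy the Article 31 transparency obligation.
EXPLICIT_LABEL = "This content was generated by generative AI."
TERSE_LABEL = "AIMT"  # "AI Made This" -- the tricky-acronym scenario

@dataclass
class LabeledOutput:
    text: str         # the generated content itself
    disclosure: str   # human-readable label shown alongside the content
    metadata: dict    # machine-readable marker that travels with the content

def label_output(generated_text: str, disclosure: str = EXPLICIT_LABEL) -> LabeledOutput:
    """Wrap a generative AI output with a disclosure label (hypothetical scheme)."""
    return LabeledOutput(
        text=generated_text,
        disclosure=disclosure,
        metadata={"generated_by_ai": True, "label_text": disclosure},
    )

if __name__ == "__main__":
    result = label_output("Here is a summary of your meeting notes...")
    print(result.disclosure)   # the label is present at the point of generation
    # The stripping problem: once the bare text is copied and pasted elsewhere,
    # neither the disclosure string nor the metadata goes along with it.
    bare_text = result.text
    print(bare_text)           # nothing here reveals an AI origin
```

Even under this generous reading, the compliance question remains: the label lives alongside the content rather than inside it, so downstream copies shed it entirely.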

Furthermore, if just one AI output from one generative AI that is made by an AI maker somehow doesn’t contain a label, does Article 43 on Administrative Fines kick into gear?

Per Article 43, “An administrative fine not exceeding 30 million Korean won shall be imposed on any person who falls under any of the following subparagraphs.” An AI maker could presumably be fined up to 30 million won (currently around $21,000) for one instance. Multiply that by perhaps millions of users, times the scale of hundreds or thousands of outputs, and you suddenly have a quite staggering penalty.
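
To see why that scaling concern looms so large, here is a rough back-of-the-envelope sketch. Note that the Act does not say whether fines would accrue per output, per user, or per adjudicated violation; the counting scenario, exchange rate, and usage figures below are hypothetical assumptions for illustration only.

```python
# Back-of-the-envelope sketch of the scaling worry; every figure here is a
# hypothetical assumption, not anything specified in the AI Basic Act.
MAX_FINE_KRW = 30_000_000   # Article 43 cap per violation (30 million won)
KRW_PER_USD = 1_430         # rough exchange rate, roughly USD 21,000 per violation

def worst_case_exposure(unlabeled_outputs: int, fine_krw: int = MAX_FINE_KRW) -> tuple[int, float]:
    """Total exposure if each unlabeled output were counted as a separate violation."""
    total_krw = unlabeled_outputs * fine_krw
    return total_krw, total_krw / KRW_PER_USD

if __name__ == "__main__":
    # Hypothetical scenario: one million users, each with 100 unlabeled outputs.
    outputs = 1_000_000 * 100
    krw, usd = worst_case_exposure(outputs)
    print(f"{outputs:,} outputs -> {krw:,} KRW (about ${usd:,.0f})")
    # Whether fines would ever be counted this way is exactly the open question.
```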

It’s just unclear and widely open to interpretation.

AI And Mental Health Provisions

I am sadly disappointed to indicate that the AI mental health provisions are equally sparse and vague.

For example, in Article 27 on AI Ethical Principles, the opening clause says that the government might establish and publish AI ethics principles, and if so, it would include the following aspects:

  • (1) “Matters concerning safety and trustworthiness of AI to ensure that the development and utilization of AI does not cause harm to human life, physical well-being, or mental health.”
  • (2) “Matters concerning accessibility to ensure that all people may freely and conveniently use products and services powered by AI technology.”
  • (3) “Matters concerning the development and utilization of AI that contribute to human well-being and prosperity.”

The first element broadly calls for ensuring that AI doesn’t cause harm to human life, physical well-being, or mental health. That aspect is not much to hang your hat on. Unless and until the government decides to derive the AI Ethical Principles, we don’t know what they have in mind regarding protecting people from the adverse impacts of AI on mental health. Indeed, there might not be anything of substance, or the topic might be given short shrift with a bit of handwaving.

This is a far cry from the specifics found in the US state laws that I analyzed, as noted earlier. Those laws spell out particulars. I am not saying that everyone will necessarily agree with what has been specified, and in fact, great debate can be had, but at least there is something there that tries to draw a line. Is some line better than no line? Admittedly, that’s a hard call, depending on whether you like the line and whether you would prefer to stay fluid and not have any line drawn at all.

The World That We Make

Time will tell as to what the AI Basic Act truly portends. Will the call for the establishment and promotion of the many aspects be undertaken? Will there be meat put onto the bone, including providing specifics? Will the specifics be light-handed or heavy-handed? We will need to wait and see.

Let’s end with a big picture viewpoint.

It is incontrovertible that we are now amid a grandiose worldwide experiment when it comes to societal mental health. The experiment is that AI is being made available nationally and globally, overtly or insidiously providing mental health guidance of one kind or another, doing so at no cost or minimal cost, and available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

We need to decide whether we need new laws or can employ existing laws, or both, to stem the potential tide of adverse impacts on society-wide mental health. The reason this is especially tough is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.

The famous philosopher and social theorist Theodor Adorno made this remark: “Vague expression permits the hearer to imagine whatever suits him and what he already thinks in any case.” Whatever AI laws are to be made, it would likely be suitable to ensure they are specific and understandable, rather than vague and indeterminate. Let’s make sure the playing field is leveled out.

Source: https://www.forbes.com/sites/lanceeliot/2026/01/30/south-korea-enacts-global-milestone-of-ai-safety-laws-including-covering-mental-health-impacts/