Seeing clearly the big picture when it comes to the emerging slate of new laws and regulations on AI for mental health.
In today’s column, I continue my ongoing analysis of how AI for mental health is being regulated. Here’s the focus on this occasion. Three distinct perspectives or beliefs lie at the root of the laws now emerging to govern AI for mental health. I’ll lay out the three approaches for you in detail.
First, a quick heads-up to give you some upfront context.
One key viewpoint among policymakers and lawmakers is that AI for mental health should be very tightly controlled, possibly even banned outright. A differing viewpoint at the opposite end of the spectrum is that such laws should be highly permissive, with the aim of letting the marketplace openly determine the limits and constraints on AI for mental health.
A third position occupies a moderate middle ground, seeking just enough restrictions but not too many. This might be labeled the Goldilocks variation. The porridge shouldn’t be overly hot or overly cold. It ought to be just right.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.
This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.
There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice.
Banner headlines in August of this year accompanied a lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm.
For the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.
The Saga Of Regulatory Oversight
Regulators are starting to catch up with the advent of AI for mental health.
I have done a series of in-depth assessments of the mainstay AI mental health laws at the state level, consisting of the recently enacted Illinois law, see my analysis at the link here, the Nevada law, see my analysis at the link here, and the Utah law, see my analysis at the link here.
None of these latest state laws is comprehensive in its breadth of AI mental health considerations. Gaps exist. This means that the public and AI makers are left in the lurch regarding what is permitted versus what is restricted.
There isn’t yet an equivalent federal law. An existing concern is that each state will craft its own idiosyncratic laws, creating a confounding and conflicting patchwork of such laws. A federal law could presumably provide an across-the-board standardized approach. Though numerous attempts at forging a federal law encompassing AI for mental health have been undertaken, the matter is still unresolved and appears to be stuck in limbo for now.
The bottom line is that policymakers, regulators, lawyers, and even AI experts are frequently ill-informed about how best to devise AI-related laws, especially in the case of AI for mental health. Many tough issues need to be wrestled with. Society can benefit from AI that provides mental health guidance, yet society can also be harmed, occurring at a massive scale. Heated controversy is the norm.
My Framework On Policy And Legal Formulation
I have previously provided a comprehensive framework that identifies the crucial elements that need to be considered when crafting policy and laws associated with AI for mental health. See my framework at the link here.
The framework is based on my detailed analysis of the existing state laws that entail AI for mental health. The aim is to provide a one-stop shopping experience. Those policymakers and lawmakers tasked with devising AI for mental health regulations should mindfully consider every nook and cranny of the framework. Doing so will tend to ensure that comprehensive regulatory oversight is being undertaken.
My framework consists of these twelve distinctive categories:
- (1) Scope of Regulated Activities
- (2) Licensing, Supervision, and Professional Accountability
- (3) Safety, Efficacy, and Validation Requirements
- (4) Data Privacy and Confidentiality Protections
- (5) Transparency and Disclosure Requirements
- (6) Crisis Response and Emergency Protocols
- (7) Prohibitions and Restricted Practices
- (8) Consumer Protection and Misrepresentation
- (9) Equity, Bias, and Fair Treatment
- (10) Intellectual Property, Data Rights, and Model Ownership
- (11) Cross-State and Interstate Practice
- (12) Enforcement, Compliance, and Audits
I will include those categories in this discussion about the three disparate viewpoints on regulating AI for mental health.
About The Three Mainstay Approaches
It is readily possible for a policy or law to stipulate a preferred direction within each of the twelve categories, doing so distinctly and with a purposeful aim. I have classified these aims into three distinct camps.
The three mainstay viewpoints are:
- (a) Highly Restrictive Policy/Law: Aim to squarely tighten down on AI for mental health, establish severe restrictions and associated penalties; possibly ban such AI.
- (b) Highly Permissive Policy/Law: Encourage AI for mental health by taking a light-touch approach when it comes to restrictions, having minimal penalties; seek maximally to expand and spur this kind of AI usage.
- (c) Dual-Objective Moderation Policy/Law: Try to achieve a balance between restrictions and permissiveness in the realm of AI for mental health, discouraging the adverse aspects while encouraging the upsides.
The highly restrictive viewpoint wants to close off AI for mental health. On the other side of the spectrum would be the highly permissive viewpoint, which seeks to boost the use of AI for mental health. Somewhere in the middle is the dual-objective viewpoint. This is a moderate stance. It is a delicate and often challenging balance of restrictiveness and permissiveness.
An Example Via Prohibitions And Restrictions
Let’s see how this comes out in actual practice.
Consider the nature of Section 7, which entails prohibitions and restricted practices.
A highly restrictive viewpoint would list a vast array of actions and activities that AI providing mental health guidance is prohibited from undertaking. The AI cannot conduct a diagnosis of a person’s potential mental health conditions. The AI is not allowed to provide mental health recommendations, such as suggesting the use of daily meditation. On and on the list goes.
A highly permissive approach would take a nearly opposite formulation. Very few, if any, prohibitions or restricted practices would be listed in Section 7 of the policy or law. The language would either be an indication that there aren’t many constraints, or the stipulations would be silent on the matter. The omission of restrictions would be conventionally construed as the policy or law being substantively permissive (i.e., “if it doesn’t say you can’t, then you can”).
The dual-objective avenue tries to blend a semblance of restrictiveness and permissiveness. Think of this as a range. If restrictiveness is a 10, and permissiveness is a 1, the balancing perspective can exist anywhere between a 2 and a 9. Thus, the moderate viewpoint is not necessarily a 5. The balance can be skewed toward one side of the spectrum or the other.
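To make this spectrum more concrete, here is a minimal sketch, in Python, of how an analyst might score a proposed policy or law on the 1-to-10 scale within each of the twelve framework categories and then characterize its overall leaning. The category names come from the framework above; the scoring function, cutoffs, and example values are illustrative assumptions on my part, not part of any actual statute or formal regulatory method.

```python
# Illustrative sketch: scoring a proposed AI-for-mental-health policy or law on
# the restrictiveness spectrum described above (1 = highly permissive,
# 10 = highly restrictive, 2 through 9 = the dual-objective middle ground).
# The twelve category names come from the framework; the scores, cutoffs, and
# example are hypothetical.

FRAMEWORK_CATEGORIES = [
    "Scope of Regulated Activities",
    "Licensing, Supervision, and Professional Accountability",
    "Safety, Efficacy, and Validation Requirements",
    "Data Privacy and Confidentiality Protections",
    "Transparency and Disclosure Requirements",
    "Crisis Response and Emergency Protocols",
    "Prohibitions and Restricted Practices",
    "Consumer Protection and Misrepresentation",
    "Equity, Bias, and Fair Treatment",
    "Intellectual Property, Data Rights, and Model Ownership",
    "Cross-State and Interstate Practice",
    "Enforcement, Compliance, and Audits",
]

def classify_leaning(scores: dict[str, int]) -> str:
    """Average the per-category scores and label the overall leaning."""
    missing = [c for c in FRAMEWORK_CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"Missing scores for: {missing}")
    avg = sum(scores.values()) / len(scores)
    if avg > 9:   # mirrors the article's "10" end of the scale
        return f"highly restrictive (avg {avg:.1f})"
    if avg < 2:   # mirrors the article's "1" end of the scale
        return f"highly permissive (avg {avg:.1f})"
    return f"dual-objective middle ground (avg {avg:.1f})"

# Hypothetical example: a law that is strict on prohibitions but moderate elsewhere.
example_scores = {name: 4 for name in FRAMEWORK_CATEGORIES}
example_scores["Prohibitions and Restricted Practices"] = 9
print(classify_leaning(example_scores))
```

In a real analysis, the per-category scores and the cutoffs would themselves be policy judgments open to debate, which is precisely where the controversy tends to arise.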
The Restrictive Policy/Law
Across and throughout the twelve categories, a restrictive policy or law would pound away at the need to keep AI for mental health in a well-constrained box.
Here are some illustrative signals of this:
- Prohibit AI from providing anything that resembles diagnosis, therapy, or psychological treatment when it comes to the realm of mental health.
- Ban unsupervised consumer-facing mental health chatbots and require direct oversight by licensed therapists.
- Require strict pre-market approval or clinical validation for any AI used for mental health purposes.
The Permissive Policy/Law
A permissive policy or law would ensure that there are few restrictions throughout the twelve categories. Plus, the language would go further by explicitly stating that AI for mental health is broadly encouraged.
These kinds of illustrative signals would be included:
- Allow broad consumer access to AI for mental health and omit any enumerated lists of restrictions (by default, all possibilities are generally allowed).
- Permit therapists to use AI for mental health without onerous pre-approval or bureaucratic clearance, so long as they remain responsible for their own licensure obligations.
- Encourage industry self-certification as a viable alternative to stipulated policy or legal provisions.
The Dual-Objective Policy/Law
In contrast to the two other viewpoints, trying to attain a dual-objective policy or law is a tricky matter. It is much easier to devise a restrictive or permissive policy than it is to do so for a dual-objective middle ground. With the two extremes, just push to one side of the spectrum, and you are pretty much where you want to be.
Not so with the dual-objective viewpoint. Where in the middle should the policy or law ultimately land? Did it lean too much toward one end of the spectrum? How can a fair and reasonable balance be articulated?
Some potential balancing examples are along these lines:
- Define tiers of AI for mental health (e.g., low-risk, medium-risk, high-risk), and then focus mainly on the high-risk scenarios.
- Establish a special regulatory agency or advisory board to monitor emerging issues associated with AI for mental health and adjust the rules as AI advances.
- Require transparency, informed consent, and human-oversight mechanisms for the high-risk tier, but avoid comparable over-regulation of the low-risk and medium-risk tiers.
The mantra of the dual-objective camp is to provide guardrails, not handcuffs.
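As a rough illustration of the tiered approach just described, here is a minimal Python sketch of how a dual-objective policy might map risk tiers to obligations, placing heavier requirements on the high-risk tier and a lighter touch on the others. The tier names echo the examples above; the specific obligation fields and values are hypothetical assumptions for illustration, not drawn from any enacted law.

```python
# Illustrative sketch of the tiered, dual-objective idea: heavier obligations
# on high-risk uses of AI for mental health, lighter touch on low- and
# medium-risk uses. The fields and values are assumptions, not statutory text.

from dataclasses import dataclass

@dataclass
class TierRules:
    requires_transparency_notice: bool   # must disclose that the user is talking to AI
    requires_informed_consent: bool      # explicit consent before mental health use
    requires_human_oversight: bool       # licensed-clinician review in the loop
    audits_per_year: int                 # periodic compliance audits

# Hypothetical obligation matrix for a dual-objective policy.
TIER_RULES = {
    "low-risk": TierRules(
        requires_transparency_notice=True,
        requires_informed_consent=False,
        requires_human_oversight=False,
        audits_per_year=0,
    ),
    "medium-risk": TierRules(
        requires_transparency_notice=True,
        requires_informed_consent=True,
        requires_human_oversight=False,
        audits_per_year=1,
    ),
    "high-risk": TierRules(
        requires_transparency_notice=True,
        requires_informed_consent=True,
        requires_human_oversight=True,
        audits_per_year=4,
    ),
}

def obligations_for(tier: str) -> TierRules:
    """Look up the obligations that apply to a given risk tier."""
    return TIER_RULES[tier]

print(obligations_for("high-risk"))
```

The design choice is that the guardrails scale with risk: the high-risk tier carries the full set of obligations, while the lower tiers stay lightly regulated so that beneficial uses are not smothered.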
Where We Are Headed
As more U.S. states decide to establish policies and laws regarding AI for mental health, the question will be whether the respective policymakers and lawmakers intend to be restrictive, permissive, or to take a dual-objective perspective.
That’s up to each state to decide.
The odds are that we are going to end up with quite a mixed bag. One state will go the restrictive route, while a bordering state will go the permissive route. It’s hard to predict how each state will proceed. At this time, since lawsuits are arising against AI makers for their AI mental health slip-ups, it would seem likely that policymakers and lawmakers are going to be inclined toward the restrictive side of these thorny matters.
Permissiveness is probably going to happen only if there are high-profile positives associated with AI for mental health. Suppose that research studies reveal that society is benefiting greatly from AI for mental health. Imagine that heartwarming stories appear in the media about how AI for mental health aided this person or that person and saved lives. All in all, sentiment about AI for mental health could dramatically shift toward an upbeat and permissive atmosphere.
The Condition We Are In
I have stated repeatedly that we are all part of a grandiose experiment that is taking place in real-time regarding AI for mental health. Is the widespread availability of AI for mental health constructive for society? Will AI help at scale to improve our collective mental health? Some argue vehemently that the other side of the coin is more likely. Their view is that we are dangerously allowing a Wild West when it comes to the use of AI for mental health.
A vacuum exists right now concerning how AI for mental health is to be suitably devised and promulgated. Only a few states have chosen to compose and enact relevant laws. Despite their sincerity, even those laws have issues. Nearly all the states do not yet have any such laws on their books. At this time, there isn’t a specific federal law covering AI for mental health.
Consider a final thought for now.
Oliver Wendell Holmes, Jr. famously made this remark about laws: “The life of the law has not been logic; it has been experience.” The laws that are coming down the pike about AI for mental health will be shaped by policymakers and lawmakers. They will determine the future of AI for mental health.
In so doing, they are indubitably also deciding the future of humankind’s mental health. It’s serious business and deserves serious due diligence.