How Trump’s Executive Order On AI Might Impact AI Providing Mental Health Advice

In today’s column, I examine the social media posting by President Trump stating that he intends to issue an Executive Order (EO) later this week that would seemingly override state-specific laws regulating AI.

There has been ongoing chatter about such an EO for quite a while. Whether the EO will actually materialize this time is something we shall soon see. A great deal of speculation surrounds the exact nature of the EO. It could be all-encompassing, or it might allow for numerous exceptions. Meanwhile, commentary on the matter abounds.

One notable perspective that I’d like to consider is how an EO of this kind might impact the emerging realm of AI for mental health. Will the EO be good or bad for society when it comes to the use of AI as a potential mental health advisor?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes; see the link here.

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

The Current Situation Legally

Some states have already opted to enact new laws governing AI that provides mental health guidance. For my analysis of the AI mental health law in Illinois, see the link here, for the law in Utah, see the link here, and for the law in Nevada, see the link here. There will be court cases that test those new laws. It is too early to know whether the laws will stand as is and survive legal battles waged by AI makers.

Congress has repeatedly waded into establishing an overarching federal law that would encompass AI that dispenses mental health advice. So far, no dice. The efforts have ultimately faded from view. Thus, at this time, there isn’t a federal law devoted to AI matters per se. I have laid out an outline of what a comprehensive law on AI and mental health ought to contain, or at least should give due consideration to; see my analyses at the link here and the link here.

The current situation is that only a handful of states have enacted new laws regarding AI and mental health, while most states have not yet done so, though many are toying with the idea. Meanwhile, Congress has also ventured into the sphere, with a much larger aim at AI for all kinds of uses, but nothing has gotten to a formative stage.

That’s the lay of the land right now.

Trump Post About AI EO Intentions

Into this milieu came a posting today on Truth Social by President Trump that said this:

  • “There must be only One Rulebook if we are going to continue to lead in AI. We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS. THERE CAN BE NO DOUBT ABOUT THIS! AI WILL BE DESTROYED IN ITS INFANCY! I will be doing a ONE RULE Executive Order this week. You can’t expect a company to get 50 Approvals every time they want to do something. THAT WILL NEVER WORK!” (December 8, 2025).

Per the posting, there is an expressed intention to publish an EO sometime this week that would override state-specific AI laws. The motivation for this is stated as seeking to avert the dilemma of having each state devise idiosyncratic laws that would then be confounding to AI makers. The posting suggests that the existing and growing morass of state laws on AI means that AI advances would be delayed, undercut, and possibly stymied at the get-go.

It seems judicious to speculate that the EO probably won’t speak directly to AI for mental health. I would guess that the EO covers all manner of AI uses, including, even though perhaps not explicitly named, the realm of AI and mental health. This topic would likely simply be part of an overarching umbrella of AI uses.

I am going to focus solely on the AI mental health ramifications in this discussion. If there is sufficient reader interest, I’ll do an additional posting on the wide array of AI uses that will likely be covered.

The Patchwork Claim

First, let’s unpack the idea that the states are creating a disparate set of AI laws. In the case of AI for mental health, yes, that so far does seem to be the situation.

Unless the states start to gravitate towards a kind of “standard” about regulating AI for mental health, the odds are that each state is going to proceed in whatever direction it deems worthy. I have noted that so far, the states aren’t even copying each other, namely, reusing an enacted law of another state to formulate their own version. Pretty much, each state is starting from scratch.

Ergo, there isn’t a common framework that the states are using, neither by outright design nor by copy-and-paste. It seems likely that this one-at-a-time, ad hoc approach will continue unabated (unless or until a generalized template, such as the one I’ve proposed at the link here, gets traction).

The Burdensome Claim

The next factor to consider is whether the disparate state laws on AI for mental health are in some substantive way creating a burden upon AI makers. When I refer to AI makers, you probably think of the well-known large players such as OpenAI ChatGPT, Anthropic Claude, Google Gemini, Meta Llama, xAI Grok, and so on. Keep in mind that there are also lots of startups and small-sized AI makers that are jockeying for position and eagerly trying to bring new AI innovations to the marketplace.

Is the patchwork of state laws on AI for mental health a burden to AI makers?

I will delicately say that the answer is yes, there is a burden. My basis for saying this delicately is that there is always the consideration of whether the burden is prudent versus out of balance. In other words, you could make the case that a burden might be warranted and that it is a cost associated with gaining a benefit.

Unpacking Burdens

Let’s briefly dip into some of the burdens so you can judge the merits versus shortcomings.

Some of the states that have enacted a law on AI for mental health have stated that generic AI is not to provide any mental health advice at all to the residents of that state. Period, end of story. An AI maker that resides in some other state and makes their generic AI available to all states is presumably going to be subject to that law. They would need to ask the user what state they are in, and then seemingly prevent their AI from veering into mental health aspects.

Of course, a user might lie about which state they are in or do some other trickery to get around the AI screening mechanism.
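To make the screening idea concrete, here is a minimal sketch, in Python, of the kind of state-keyed gate an AI maker might bolt onto a chat pipeline. It is purely illustrative and is not any vendor’s actual mechanism: the state names, keyword list, and function are hypothetical stand-ins, and a real system would need a far more reliable way to detect mental health topics and to verify the user’s location.

```python
# Purely illustrative sketch of a state-keyed screening gate.
# The state entries, keywords, and function name are hypothetical;
# no actual vendor mechanism or statute is being quoted here.

# Hypothetical set of states whose laws bar generic AI from giving mental health advice
RESTRICTED_STATES = {"StateA", "StateB"}

MENTAL_HEALTH_KEYWORDS = ("depressed", "anxiety", "self-harm", "therapy")

def should_withhold_reply(declared_state: str, user_prompt: str) -> bool:
    """Return True if a mental-health-oriented reply should be withheld
    because the user's declared state (taken at face value) restricts it."""
    if declared_state not in RESTRICTED_STATES:
        return False
    # A crude keyword check stands in for a real topic classifier.
    prompt = user_prompt.lower()
    return any(keyword in prompt for keyword in MENTAL_HEALTH_KEYWORDS)

if __name__ == "__main__":
    # The user self-reports their state; as noted above, they could simply lie.
    print(should_withhold_reply("StateA", "Lately I've been feeling depressed"))  # True
    print(should_withhold_reply("StateC", "Lately I've been feeling depressed"))  # False
```

Notice that the gate trusts the user’s self-reported state, which is exactly the loophole mentioned above.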

Another possibility would be for the AI maker to craft multiple versions of their AI. When a person logs into the AI, the designated state is somehow ascertained, and then the suitable version is made available for their use. This would require a lot of added work to devise separate versions, maintain them, update them, and otherwise impose a burden that would not be necessitated if all states had the same laws or if one overarching federal law prevailed.
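As a rough illustration of that multiple-versions approach, an AI maker could keep a per-state policy table and pick the applicable configuration when a person logs in. Again, this is only a sketch under assumed requirements; the policy fields, state entries, and version labels are all made up for the example.

```python
# Illustrative per-state policy table; every field and entry here is hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class StatePolicy:
    allow_generic_mental_health: bool   # may the generic AI discuss mental health at all?
    require_crisis_referral: bool       # must replies include crisis-referral information?
    model_version: str                  # which tuned variant of the AI to serve

# Each differing state law potentially means another entry to build, test, and maintain.
POLICIES = {
    "StateA": StatePolicy(False, True, "base-no-mental-health"),
    "StateB": StatePolicy(True, True, "base-with-disclaimers"),
}
DEFAULT_POLICY = StatePolicy(True, False, "base")

def policy_for(state: str) -> StatePolicy:
    """Select the configuration governing a session, falling back to a default."""
    return POLICIES.get(state, DEFAULT_POLICY)
```

The maintenance cost grows with every divergent state law, which is the crux of the burden argument.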

Impacts On AI Startups

I have a bit of a twist for you to mull over.

One viewpoint is that AI startups in the AI for mental health realm would be especially undermined by these disparate state laws. You see, if they don’t incur the added cost of supporting the state-specific laws, they would face legal repercussions. And, whereas a big AI player might not worry about being pursued by a state and having to pay penalties for violating its law, the little players would either be crushed or never get off the ground. Investors might worry that the legal costs are going to swamp the startup and not want to fund an army of lawyers as part of a startup package.

The other side of that coin is quite intriguing. Perhaps some AI startups would purposefully devise AI for mental health that meets the requirements of a given state. That would be part of their core competency. Rather than fighting the landscape of disparate state laws, they opt to turn it into a business value proposition. The more that the states vary, the better off they are.

Dampening Of AI Advances

We can next get to the idea that disparate state laws on AI might dampen innovation and hamper advancement in AI.

I noted earlier that there is generic AI that undertakes mental health aspects, and there are specialized AIs that perform mental health guidance. When you use ChatGPT, which is a generic AI, it just happens to also provide mental health guidance. This is not its primary goal (though people are using it extensively and widely for that functionality).

Some of the state laws on AI for mental health opt to ban both generic AI and specialized AI instances when it comes to mental health guidance. That is a bridge too far for some proponents who favor using bona fide AI-based mental health support. They worry that this type of blanket ban will discourage researchers and therapists from developing and adopting AI in mental health support capacities.

That would be a hard blow.

The upside of specialized AI for mental health is considerable. If specialized AI for mental health is done well and used as a therapeutic aid by a therapist, the potential boon for coping with societal mental health at scale is immense. Currently, there aren’t enough therapists to handle the burgeoning need for mental health care. Therapists who opt to smartly lean into AI as an add-on to their practice can extend their reach and aid many more clients than they could otherwise. I have predicted that we are transforming from the traditional dyad of therapist-client and rapidly shifting toward a new triad of therapist-AI-client; see my analyses at the link here and the link here.

State laws that seek to squash specialized AI for mental health are perhaps unknowingly trying to do too much in the spirit of wanting to curtail adverse impacts from AI. They are getting out over their skis. It would be one thing to establish limits and controls, but it is quite another to blindly cast all such AI asunder.

Pressures For Federal Law

Suppose that the EO indicates overall that the state laws on AI for mental health are no longer viable. You can certainly assume that the states will fight back and take the matter to court. It could take years for the questions at hand to be determined by the courts.

I would anticipate that some states might decide to rush ahead with preparing their own new laws on AI for mental health. Why would they do so? It doesn’t seem to make sense, considering the promulgation of an EO that says the state laws won’t prevail. One reason might be the hope that a kind of grandfather clause is later established, such that states that already had a law on the books will get to keep it (i.e., only the states that didn’t have a law at the time are left in the lurch). Another might be to instigate a legal battle over states’ rights. Yet another is the possibility that a future president opts to rescind the EO.

Lots of reasons are possible.

Will the EO help or hinder the Congressional path toward laws on AI for mental health?

You could argue that the EO will create intense pressure on Congress to pass something. This is logical because the states presumably won’t have an individual say in the matter of AI and mental health anymore, and there will be a ruckus to put something overarching in place. Whether the overarching law ends up pleasing states or irking them is going to be a battle royale. You can bet money on that.

Existing Laws Still Exist

One of the loudest arguments against establishing new laws associated with AI is the ardent insistence that existing non-AI laws are generally sufficient to handle AI circumstances. The asserted bottom line is that we don’t need to keep adding AI-specific laws. The perspective is that AI is already covered by enough generalized laws. Aiming to add new laws under the guise of AI is a false pretense, they would exhort. For my coverage of that controversial contention, see the link here.

A fact that is incontrovertible is that we are now amid a grandiose experiment when it comes to societal mental health. The experiment is that AI purported to provide mental health guidance of one kind or another is being made available nationally and globally, either at no cost or at a minimal cost, anywhere and at any time, 24/7. We are all guinea pigs in this wanton experiment.

Will we be better off by having access to such AI, or might the AI be shaping mental health at scale in a manner that we will later regret? Policy and laws will have a determinative role in shaping that future.

I am reminded of the famous line by the acclaimed 19th-century French economist Frédéric Bastiat: “When law and morality contradict each other, the citizen has the cruel alternative of either losing his moral sense or losing his respect for the law.”

Source: https://www.forbes.com/sites/lanceeliot/2025/12/08/how-trumps-executive-order-on-ai-might-impact-ai-providing-mental-health-advice/