In today’s column, I address the question of whether there is enough talk going on concerning the use of AI for mental health therapy. This inquiry is partially spurred by a similar question about whether there is too much talk or not enough talk about overall mental health in the public sphere.
On the AI side of things, my take is that we sensibly do need to be talking more about the use and impacts of AI for mental health, especially the rapidly expanding role of generative AI and large language models (LLMs). Furthermore, what is being said ought to be notably thoughtful and insightful, rather than confounding or misleading.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS's 60 Minutes (see the link here).
If you are new to the topic of AI for mental health, you might want to consider reading my recent analysis of the field, which also recounts a highly innovative initiative at the Stanford University Department of Psychiatry and Behavioral Sciences called AI4MH; see the link here.
Talking About Mental Health Generally
A recent editorial in Psychiatry Online took on the nagging question of whether talking about mental health can itself be harmful.
Why would discussing the mental health of society be problematic?
Some claim that doing so stirs more of the public to believe they have a mental health condition, versus opting not to bring up the topic of mental health per se. You see, if discussing the topic triggers people toward mental illness, there is a calculus suggesting that we should remain quiet, or at least keep a low profile, rather than set a pebble rolling that ultimately roars down the side of a mountain.
The opinion piece confronting the controversial topic was entitled “Are We Talking Too Much About Mental Illness?” by Daniel Morehead, Psychiatry Online, March 28, 2025, and made these salient points (excerpts):
- “According to an article in The New York Times, a growing number of researchers are suggesting that, while mental health awareness campaigns help educate the public, they can also lead people with mild symptoms to over-interpret and over-diagnose their stress. This, in turn, can lead to more distress, fear, and hesitancy about living a normal life.”
- “Mental illness is staggeringly common and destructive, and woefully undertreated. Is it really possible to talk about it too much?”
- “We are not talking too much about mental health.”
- “We are talking about mental health with too little knowledge.”
- “For the first time in human history, there is widespread recognition that mental health is important. This is an immense achievement for psychiatry and all those who support mental health. Today, the vast majority of the public does support mental health, even if they are not sure just what they are supporting. We should not squander this historically precious opportunity.”
You can plainly see that the editorial emphasized that there should be ongoing discussions about mental health, albeit with an important caveat. The caveat is that such chatter needs to be properly informed and informative. The real danger is perhaps propagating talk that misrepresents the nature of mental health and leads the public on a path that is mistaken and possibly harmful.
AI For Mental Health
Shifting gears, an allied question is whether we are talking too much about the use of AI for mental health.
How so?
If you glance at the latest news headlines, you’ll see lots of coverage of people falling in love with their chatbots, and of others who use generative AI on a moment-by-moment basis as their AI-driven therapist. There is plenty of online and offline chatter relating to AI in mental health.
One angle to this nonstop chatter is that it is fueling more people to make use of AI for their mental health needs. It is a cascading phenomenon that feeds upon itself. The more normalized the topic becomes, the more people are tempted to try it for themselves. Inch by inch, despite any warnings or forebodings noted in the coverage, people are essentially inspired to seek out the use of AI for mental health.
Is that a good result or a foul one?
If the expansion of this kind of AI usage is bad, the thinking is that we should stop talking about the matter. Steer clear of it. Gradually, people will forget that the practice was popular and gaining further traction. Like so many fads in life, the usage will inevitably diminish.
Grabbing The Beast By The Horns
I tend to concur with the Psychiatry Online editorial that said it isn’t that we are talking too much about mental health, it’s that we need to ensure that what is said provides useful and practical insights. The same goes for discussing AI and mental health.
We indeed need to discuss AI for mental health, and equally, ensure that what’s being said is the right stuff. Let’s go ahead and aim to get the story straight on AI for mental health. I’ll cover a few highlights here.
First, we must acknowledge why so many people are turning toward AI as a mental health advisor. The plain fact is that there is an insufficient supply of mental health professionals. The societal demand for psychological guidance readily outstrips the available supply of therapists.
In contrast, AI is pretty much an at-scale option that doesn’t run out of availability since additional computer servers are easily racked and put into ready use.
Second, the cost of human-to-human therapy puts professional care out of reach for many people. Affordability is a vexing consideration. There are billable hours and other fees that come into the picture. Of course, therapists abundantly deserve to get paid for their valiant work. But the costs keep rising, and all sorts of add-on fees keep piling up. Also, therapy can continue nearly indefinitely. The lifelong dollar investment by a client or patient can be staggering.
Contemporary generative AI and LLMs are typically available for free or at a very low cost. AI-to-human therapy is highly affordable.
Third, interacting with human therapists can be a logistical challenge. You need to schedule a visit. Getting a time slot might require a wait. The odds are that the interaction will occur at a time of the therapist’s choosing rather than the client’s or patient’s, i.e., during weekdays and conventional work hours.
Accessing AI is an anywhere and anytime proposition. You can log in at 2 a.m. on a Saturday and do not need to make a reservation or be concerned that you are waking someone up.
The Other Side Of The Coin
Those apparent advantages of leaning into AI for mental health advisement are worthy benefits, but there is a complicated price to be paid. Consider some of the concerns about using AI for mental health.
First, existing generic generative AI is not tailored to perform mental health therapy. People don’t seem to realize that the generative AI they use to help with various everyday tasks is not honed toward giving mental health advice. The AI makers do a bit of a wink-wink and often include a subtle caution about using their AI for mental health, including stating in the licensing agreement that you should not use the AI for that purpose. This is a classic and somewhat sneaky cover-their-bottoms approach.
The reality is that few people notice the wink-wink warnings, and even fewer are aware of the language buried somewhere in the online licensing of the AI. For more on this troubling aspect, see my discussion at the link here.
Second, tailored generative AI that is suitable for mental health therapy is still a work in progress. Researchers and practitioners are trying mightily to build AI for mental health that is fluent like generative AI but that is also predictable and reliable. Previously, such AI was principally composed via the use of rules or expert system capabilities (see my analysis at the link here). Those were deterministic and could be exhaustively tested. Generative AI and LLMs are inherently non-deterministic, using statistics and randomness to make them seem creative and human-like.
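To make that distinction concrete, here is a minimal, purely illustrative sketch contrasting a toy rule-based responder with a toy sampled responder. Neither represents any vendor's actual system; the function names and canned replies are hypothetical. The point is simply that the rule-based version returns the same answer every time and can be exhaustively tested, whereas the sampled version draws from weighted candidates and may vary from run to run.

```python
import random

# Deterministic, rule-based style: the same input always yields the same output,
# which is why such systems could be exhaustively tested.
RULES = {
    "i feel anxious": "Let's try a brief breathing exercise together.",
    "i can't sleep": "Consider keeping a consistent bedtime routine.",
}

def rule_based_reply(user_text: str) -> str:
    # Exact-match lookup; falls back to a fixed prompt if no rule applies.
    return RULES.get(user_text.lower().strip(), "Can you tell me more about that?")

# Generative, statistical style (highly simplified): the reply is sampled from
# weighted candidates, so the same input can produce different outputs.
CANDIDATES = [
    ("Let's explore what's been weighing on you lately.", 0.5),
    ("That sounds hard. What usually helps you cope?", 0.3),
    ("Would it help to talk through a recent example?", 0.2),
]

def sampled_reply(user_text: str) -> str:
    texts, weights = zip(*CANDIDATES)
    return random.choices(texts, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(rule_based_reply("I feel anxious"))   # identical on every run
    print(sampled_reply("I feel anxious"))      # may differ from run to run
```

The non-determinism of the second style is part of what makes generative AI feel fluent and human-like, but it is also what makes its behavior hard to guarantee in a mental health setting.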
There are add-ons to generative AI that try to cope with the underlying issues of non-determinism. Others are opting to build AI for mental health from scratch, making AI foundational models that are shaped for AI mental health from the ground up. See my discussion at the link here.
The Triad Is At Hand
We need to find a blend that weighs the challenges of providing human-to-human therapeutic services against the existing, though spotty, advantages of AI-to-human approaches. One such approach is what I have been referring to as the therapist-AI-client triad. This is an upgrade of the traditional duo or dyad of the therapist-client relationship.
The gist is that therapists can opt to intentionally and sensibly incorporate AI into their guidance practices. By doing so, their clients or patients get the best of both worlds. Meeting with the therapist is still a human-to-human experience. Meanwhile, when the person needs help outside of the limited time with their therapist, they dive into the AI that has been carefully selected and recommended by the therapist.
For more details on the new triad of therapist-AI-client, see my in-depth discussion at the link here.
A word of caution is that if poorly managed, the therapist-AI-client triad can end up as the worst of all worlds. It goes like this. The therapist doesn’t really care about the AI aspects and just does some handwaving that the client ought to use AI. The client proceeds to wantonly make use of AI for their mental health needs. When the therapist and client get together, the bulk of their human-to-human time is spent arguing over what the AI said to do versus what the therapist is saying to do. Ultimately, the human-to-human and AI-to-human advisement gets cross-wired to the detriment of all involved.
Not good.
The crux is that a therapist-AI-client triad works only if the therapist undertakes a determined and mindful approach. Being lax or aloof won’t cut the mustard.
Keep Talking Rightly
The more eyes and ears that we have on the AI for mental health conundrum, the better.
Too few people have any clue that when they are using generic generative AI, they are voluntarily taking part in a massive, unplanned, and unfettered experiment, serving as a kind of guinea pig. We don’t yet know what the long-term impacts of using generic LLMs as mental health advisors will be. What will the population-scale impact be?
Regrettably, only time will tell.
One ardent claim is that something is better than nothing. In other words, if people don’t have any reasonable chance at getting therapy via conventional means, by gosh, using AI is a chance for them to get a semblance of therapy, no matter how incomplete or shallow it might be right now. The allied concern is that, given nagging problems such as emitting confabulations (the so-called AI hallucinations) and acting as an over-the-top sycophant, generic AI is potentially seeding new problems while perhaps solving others.
A final thought for now.
Glenn Close famously made this pointed remark about mental health: “What mental health needs is more sunlight, more candor, more unashamed conversation.” I extend a laudable tip of the hat to that rousing commentary.
Allow me to suggest, in a similar tone, that what AI for mental health needs is bright sunlight, abundant candor, and more forthright and truthful conversation about the thorny and increasingly vital direction of this exciting use of technology.