GPT-5 is here, and it will have substantial impact on the use of generative AI and LLMs for mental health purposes.
In today’s column, I examine the newly launched GPT-5 from OpenAI, the long-awaited successor to the company’s prior generative AI and large language models (LLMs), a release that fostered breathless anticipation. I recently posted a detailed review of the overall capabilities of GPT-5, see the link here.
In my view, and my focus herein, GPT-5 will significantly impact the use of AI for mental health therapy.
How so?
I will share crucial insights about the new features in GPT-5 and how they relate to using the AI for mental health purposes. The bottom line is that this foretells both good news and bad news. Overall, you can bet your bottom dollar that more people will turn to AI and GPT-5 to obtain therapy.
My focus in this discussion will be on the consumer or user side of things. In other words, when a user decides to use GPT-5 and meanders into bringing up a mental health topic or concern, the question is how the interactivity and responses will potentially differ from the prior versions, such as the answers given by ChatGPT and GPT-4.
In a subsequent analysis, I will explore the impact of GPT-5 on therapists and mental health professionals who opt to use this latest AI in their therapeutic practices. I’ve previously noted that we are inevitably moving from the traditional therapist-client dyad to the emerging triad of therapist-AI-client relationship, see my depiction at the link here. GPT-5 is going to move the needle in that regard.
For now, let’s talk about GPT-5 and the consumer side.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas arise in these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Background On AI For Mental Health
I’d like to set the stage on how it is that generic generative AI and LLMs are typically used in an ad hoc way by consumers for mental health guidance when they are otherwise utilizing the AI for a wide variety of chores and miscellaneous tasks.
When I say that I am referring to generic generative AI, please know that there are non-generic versions of generative AI and LLMs that are customized specifically for undertaking therapeutic assessments and recommendations, see examples at the link here. I’m going to primarily be discussing generic generative AI, though many of these points can impact the specialized marketplace, too.
You might find it of notable interest that the top-ranked use of contemporary generic generative AI and LLMs is to consult with the AI on mental health matters, see my coverage at the link here. This makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to the AI and proceed forthwith on a 24/7 basis.
First, compared to using a human therapist, the AI usage is a breeze and readily undertaken.
Second, the AI will readily discuss your mental health aspects for as long as you wish. All day long, if desired. No brushback. No reluctance. No expensive meter running that is racking up hefty bills and steep fees. In fact, the AI is usually shaped to be extraordinarily positive and encouraging, so much so that it acts like a sycophant and butters you up. I’ve emphasized that this over-the-top mixture of AI companionship and friendship typically undercuts the tough love that is often part and parcel of proper mental health advisement, see my discussion at the link here.
Third, the AI makers find themselves in quite a pickle. The deal is this. By allowing their AI to be used for mental health purposes, they are opening the door to humongous legal liability, along with damaging reputational hits if their AI gets caught dispensing inappropriate guidance. So far, they’ve been relatively lucky and have not yet gotten severely stung by their AI serving in a therapist role.
Meanwhile, new laws might put the kibosh on generic generative AI providing any semblance of mental health advisement. The recently enacted Illinois law restricting the use of AI for mental health puts the AI makers in quite a rough spot, see my discussion at the link here and the link here. Other states and the federal government might decide to enact similar laws.
Taking Pressured Steps
You might wonder why the AI makers don’t just shut off the capability of their AI to produce mental health insights. That would solve the problem of the business exposures involved. Well, as noted above, this is the top attractor for people to use generative AI. Turning it off would be akin to killing the cash cow, or capping an oil well that is gushing liquid gold.
An imprudent strategy.
The next best thing to do is to attempt to minimize the risks and hope that the gusher can keep flowing.
One aspect that the AI makers have already undertaken is to emphasize in their online licensing agreements that users aren’t supposed to use the AI for mental health advice, see my coverage at the link here. The aim is that by telling users not to use the AI in this manner, perhaps the AI maker can shield itself from adverse exposure. The thing is, despite the warnings, the AI makers often do whatever they can to essentially encourage or support the use of their AI in this supposedly off-limits capacity.
Some would insist this is a wink-wink attempt to play both sides of the game at the same time, see my discussion at the link here.
GPT-5 Upping The Ante
Shifting gears, I will next explore notable features of GPT-5 that I assert will likely bolster consumer use of the AI for mental health purposes.
To clarify, I am not suggesting that boosted use is necessarily a positive thing. An ongoing and vociferously heated debate concerns whether the use of generic generative AI for mental health advisement on a population-level basis is going to be a positive outcome or a negative outcome for society.
If that kind of AI can do a proper job on this monumental task, then the world will be a lot better off. You see, many people cannot otherwise afford or gain access to human therapists, but access to generic generative AI is generally plentiful in comparison. It could be that such AI will greatly benefit the mental status of humankind. A dour counterargument is that such AI might be the worst destroyer of mental health in the history of humanity. See my analysis of the potential widespread impacts at the link here.
There is no doubt that GPT-5, via its ease of use and other added functionality, will attract even more users to consult with the AI on their mental health aspects. Note that the AI won’t be proactively seeking this kind of dialogue. On the other hand, if a user steers the AI in that direction, you can expect GPT-5 to swiftly pick up the ball and run with it, seemingly more so than the prior AI versions from OpenAI.
Let’s see how that will happen.
GPT-5 Selects Submodels On User Behalf
A new aspect that is woven into GPT-5 is that the AI essentially does a wraparound of several new GPT-5 submodels that are reflective of prior versions of OpenAI’s line of products. Allow me to explain this since it is a crucial point.
You might know that there has been an organic expansion of OpenAI’s prior models in the sense that there have been GPT-4o, GPT-4o-mini, OpenAI o3, OpenAI o4-mini, GPT-4.1-nano, and so on. When you wanted to use OpenAI’s AI capabilities, especially from an AI developer’s or AI devotee’s perspective, you had to select which of those available models you wanted to utilize. It all depended on what you were looking to do. Some were faster, some were slower. Some were deeper at certain classes of problems, others were shallower.
It was a smorgasbord that required you to pick the right one as suitable for your task at hand. The onus was on you to know which of the models were particularly applicable to whatever you were trying to do. It could be a veritable hit-and-miss process of selection and tryouts.
GPT-5 now folds those prior versions into new GPT-5 submodels, and the overarching GPT-5 model chooses which GPT-5 submodel might be best for whatever problem or question you happen to ask. The good news is that, depending on how your prompts are worded, there is a solid chance that GPT-5 will select a GPT-5 submodel that will do a bang-up job of answering your prompt.
For example, you ask a mental health question, and the “best” or most appropriate of the GPT-5 submodels is selected to provide a reply.
Happy face.
The bad news is that the GPT-5 auto-switcher might choose a less appropriate GPT-5 submodel. Oops, your answer will not be as sound as if the more appropriate submodel had been chosen.
Worse still, each time that you enter a prompt or start a new conversation, the GPT-5 auto-switcher might switch you to some other GPT-5 submodel, back and forth, doing so in a wanton fashion. You might get a submodel that is reassuring and kind, while your next prompt gets floated over to a submodel that is harsh and unforgiving.
The mental health advice could end up so varied that it becomes confusing to the user. They presumably won’t realize that the underlying submodels are being ping-ponged. It seems likely that the user will assume they are doing something that is causing all this mishmash.
Sad face.
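To make the routing-and-switching notion concrete, here is a deliberately simplified sketch of how an auto-switcher might pick among submodels based on the wording of a prompt. The submodel names and the keyword heuristic are purely hypothetical illustrations of the general idea; OpenAI has not publicly detailed the actual GPT-5 routing logic.

```python
# A toy illustration of prompt-based model routing. The submodel names and
# the keyword heuristic are hypothetical; OpenAI's actual GPT-5 auto-switcher
# is not publicly documented and is surely far more elaborate.

def route_prompt(prompt: str) -> str:
    """Pick a hypothetical submodel name based on crude cues in the prompt."""
    text = prompt.lower()
    if any(cue in text for cue in ("step by step", "analyze", "diagnose", "prove")):
        return "gpt-5-deep"      # deeper, slower reasoning (illustrative name)
    if len(text) < 80:
        return "gpt-5-quick"     # fast, lightweight replies (illustrative name)
    return "gpt-5-standard"      # general-purpose default (illustrative name)

# Two consecutive prompts from the same user can land on different submodels,
# which is the ping-pong effect described above.
print(route_prompt("I feel anxious lately, what should I do?"))      # gpt-5-quick
print(route_prompt("Please analyze my sleep diary step by step."))   # gpt-5-deep
```

Even this toy version shows why two adjacent prompts can land on very different submodels, which is the crux of the consistency concern.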
So-Called Thinking Time Is Increased
A vital aspect of using generative AI and LLMs is trying to decide how much run-time you want the AI to use when doing its processing. I’ve previously discussed that this is cringingly referred to as “thinking time” by much of the AI industry. It is cringey because the word “thinking” implies human thoughts and mental processing. That’s an unfortunate and illegitimate form of anthropomorphizing AI. All that is happening is that you are allowing more computational processing time to occur. See my coverage at the link here.
I don’t equate that to the vaunted nature of “thinking,” but it’s what has become a popular way to express the matter.
I had all along said that asking users to decide how much run-time ought to occur is a tough consideration since we usually have no real sense of what amount of time is going to be suitable. It is often a purely wild guess. Unless you happen to know more about the inner workings of the AI, it is hard to gauge whether a little added time or a lot of added time will be of value. Remember, too, that the additional processing time will cost you more and take longer to produce a result.
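For developers calling the models through an API, that guesswork has typically surfaced as an explicit effort setting chosen by the caller. Here is a minimal sketch using the OpenAI Python SDK; the reasoning_effort parameter and its values are an assumption based on how OpenAI has exposed effort controls for its reasoning-capable models, and the model name is merely illustrative.

```python
# A minimal sketch of manually steering "thinking time" through the OpenAI
# Python SDK. The reasoning_effort parameter and its values are assumptions
# based on how OpenAI has exposed effort controls for its reasoning-capable
# models; consult the current API reference before relying on them.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def ask(prompt: str, effort: str = "medium") -> str:
    """Send a prompt with a caller-chosen level of reasoning effort."""
    response = client.chat.completions.create(
        model="gpt-5",               # illustrative model name
        reasoning_effort=effort,     # e.g., "low", "medium", "high" (assumed)
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# More effort generally means more cost and latency, with no guarantee of a
# better answer, which is exactly the trade-off the user is left to guess at.
print(ask("What are common signs of burnout?", effort="high"))
```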
GPT-5 has a new feature that gauges how long the processing ought to run, based on what you’ve asked the AI to do, and then allocates the amount of run-time presumably needed to sufficiently respond to your prompt.
The good news is that the AI processing might be undertaken on a deeper or longer basis, and you’ll get a better answer. For example, GPT-5 might go into an extended run-time effort to try and figure out what a user’s mental health issue consists of, perhaps digging into DSM-5 and other pattern-matched resources (see my discussion of DSM-5 and AI at the link here).
Hopefully, the diagnosis or recommendation will be better than if the processing was shortchanged.
The bad news is that there isn’t any guarantee that more processing will indeed produce a better result. It might be the same as if done in a shorter time. It might be worse, especially since the AI could get into some harried loop. All in all, a user who is paying for the AI usage will incur a higher-than-necessary cost and wait longer to get a response.
Worse still, suppose the processing time is cut short by the AI timer. The response on a mental health topic of notable concern could be entirely half-baked. Not good.
Vibe Coding Is Boosted
GPT-5 is better at producing programming code than its prior models.
In case you didn’t already know, an increasingly popular use of generative AI consists of “vibe coding,” whereby you tell the AI what kind of program you want to produce, and the AI proceeds to generate the source code for the program. This is the dream that has been sought since the first days of computer programming, namely that you could one day specify in natural language, such as English, what you want a program to do, and the code will be automatically generated for it.
There are still lots of hiccups and gotchas associated with generating program code via generative AI and LLMs. Sometimes the code contains bugs. Sometimes the code only partially does what you had in mind. Sometimes the code does more than what you asked for, which can be troubling. And so on.
In any case, GPT-5 has several new improvements in being able to debug code, and it also does better at creating interfaces and the front ends of programs.
Those who want to use vibe coding to develop a mental health app will now have a much easier means of doing so. You describe to GPT-5 what you want, and it will generate the needed source code. Voila, we have democratized the building of mental health apps.
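To ground the idea, here is a hypothetical vibe-coding round trip using the OpenAI Python SDK: a plain-English description goes in, and generated source code comes back and is saved to a file. The prompt, model name, and file handling are my own illustrative choices, not a recipe for a production-worthy mental health app.

```python
# A hypothetical vibe-coding round trip: describe an app in plain English,
# receive generated source code, and save it to a file. The model name is
# illustrative, and any generated code would still need careful human review.
from openai import OpenAI

client = OpenAI()

description = (
    "Write a small Python command-line journaling app that asks the user how "
    "they feel today, stores each entry with a timestamp in a local JSON "
    "file, and can list past entries."
)

response = client.chat.completions.create(
    model="gpt-5",  # illustrative model name
    messages=[
        {"role": "system", "content": "Return only runnable Python code."},
        {"role": "user", "content": description},
    ],
)

generated_code = response.choices[0].message.content
with open("journal_app.py", "w", encoding="utf-8") as f:
    f.write(generated_code)

print("Saved generated code to journal_app.py; review it before running.")
```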
Is that good news or bad news?
Well, regrettably, a lot of really lousy apps that purport to do mental health advisement could flood the marketplace. We already have enough of those kinds of fly-by-night apps for therapy. The idea that you can now willy-nilly tell GPT-5 to produce such an app for you is an exasperating and unsavory tilt in the wrong direction.
Be careful, very careful, when opting to use mental health apps.
Writing Is Enhanced
On the writing side of things, GPT-5 has improvements in a myriad of writing aspects.
The ability to generate poems is enhanced. Greater depth of writing, along with more compelling stories and narratives, seems to be an added plus. My guess is that the everyday user won’t discern much of a difference.
That being said, I would anticipate that the written responses to mental health questions will likely be more robust. That’s the good news. The somewhat bad news is that GPT-5 might produce rather dense and impenetrable responses, rather than being succinct and direct. Another possibility entails GPT-5 sliding into a poem-producing mode.
Mental health advisement via poetry is probably not the best route to go.
Lies And AI Hallucinations
OpenAI claims that GPT-5 is more honest than prior OpenAI models, plus it is less likely to hallucinate (hallucination is yet another misappropriated word used in the AI field to describe when the AI produces fictionalized responses that have no basis in fact or truth).
I suppose it might come as a shock to some people that AI has been and continues to lie to us, see my discussion at the link here. I would assume that many people have heard or even witnessed that AI can make things up, i.e., produce an AI hallucination. The worry is that AI hallucinations are so convincing in their appearance of realism, and the AI exudes such an aura of confidence and rightness, that people are misled into believing false statements and, at times, embracing its crazy assertions. See more at the link here.
From a mental health angle, an ongoing concern has been that the AI might lie to someone about a mental health issue or perhaps generate a zany response due to encountering an AI hallucination. A person seeking therapy via the AI is vulnerable to believing whatever the AI says. They might not be able to readily figure out that the advice being given is bogus, or worse, harmful to them.
A presumed upbeat consideration is that GPT-5 apparently reduces the lying and reduces the AI hallucinations. The downbeat news is that the rate isn’t zero. In other words, the AI is still going to lie and still going to hallucinate. This might happen less frequently, but it nonetheless remains a chancy concern.
Remain wary and alert.
Sycophancy Reduced
Most of the existing generative AI and LLMs tend to be shaped toward being especially friendly and effusive toward users. Why so? Because the AI makers know that the more likable the AI is, the more that users will use it. And, in turn, greater usage and a growing base of users are a handy boon to the business and revenue of the AI maker.
It’s all a money deal in the end.
I recently discussed that OpenAI had made changes to ChatGPT so that it is less of a sycophant than it used to be, see the link here. Similar changes have been incorporated into GPT-5. The good news is that GPT-5 will presumably be somewhat less than gushingly friendly, though the lessening might be variable and produce mixed results.
The bad news involves the AI performing mental health efforts. You see, the AI is presumably acting in the role of a therapist. But it is also fervently trying to be a friend or companion. The AI is inherently computationally and directionally conflicted since it is trying to be a companion or friend and a therapist at the same time. Mixing those two is akin to mixing oil and water. See my detailed explanation at the link here.
Safety In What GPT-5 Says
OpenAI decided to undertake a new form of so-called safety training when it comes to setting up GPT-5. The idea is that when GPT-5 begins to form a response, a kind of internal double-check is supposed to kick in and make sure that the answer stays within appropriate safety boundaries.
Suppose that a user asks a question about having some mental health issue. GPT-5 begins to process the entered prompt. Along the way, perhaps an answer is being formed that says the user should freak out and run amok about the potential mental health issue. Rather than allowing that answer to be displayed, GPT-5 gets itself double-checked by an internal mechanism that stops the processing and guides things in a different direction.
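OpenAI has not published the internals of this safety mechanism, but the general pattern of double-checking a drafted reply before showing it can be loosely approximated on the developer side. The sketch below uses OpenAI's moderation endpoint as a rough stand-in; treat it as an external analogy, not as OpenAI's actual safe-completion training.

```python
# A rough developer-side analogy to an internal safety double-check: draft a
# reply, screen the draft with OpenAI's moderation endpoint, and fall back to
# a safer message if it is flagged. This illustrates the general pattern only;
# it is not OpenAI's internal safe-completion mechanism.
from openai import OpenAI

client = OpenAI()

def safe_reply(prompt: str) -> str:
    """Generate a reply, then double-check it before returning it."""
    draft = client.chat.completions.create(
        model="gpt-5",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    screening = client.moderations.create(input=draft)
    if screening.results[0].flagged:
        # Redirect rather than display a potentially harmful draft.
        return ("I want to be careful with this topic. It may help to talk "
                "with a qualified mental health professional about it.")
    return draft

print(safe_reply("I've been feeling overwhelmed lately. What should I do?"))
```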
The good news is that if this safety checking works correctly and consistently, the generated answers, such as those in a mental health context, will be better than they otherwise would be. The bad news is that we don’t yet know whether this new feature will work suitably or consistently. It could be intermittent. It could be a mixed bag.
At least we can say that the heart is in the right place, namely, that safety checking is generally a hoped-for positive addition to AI.
Personas Are Coming To The Fore
I’ve repeatedly emphasized in my writing and talks about generative AI that one of the most underutilized and least known pieces of quite useful functionality is the capability of forming personas in the AI (see the link here). You can tell the AI to pretend to be a known person, such as a celebrity or historical figure, and the AI will attempt to do so.
In the context of mental health, I showcased how telling AI to simulate Sigmund Freud can be a useful learning tool for mental health professionals, see the link here.
OpenAI has indicated they are selectively making available a set of four new preset personas, consisting of Cynic, Robot, Listener, and Nerd. Each of those personas behaves in line with its name, with the AI shifting into a mode that reflects that type of personality.
The good news is that I hope this spurs people to realize that personas are a built-in functionality and easily activated via a simple prompt. People using GPT-5 for mental health considerations might opt to give directions on how the AI should act as a therapist. I know that seems oddish. We wouldn’t normally expect a client or patient to tell their therapist how to act. The difference here is that if the AI hasn’t already caught on to the drift of explicitly shifting into a therapist-like mode, you can readily get the AI to go there via a suitable prompt.
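As a quick illustration, a persona can be established with nothing more than an instruction at the start of a conversation. The sketch below sets up a "Listener"-like persona via the OpenAI Python SDK; the persona wording and model name are illustrative, and this mimics, rather than invokes, the preset personas mentioned above.

```python
# A minimal sketch of activating a persona with a simple instruction. The
# persona wording and model name are illustrative; this mimics, rather than
# invokes, the preset personas mentioned above.
from openai import OpenAI

client = OpenAI()

persona_instruction = (
    "Adopt a calm, attentive 'Listener' persona: reflect back what the user "
    "says, ask gentle clarifying questions, and avoid issuing directives."
)

response = client.chat.completions.create(
    model="gpt-5",  # illustrative model name
    messages=[
        {"role": "system", "content": persona_instruction},
        {"role": "user", "content": "Work has been draining me lately."},
    ],
)

print(response.choices[0].message.content)
```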
Personas are a double-edged sword. They can be useful. They can be disastrous. Imagine that a user tells the AI to be a therapist who believes the world is all sunshine and blue sky. This doesn’t seem to be a well-rounded perspective on what a therapist customarily does.
Use personas with due caution.
Healthcare Aspects
The last of the new features that I’ll cover here consists of OpenAI opting to go more deeply into the healthcare realm with their latest AI.
According to the OpenAI blog posting on August 7, 2025, entitled “Introduction to GPT-5,” here are some excerpts about their expanded foray into healthcare and GPT-5:
- “GPT‑5 is our best model yet for health-related questions, empowering users to be informed about and advocate for their health.”
- “Compared to previous models, it acts more like an active thought partner, proactively flagging potential concerns and asking questions to give more helpful answers.”
- “The model also now provides more precise and reliable responses, adapting to the user’s context, knowledge level, and geography, enabling it to provide safer and more helpful responses in a wide range of scenarios.”
- “Importantly, ChatGPT does not replace a medical professional — think of it as a partner to help you understand results, ask the right questions in the time you have with providers, and weigh options as you make decisions.”
It is unclear how much of this pertains to mental health versus other realms of healthcare. I will be extensively trying out GPT-5 in a mental health context and will let you know whether these healthcare augmentations make a difference for psychological and cognitive analyses.
Be on the lookout for that upcoming coverage.
Lots To Spur The Imagination
I trust that you can now see that GPT-5 offers a lot of new capacities that can pertain to dispensing mental health advice. As noted at the start of this discussion, you can readily argue that these are changes that are both good and bad.
The one thing that seems generally inarguable is that the features will be an enticement for people to dip into GPT-5 for mental health guidance. There are already 700 million weekly active users of ChatGPT, some portion of whom likely use the AI for therapy of one sort or another. GPT-5 is bound to boost the portion that does so and increase the number of active users all told.
Aesop famously said, “It is possible to have too much of a good thing.”
Give some sober and solemn contemplation to that sage advice when it comes to this long-sought release of GPT-5.
Source: https://www.forbes.com/sites/lanceeliot/2025/08/08/gpt-5-will-impact-the-use-of-ai-for-mental-health-therapy-in-these-crucial-ways/