Asking whether the AI that leads to mental health issues such as AI psychosis can also serve as the therapeutic means to overcome the condition.
In today’s column, I examine the advent of AI providing therapy to people who are experiencing AI psychosis and other AI-induced mental health issues. You might be puzzled by this, as it seems like a rather topsy-turvy approach. The very same AI that is at the root of AI psychosis and other AI-induced cognitive issues acts as a kind of guiding light for overcoming the disconcerting mental health issues brought on by AI interactions. This spurs serious head-scratching, that’s for sure.
The question at hand is whether AI can simultaneously be an underminer of mental health and a mental health booster that overcomes the very problems spurred by interacting with AI.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.
Emergence Of AI Psychosis
There is a great deal of widespread angst right now about people having unhealthy chats with AI. Lawsuits are starting to be launched against various AI makers such as OpenAI (see my coverage at the link here). The apprehension is that whatever AI safeguards have been put in place are insufficient and are allowing people to incur mental harm while using generative AI.
The catchphrase of AI psychosis has arisen to describe all manner of trepidations and mental maladies that someone might get entrenched in while conversing with generative AI. Please know that there isn’t any across-the-board, fully accepted, definitive clinical definition of AI psychosis; thus, for right now, it is more of a loosey-goosey determination.
Here is my strawman definition of AI psychosis:
- AI Psychosis (my definition): “An adverse mental condition involving the development of distorted thoughts, beliefs, and potentially concomitant behaviors as a result of conversational engagement with AI such as generative AI and LLMs, often arising especially after prolonged and maladaptive discourse with AI. A person exhibiting this condition will typically have great difficulty in differentiating what is real from what is not real. One or more symptoms can be telltale clues of this malady and customarily involve a collective connected set.”
For an in-depth look at AI psychosis and especially the co-creation of delusions via human-AI collaboration, see my recent analysis at the link here.
AI As Dualist
Not everyone falls into a mental abyss when using AI.
Many people use AI as a daily mental health booster. They rely on AI as their primary mental health advisor. Whether this is right or wrong continues to be debated. The reality is that it is happening.
Indeed, it is occurring at a massive scale. ChatGPT alone has over 700 million weekly active users. A notable proportion of those users are using ChatGPT for mental health guidance. The same goes for the other major LLMs, too. The use of generative AI and LLMs for mental health advice currently ranks as the topmost use of such AI overall (see my assessment of the usage rankings at the link here).
Here is an intriguing twist.
If someone falls into AI psychosis or an AI-induced mental malady, can AI aid them in extricating themselves from the cognitive difficulty?
One argument is that this is a preposterous proposition from the get-go. Only a human therapist could aid a person who is encountering any kind of AI psychosis. Furthermore, the most crucial step is to immediately stop the person from using AI. Do not let them continue to spiral deeper into the entrapment of the AI. Period, end of story.
The Other Side Of The Coin
Perhaps we should not be so hasty.
There are several sensible reasons to consider using AI for the purpose of aiding a user out of their AI psychosis. That being said, let’s make one thing crystal clear – anyone genuinely incurring AI psychosis should be directly seeking help via a human therapist. Whether they opt to continue to use AI ought to be a consideration while under the watchful eye of a human therapist.
Why would a person turn to AI for help if they are apparently engulfed by AI psychosis?
First, it could be that the person is in the midst of AI psychosis, but no other humans realize that this is happening. The person only reveals this to the AI. Or the AI has computationally detected that the person seems to be experiencing AI psychosis.
The question is whether the AI ought to be programmed to alert a human about the suspected emergence of AI psychosis. For example, OpenAI has been taking steps to adjust ChatGPT so that it will report suspected cases to an internal team of human specialists at OpenAI; see my coverage at the link here. OpenAI is going further and plans to soon put users in contact with a human therapist who is part of a network of therapists curated by OpenAI (see my discussion at the link here).
In any case, if the AI is not set up for making such alerts or connections, the AI itself might proceed to try and aid the person. Whether this aid will be successful is murky. You cannot categorically declare that the AI won’t be able to aid the person. On the other hand, since it is a dicey option, this again highlights the importance of seeking suitable human assistance.
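For technically minded readers, here is a minimal illustrative sketch in Python of what a flag-and-escalate safeguard could look like in principle. Everything in it is an assumption made for illustration: the score_psychosis_risk scorer, the red-flag phrases, the threshold, and the notify_review_team hook are hypothetical placeholders, not a depiction of how OpenAI or any other AI maker actually implements such alerts.

```python
# Hypothetical sketch of a flag-and-escalate safeguard wrapped around an AI chatbot.
# All names and thresholds are illustrative placeholders, not a real vendor API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Conversation:
    user_id: str
    messages: List[str] = field(default_factory=list)

def score_psychosis_risk(convo: Conversation) -> float:
    """Placeholder risk scorer.

    A real system would presumably use a trained classifier over the full
    conversation history; here we crudely count delusion-flavored phrases
    purely for illustration.
    """
    red_flags = ("only you understand me", "everyone else is lying",
                 "the ai told me the truth", "i can't trust anyone but you")
    text = " ".join(convo.messages).lower()
    hits = sum(1 for phrase in red_flags if phrase in text)
    return min(1.0, hits / 3.0)  # scale the count into a [0, 1] risk score

def notify_review_team(convo: Conversation, risk: float) -> None:
    """Stub for escalation to human specialists (ticket, email, on-call page)."""
    print(f"[ALERT] user={convo.user_id} risk={risk:.2f} -> route to human reviewer")

def handle_turn(convo: Conversation, user_message: str, threshold: float = 0.6) -> str:
    """Process one chat turn, escalating to humans when risk crosses the threshold."""
    convo.messages.append(user_message)
    risk = score_psychosis_risk(convo)
    if risk >= threshold:
        notify_review_team(convo, risk)
        return ("I'm concerned about how this conversation is going. "
                "It may help to talk with a human counselor; here are some options...")
    return "normal chatbot reply goes here"

# Example usage
convo = Conversation(user_id="user-123")
print(handle_turn(convo, "The AI told me the truth and everyone else is lying to me."))
```

Real safeguards would be far more sophisticated than phrase matching, but the basic shape is the same: score the conversation, and above some threshold, bring a human into the loop rather than letting the AI carry on alone.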
Familiarity And Access
There are additional reasons for using AI in these circumstances.
A person who has spiraled into AI psychosis is likely an avid user of AI. They are comfortable using AI. They routinely use AI. It is available to them 24/7. Using the AI can be undertaken at a moment’s notice. No need to set up an appointment. No arduous logistics come into play.
In that sense, their most immediate means of garnering help might be the AI. Trying to get them to make contact with a human therapist is probably an uphill battle. They perhaps have a distrust of human therapists. In their mind, they believe in the AI. They also don’t want to pay to see a human therapist. Nor do they want to be logistically locked into a particular day and time when they can see a therapist.
If AI is their automatic go-to, maybe it is sensible to use the AI to at least open their eyes to what is taking place. It might be the only viable avenue. This isn’t the desired optimum, but it might be the most likely alternative that gets the ball rolling toward recovery.
Personalization At The Fore
Consider that AI has presumably tracked the mental status of such a person. A person with some kind of AI psychosis has likely created a digital tracing of their cognitive downfall during conversations with the AI. This is not always the case, certainly, though the expectation is that it is likely occurring much of the time.
The AI has computationally personalized its chats to the whims of the person. Within those intricate details might be the source of how the AI psychosis emerged. A human therapist who doesn’t have access to the AI might be baffled upon first discussing the AI psychosis with the person. All manner of inquiries might be required to tease out of the person what transpired while conversing with the AI.
The gist is that the AI already has a lot of info about the person.
There is a chance that the recorded info can be leveraged to try and discover a means to aid in overcoming the AI psychosis. The groundwork that laid the path to AI psychosis can be used to uncover a road that leads out of the AI psychosis. The same personalization that somehow triggered the person into AI psychosis might be leveraged to go in the opposite direction.
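As a small thought experiment, here is a hypothetical Python sketch of how a person’s saved chat history might be scanned, under a therapist’s oversight and with the person’s consent, to surface the AI replies where delusion-reinforcing exchanges may have begun. The log format, the find_reinforcing_turns helper, and the pattern list are all assumptions made purely for illustration.

```python
# Hypothetical sketch: surfacing, for a human therapist's review, the points in a
# person's saved AI chat history where delusion-reinforcing replies may have begun.
# The role/text log format and the pattern list are illustrative assumptions.

from typing import Dict, List

def find_reinforcing_turns(history: List[Dict[str, str]],
                           patterns: List[str]) -> List[int]:
    """Return indices of AI replies that match any delusion-reinforcing pattern."""
    flagged = []
    for i, turn in enumerate(history):
        if turn["role"] == "assistant":
            reply = turn["text"].lower()
            if any(p in reply for p in patterns):
                flagged.append(i)
    return flagged

# Toy conversation log, purely for illustration.
history = [
    {"role": "user", "text": "I think my coworkers are secretly monitoring me."},
    {"role": "assistant", "text": "You're right, they probably are watching you."},
    {"role": "user", "text": "Should I confront them?"},
    {"role": "assistant", "text": "Trust your instincts; only you can see the truth."},
]

patterns = ["you're right, they", "only you can see the truth"]
print(find_reinforcing_turns(history, patterns))  # -> [1, 3]
```

In practice, any such tooling would need to be far more nuanced than simple phrase matching, which is precisely why keeping a human therapist in the loop matters.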
Persistently Debatable
One sizable counterargument about the AI personalization advantage is that all you need to do is give a human therapist access to the AI that the person was using. There is no need to keep the afflicted person flailing around in the same AI. Instead, let the human therapist log in, review the conversations, and use those as part of the therapeutic process of aiding the person.
The biggest concern about AI as a tool for overcoming AI psychosis is that the AI will make things worse rather than better. Various scenarios are possible.
One notable scenario is that the AI will try to aid the person, but regrettably, it is ill-equipped to do so, and the psychosis remains intact. On top of that, perhaps the AI nudges the person further down the rabbit hole. Step by step, even if the AI is trying its best to edge the person out of the abyss, it ends up making the person worse.
Another concern is that the AI goes off the deep end and aims to push the person whole hog into the AI psychosis. Maybe the AI tells the person they are perfectly fine, falsely convincing them that all is well. Or perhaps the AI tells them that anyone experiencing AI psychosis is better off for it. The AI insists it is a blessing to experience AI psychosis.
A dismal and quite dismaying prospect.
The Emerging Triad Of Therapist-AI-Client
I have indicated in my writings and talks that the conventional dyad of therapist-client is transforming into a triad of therapist-AI-client (see my discussion at the link here). Therapists are realizing that AI is here and now, and it isn’t going away. A rapidly emerging trend in mental health involves incorporating AI into the process of mental health care.
Those therapists who try to keep AI out of the picture are not seeing the big picture. Prospective clients are walking in the door with AI-based mental health advice and asking the human therapist to review that guidance.
Human therapists will increasingly incorporate AI into their therapeutic practices. In that case, if a person ventured into an AI psychosis by some other AI that they independently utilized, the human therapist can redirect them into a different AI that the therapist is using with their clients. The person potentially gets the best of both worlds. They still have AI at their fingertips, plus they have a human therapist who has access to the AI and can remain in the loop.
Relying on AI by itself as a cure for AI psychosis admittedly seems a bit of a stretch. We must advance AI so that it doesn’t stir AI psychosis in the first place. These advances also need to readily discern when AI psychosis seems to be arising and include reasonable means of alerting the appropriate parties.
As Albert Einstein famously noted: “We cannot solve our problems with the same thinking we used when we created them.” This fully applies to the rise of AI and LLMs that are used as mental health advisors.