Latest AI Group Chat Features Like Those In ChatGPT Are Fundamentally Changing Mental Health Therapy

In today’s column, I examine a new advancement in generative AI and large language models (LLMs) that enables the use of group chats. A group chat is one in which you and other invited participants engage in an online dialogue while logged into the AI. The twist is that the AI is also an active participant.

One innovative way to leverage an LLM-based group chat involves doing so as a mental health therapy session. Here’s the deal. You and your human therapist log into an AI that provides a group chat capability. Your therapist conducts the therapy session as they normally would. Meanwhile, the LLM is quietly paying attention. When needed or if called upon, the AI will overtly engage in the chat.

Is this a good idea or a questionable practice?

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas creep into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes (see the link here).

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 800 million weekly active users, a notable proportion of whom dip into mental health aspects; see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice.

Banner headlines in August of this year accompanied a lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement. Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm.

For the details of the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Individual Chats Being Upgraded

Shifting gears, let’s discuss the latest advancement in LLMs that consists of enabling group chats.

First, as you know, the typical approach to using generative AI is that an individual logs into the AI and carries on a dialogue with no other human involved. It is just you and the AI. Period, end of story. This is how most people use AI for their mental health guidance: a person logs into their favored LLM and engages in a one-on-one dialogue about their mental health.

Well, times are changing.

The latest advances in LLMs now allow multiple people to be logged into an AI-based dialogue. It goes this way. Someone starts the dialogue. They invite others to join. They can decide who to let in and who to deny entry. These are all the customary actions that you can take with Zoom or any other kind of group meeting capability.

What makes this special is that the AI is also a participant in the group chat.

You can tell the AI to stay quiet and not actively participate. Or you can instruct the AI to be an active participant. Furthermore, you can change your preference throughout the dialogue. One moment you bring the AI into the discussion; the next, you command it to fall back into the background. Generally, the AI is always paying attention during the session and keeps up with whatever the dialogue consists of.
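
To make the mechanics concrete, here is a minimal sketch in Python of how a group chat with an optional AI participant might be orchestrated. To be clear, this is not OpenAI’s actual implementation; the transcript bookkeeping and the PASS-based gating are my own illustrative assumptions, with the standard OpenAI Chat Completions API standing in for whatever the vendor really runs.

```python
# Minimal sketch of a group chat in which the AI is an optional participant.
# The gating logic and message bookkeeping are illustrative assumptions,
# not OpenAI's actual group-chat implementation.
from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set in the environment

transcript = []     # running log of the whole conversation
ai_active = True    # the host can flip this at any moment

def post_human_message(author: str, text: str) -> None:
    """Record a human participant's message; the AI sees it either way."""
    transcript.append({"role": "user", "content": f"{author}: {text}"})

def maybe_ai_reply() -> str | None:
    """Ask the model for a reply only when the AI is set to participate."""
    if not ai_active:
        return None  # the AI keeps "listening" via the transcript, silently
    response = client.chat.completions.create(
        model="gpt-4o",  # any chat-capable model would do
        messages=[{
            "role": "system",
            "content": "You are one participant in a group chat. Reply only "
                       "when you add value; otherwise answer with the single "
                       "word PASS.",
        }] + transcript,
    )
    reply = response.choices[0].message.content or ""
    if reply.strip() == "PASS":
        return None
    transcript.append({"role": "assistant", "content": reply})
    return reply
```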

ChatGPT Has Group Chats

Recently, OpenAI decided to add a group chat capability to ChatGPT.

People are only gradually becoming aware of the new feature. It is still too early to gauge how much people will use it and whether they will embrace group chats in the presence of the AI. That being said, I am confidently anticipating that all the major LLMs will soon have a similar group chat capability. This will become the norm. Any AI that doesn’t provide a group chat feature will be perceived as less capable, and people will likely gravitate towards the ones that do have it.

One might argue that group chat features will become table stakes. AI makers will have to add the feature to their LLMs.

In an OpenAI blog posting on November 13, 2025, entitled “Introducing group chats in ChatGPT”, OpenAI introduced the new capability and identified various salient nuances about how the group chat works (excerpts):

  • “Group chats make it possible to bring people, and ChatGPT, into the same conversation.”
  • “For example, if you’re planning a weekend trip with friends, create a group chat so ChatGPT can help compare destinations, build an itinerary, and create a packing list with everyone participating and following along.”
  • “ChatGPT follows the flow of the conversation and decides when to respond and when to stay quiet based on the context of the group conversation. You can always mention ‘ChatGPT’ in a message when you want it to respond.”
  • “You can set custom instructions for how ChatGPT responds in each group chat, whether that’s sharing more context or giving a specific tone or personality.”
  • “Your personal ChatGPT memory is not used in group chats, and ChatGPT does not create new memories from these conversations. We’re exploring offering more granular controls in the future so you can choose if and how ChatGPT uses memory with group chats.”
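
Notice the bullet point about setting custom instructions for each group chat. Functionally, that maps onto the familiar notion of a system prompt. As a rough sketch (my own framing, not OpenAI’s internals), a per-group instruction can simply be prepended to whatever conversation the model sees:

```python
# Rough sketch: each group chat carries its own custom instructions,
# modeled here as a dict keyed by group id (an illustrative assumption,
# not OpenAI's internal representation).
group_instructions = {
    "trip-planning": "Be upbeat and concise; volunteer itineraries and "
                     "packing lists when they would help.",
    "therapy-session": "Stay quiet unless addressed by name; use plain, "
                       "non-clinical language; never speculate on diagnoses.",
}

def build_messages(group_id: str, transcript: list[dict]) -> list[dict]:
    """Prepend the group's own instructions as a system message."""
    system = {"role": "system", "content": group_instructions[group_id]}
    return [system] + transcript
```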

Using AI-Enabled Group Chats

As per the points noted above, there are lots of ways that people will make use of these new group chat features.

The example of planning a weekend trip is a handy illustration of the value inherent in AI-based group chats. The human participants can discuss what they want to do. The AI will be paying attention. When the humans want the AI to take an action, such as booking flights and hotels, voila, the LLM will do as asked. Nice.

Not everyone will necessarily be elated with having AI as a participant.

I suppose it might seem eerie, almost akin to a Big Brother sci-fi nightmare scenario. The AI is watching your every word. What will the AI do with the discussion? Can the AI tattle on you? What if you say something untoward? The whole scheme perhaps seems unsavory.

On the other hand, you must admit that having the AI as a participant can be quite advantageous. Assignments can be given to the AI. The AI is already up to par on the context of the situation. If you had to separately go access the AI, it wouldn’t be instantly up to speed on what is taking place. The group chat seems highly convenient and altogether beneficial.

Specialized Uses Such As For Mental Health

AI-enabled group chats will ultimately find all sorts of specialized circumstances that will become relatively popular. One such possibility has to do with mental health.

I’ve previously identified that the classic dyad of therapist-client is going to be disrupted and transformed into a triad, namely the therapist-AI-client relationship. AI will be a crucial component of the therapy process. Therapists will include AI as a capability that jointly undertakes the mental health care of their clients and patients.

For my in-depth analysis and predictions about the therapist-AI-client approach, see the link here and the link here.

A group chat feature in AI will further reinforce and accelerate the therapist-AI-client avenue. Right now, it is logistically awkward for a therapist and client to both be in generative AI at the same time. The usual workaround is to have both the therapist and the client log into the AI and pretend to be the same person (one login name, one login password).

The AI doesn’t especially catch on to the fact that two people are carrying on a dialogue. It takes a bit of extra work by the therapist and the client to clue in the AI accordingly. All in all, you don’t see many attempts to have the therapist and client be in the AI simultaneously. Instead, the client uses the AI, and later, the therapist checks into the AI to see what transpired.

It’s an old-time, batch-oriented, one-at-a-time approach.

The New Way

By using an AI-enabled group chat, the therapist and client can directly include AI in the therapy activity.

At the start of the conversation, it is important to make sure that both the therapist and the client are aware that AI is being included. The client should not try to somehow trick the therapist and sneak the AI into the dialogue, nor should the therapist fail to properly notify the client about the presence of the AI.

A therapist will need to decide the degree of AI intervention that they want to have undertaken during the conversation. The AI could be entirely passive and only “listening” to the conversation the entire time. This would be handy as a means of later getting the AI to summarize what took place.

The other extreme would be to tell the AI to intervene at its discretion. The AI might be quiet at times. At other moments, the AI would interject. Perhaps the AI detects that the client seems to be missing the point that the therapist has just made. The AI could actively enter the discussion and clarify what the therapist has said. The AI might even start asking the client various pointed questions to see whether they understand the drift involved. And so on.

Options for the therapy session can be logically arranged into these four graduated cases (a small code sketch follows the list):

  • (1) AI is turned off. You tell the AI not to pay attention to all or a portion of the conversation as it ensues.
  • (2) AI is enabled and silent. You tell the AI to pay attention but remain quiet and do not interject into the dialogue.
  • (3) AI is a constrained active participant. You tell the AI the constraints associated with when the AI can interject, laying out the dos and don’ts.
  • (4) AI is a freely active participant. You tell the AI to go ahead and interact as it might opt to do so, allowing an anything-goes participation.
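
These four cases lend themselves to being encoded as a simple state machine. Here is a minimal sketch; the mode names and gating rules are my own illustrative encoding, not a feature of any particular product:

```python
# Minimal sketch encoding the four graduated participation modes.
# The mode names and gating rules are my own illustrative encoding.
from collections.abc import Callable
from enum import Enum, auto

class AIMode(Enum):
    OFF = auto()          # (1) AI ignores the conversation entirely
    SILENT = auto()       # (2) AI listens but never interjects
    CONSTRAINED = auto()  # (3) AI interjects only within stated dos and don'ts
    FREE = auto()         # (4) AI interjects at its own discretion

def handle_turn(mode: AIMode, message: str, transcript: list[str],
                allowed_by_rules: Callable[[str], bool]) -> bool:
    """Record the turn (unless OFF) and report whether the AI may speak."""
    if mode is AIMode.OFF:
        return False                      # not even recorded for summaries
    transcript.append(message)            # SILENT and above: AI keeps context
    if mode is AIMode.SILENT:
        return False
    if mode is AIMode.CONSTRAINED:
        return allowed_by_rules(message)  # therapist-supplied predicate
    return True                           # FREE: the AI decides on its own
```

The therapist-supplied predicate in the constrained case is where the dos and don’ts would live; the examples that follow illustrate two such rules.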

Let’s consider some of the intriguing and useful possibilities.

Psychoeducation On-The-Spot

One notable use of AI in a mental health group-chat context is to have the LLM explain psychological terminology or concepts.

Imagine that the therapist and client are dialoguing, and all of a sudden, the therapist mentions the acronym PTSD. Perhaps the client doesn’t know what PTSD (post-traumatic stress disorder) refers to or has preconceived notions that aren’t rooted in proper science.

The AI could interject and offer an explanation about PTSD. I’m sure that you are thinking that the therapist could have done likewise. Perhaps the therapist is so accustomed to discussing PTSD that it doesn’t occur to them that the client might not know what it is.

In addition, the therapist saves the effort of having to lay out the particulars of the psychoeducational topic at hand. The AI will likely have a handy means of describing psychological terminology and concepts. This is bound to be customized to the client, in the sense that the AI is paying attention to the vocabulary and aspects the client has been mentioning. The AI would tend to ratchet the explanation up or down based on what the client has already expressed during the dialogue.

The therapist could then simply amplify what the AI has conveyed or make corrections if needed. It is important to realize that the therapist could readily redirect the AI, too. Thus, if the AI waxes on at too great a length, the therapist could tell the AI to shorten the explanation.
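
Tying this back to the constrained mode sketched earlier, a psychoeducation trigger could be expressed as a simple rule: interject only when the therapist uses a clinical acronym that the client has never used themselves. This heuristic is entirely my own illustration of the idea, not how any shipping product decides when to speak up:

```python
# Illustrative heuristic for a constrained psychoeducation interjection:
# flag an all-caps clinical acronym used by the therapist that the client
# has not used so far. My own assumption, not a real product rule.
import re

ACRONYM = re.compile(r"\b[A-Z]{2,6}\b")  # e.g., PTSD, CBT, OCD

def psychoeducation_trigger(speaker: str, message: str,
                            client_vocab: set[str]) -> str | None:
    """Return an acronym worth explaining, or None to stay silent."""
    if speaker != "therapist":
        return None
    for term in ACRONYM.findall(message):
        if term not in client_vocab:
            return term  # candidate for an on-the-spot explanation
    return None

# The client's own vocabulary accumulates as they speak:
client_vocab: set[str] = set()
term = psychoeducation_trigger(
    "therapist", "Your symptoms are consistent with PTSD.", client_vocab)
if term:
    print(f"AI may briefly explain {term}, calibrated to the client.")
```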

Helping A Client With Articulating Their Thoughts

Another handy use of the AI would be to aid a client who seems to be struggling with composing their inner thoughts.

Here’s a demonstrative scenario. A therapist and a client are discussing how the client has been depressed lately. The therapist asks the client to describe the situation in which this is occurring. The client clams up. They want to say something, but they aren’t sure how to word it.

The AI detects that the client is freezing up during the dialogue. At that juncture, assuming the AI has been permitted to engage, the AI might suggest phrasing such as “Are you trying to say something like this…”. Maybe the AI provides examples. This spurs the person to be more open and comfortable in describing their thoughts.
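
How might software “detect” that a client is freezing up? One crude, purely illustrative proxy is response latency combined with reply length; the thresholds below are invented for the example, and a real system would need something far more careful and clinically validated:

```python
# Crude, purely illustrative proxy for a client "freezing up": a long
# pause before replying, or a trailing-off few-word answer. Thresholds
# are invented for the example.
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    text: str
    seconds_since_prompt: float  # delay after the therapist's question

def seems_stuck(turn: Turn, max_delay: float = 45.0,
                min_words: int = 4) -> bool:
    """Flag turns where the client stalled or barely answered."""
    if turn.speaker != "client":
        return False
    stalled = turn.seconds_since_prompt > max_delay
    terse = len(turn.text.split()) < min_words
    return stalled or terse

turn = Turn("client", "I...", seconds_since_prompt=60.0)
if seems_stuck(turn):
    print("AI may gently offer: 'Are you trying to say something like...?'")
```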

Once again, you might be thinking that this is usurping the job of the therapist. Shouldn’t the therapist be the one who is guiding the client toward describing their thoughts? Why has the AI done so?

This brings up a new angle on the transforming process of therapy. The question arises as to when best to utilize the third party now included in therapy, namely the AI. Maybe the client isn’t responsive to the therapist at that moment, but feels more comfortable when prodded by the AI. Great, lean into the AI for that instant.

Does this mean the therapist is bad at their job?

Nope. The AI becomes another tool in the toolkit of the therapist. A therapist versed in leveraging AI realizes that the AI element can be a boost to how therapy is undertaken. For more about best practices for therapists using AI during therapy, see my discussion at the link here.

It’s A Stew That Requires Proper Mixing

There are many more beneficial uses of AI in a mental health group chat setting. A therapist might ask a partner or family member of the client to also participate in the group chat. This becomes a form of group therapy, facilitated via an AI-enabled group chat capability.

Consider yet another angle. Suppose that the therapist and client are of different cultures. The client responds to the therapist but relies on a cultural reference known only to those versed in that culture. The AI interjects and provides a cultural explanation on the fly. The same applies to any natural language differences. The therapist says something in a language that is less familiar to the client. The AI translates instantly.
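
For the language-gap case, the interjection amounts to an on-the-fly translation call. A minimal sketch, again using the standard OpenAI chat API as a stand-in and with prompt wording that is my own:

```python
# Minimal sketch: translate a participant's message on the fly.
# The prompt wording is my own; any chat-capable model could stand in.
from openai import OpenAI

client = OpenAI()

def translate(text: str, target_language: str) -> str:
    """Ask the model for a plain translation and nothing else."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into "
                        f"{target_language}. Output only the translation."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content or ""
```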

I don’t want to leave an impression with you that a group chat feature is a cure-all. It’s not. There are plenty of downsides.

Let’s explore those.

Noting The Key Downsides

First, the therapist and the client are now potentially distracted from their dialogue by having the AI interject from time to time. The AI might disrupt the flow of the conversation. The AI might make a remark that is off-putting. The AI could provide a mental health statement that is false, forcing the therapist to shift attention to the AI and tell the client to ignore what the AI has stated.

Second, the AI must be managed. This requires added effort. Will the therapist be doing the management of the AI? What if the client tries to do so? The use of AI creates a potential overhead that isn’t part of the therapy per se.

Third, over-reliance on AI might arise. A therapist could inadvertently allow the AI to drive the car, so to speak. Rather than keeping their mind on the matter, the therapist starts to allow the AI to do some or maybe a lot of the work during the discourse. I’ve already warned that there is a chance that therapists might become deskilled by an over-reliance on AI as an aid during the therapy process (see the link here).

Fourth, there are privacy concerns and issues associated with therapist-client confidentiality. Most of the major LLMs have online licensing agreements stipulating that the AI maker can inspect your prompts, can use the entered data for further training of their AI, and so on. The bottom line is that you aren’t necessarily guaranteed privacy, and this could raise exposures concerning therapist-client confidentiality. Customized mental health therapy AI-based apps try to overcome those issues by storing data in an encrypted manner and offering other privacy-oriented capabilities (see my coverage at the link here).

AI Is Here To Stay

There is no doubt that AI is here to stay. The use of AI in mental health is going to expand. AI will get deeper and broader across all facets of human mental health. You can bet your bottom dollar on that assertion.

A final thought for now.

The famed French novelist Marcel Proust made this remark: “The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.” In a sense, you could claim that AI is adding new eyes to the nature of therapy and especially in the case of the therapist and client relationship. Therapists are on a voyage of new discovery, and so is the public at large.

May the wind be at our backs and good fortune be found ahead.

Source: https://www.forbes.com/sites/lanceeliot/2025/12/01/latest-ai-group-chat-features-like-those-in-chatgpt-are-fundamentally-changing-mental-health-therapy/