In today’s column, I am continuing and extending my ongoing coverage of the use of generative AI for mental health advisement, see my prior discussions and analyses on this topic at the link here and the link here, just to name a few. Readers have asked me to clue them in as to the latest trends in this particular realm. I will be answering questions that have come my way and will try to proffer insights on a wide variety of new and emerging elements pertinent to this rapidly evolving field.
Let’s jump right in.
In case you didn’t already know, the use of generative AI for performing mental health tasks is an extraordinarily heated and controversial topic.
Here’s why.
Some believe that only a human therapist or clinician can suitably aid other humans with their mental health concerns. To them, the idea of using a passionless, human-soul-devoid AI system, a so-called robot therapist or robo-therapist (which nowadays means a chatbot or generative AI therapist), seems nutty and completely off the rails. Human-to-human interaction is presumed to be the only way to cope with mental health considerations.
There is another side to that coin.
A reply or retort is that AI, and especially the latest in generative AI, can do wonders when it comes to aiding a human who is seeking mental health insights. A person can use the generative AI at any time of the day or night since the AI is available 24×7. No need to somehow hold onto your mental anguish or angst until you can get access to your needed therapist. Furthermore, using a generative AI mental health advisor is likely going to cost a lot less than using a human one. This opens up the possibility of gaining access to mental health advisement for those in this world who otherwise could not afford such helpful aid.
In an article published in Psychology Today, eye-opening stats were identified on how widespread mental health issues are in our society today:
- “Today, 21% of US adults reported experiencing a mental illness, and one in ten youth report mental illness severely impacting their life. Yet, only one mental healthcare professional currently exists for every 350 people. Trained on clinical data, generative AI could aid in psychiatric diagnosis, medication management, and psychotherapy. The technology could act as a patient-facing chatbot or back-end assistant that provides the physician with insights garnered from its large language model (LLM) processing capabilities.” (source is entitled “Generative AI Could Help Solve the U.S. Mental Health Crisis” by author Ashley Andreou, Psychology Today, March 9, 2023).
The gist though is that we can civilly agree that there is a mental health challenge facing the country and that we ought to be doing something about it. If we do nothing, the base assumption is that things are going to get worse. You can’t let a festering problem endlessly fester.
You might have noticed in the aforementioned stats that there is a claimed paucity of available qualified mental health professionals. The belief is that there is an imbalance in supply and demand, for which there is an insufficient supply of mental health advisers and an overabundance of either actual or latent demand for mental health advice (I say latent in the sense that many might not realize the value of seeking mental health advice, or they cannot afford it, or they cannot logistically access it).
Into the void steps the use of generative AI.
A more formal-sounding phrase for this is digital mental health interventions (DMHI). That’s a broader term. Allow me to elaborate. For many years there has been software used in some fashion or another when it comes to mental health activity. A therapist might use the computer to keep track digitally of their notes about a patient. Those notes might be scanned by a computer algorithm to see if any notable patterns can be found. Lots of those kinds of uses have existed and continue to exist today.
The big difference in our new world of pervasive generative AI is that people are now leaning into generative AI for mental health advisement without necessarily any human therapist in the picture. Step by step, people are getting further and further away from the human-in-the-loop advisory on mental health considerations. Whether we like it or not, the fluency of generative AI and its ease of access are turning the tide toward AI-only mental health advisement.
Another reason to do so is that people at times feel more at ease conferring with AI than they do with a fellow human being, as noted in this posted remark: “Even for less structured therapies, some data suggest that people will share more with a bot than a human therapist. It relieves concerns that they are being judged or need to please the human therapist. And for a generation of digital natives, the appeal of a human therapist may not be the same as it was for their parents and grandparents” (source is entitled “Generative AI and Mental Health”, June 2023, Microsoft AI Anthology online, Tom Insel, M.D.).
The same logic as to preferring generative AI or companion AI over a breathing human interaction includes these salient points: “First, consumers may not want to associate themselves with stigma around mental health. Second, they may not be able to afford professional therapy or may have had negative experiences of mental health providers or psychotherapeutic treatment options. Third, they may face barriers to accessing therapy. Fourth, they may not recognize they have a mental health problem in the first place. Finally, the use of companion AIs by individuals with mental health issues is facilitated by the ease with which consumers may anthropomorphize and ascribe mental states to them” (source is Working Paper 23-011, “Chatbots and Mental Health: Insights into the Safety of Generative AI”, Julian De Freitas, Ahmet Kaan Uğuralp, Zeliha Uğuralp, Stefano Puntoni, Harvard Business School).
Given all those reasons to pursue the advent of generative AI for mental health, what reasons suggest that doing so might be detrimental?
There are lots of concerns and legitimate downsides.
First, we don’t know whether this use of generative AI is safe. A person might opt to use generative AI and get wacky outputs due to an error or an AI hallucination (that’s a common phrase these days, which I don’t like since it misleadingly anthropomorphizes what is really the AI computationally making something up, see my discussion at the link here). If the person and the generative AI are the only ones conversing, there is not necessarily a means to realize that the person has been given bad advice. The person might act on advice that leads to their own self-harm. Bad news.
Second, we don’t know whether the person will incur improvements in their mental health as a result of conversing with the generative AI. An argument is made that we need to do more empirical studies of how people react to generative AI that is giving mental health advice. One possibility is that the AI leads the person astray (as mentioned in my first point above). Another is that the AI provides no substantive benefit to the person. In that sense, they are potentially wasting time (and money) on the generative AI that would be better expended with a human therapist.
Third, a person might become dependent upon the generative AI and go into a mole mode whereby they no longer particularly interact with humans. They become fully engaged or immersed with the AI and forsake dealing with humans. The generative AI becomes akin to an addictive drug, see my discussion at the link here. When I bring up this point, I usually mention the (spoiler alert) use of Wilson in the famous movie Cast Away that starred Tom Hanks (he becomes fixated on an inanimate object as though it is a living being).
Fourth, right now, there aren’t many rules or regulations strictly governing the use of generative AI for mental health advisement purposes. The lack of “soft law” such as AI Ethics and “hard law” such as AI laws on the books about this expanding area is making some deservedly queasy about a Wild West when it comes to such uses of AI. A counterviewpoint is that if we try to clamp down on this usage, or do so prematurely, we will undercut the innovation and benefits that will accrue from this utilization.
All in all, the general view seems to be expressed in this quote: “Therapy apps are incorporating AI programs such as ChatGPT. But such programs could provide unvetted or harmful feedback if they’re not well regulated” (source is an article entitled “AI Chatbots Could Help Provide Therapy, but Caution Is Needed”, Sara Reardon, June 14, 2023, Scientific American).
Should we allow more time to play out and see how things shake out?
Or should we swiftly put in speed bumps and other precautions, doing so before too much of the horse is out of the barn?
A true conundrum.
I’ve now provided you with a quick foundation on this topic to bring you up to speed on the overall matter. I will next shift into a rapid-fire mode of sharing with you a variety of issues, problems, concerns, opportunities, challenges, and the like that I have been asked about or have come up with on this vexing topic.
Encounters Of The First Or Third Kind
I’ll get started with a big-picture perspective.
Consider the range in which people might encounter generative AI for mental health purposes:
- Intentional use of generic generative AI for mental health advisement. People who knowingly use generic generative AI for mental health advisement and guidance purposes.
- Inadvertent mental health advisement usage of generic generative AI. People who are using generic generative AI and have landed in mental health uses by happenstance.
- Use of a purported mental health app that is silently connected with generative AI. A person signs up to use a mental health app that turns out to rely on generative AI in the back end, without the person being aware that generative AI is being used.
- Purported mental health app that touts it is generative AI-based or driven. People are attracted to using a mental health app due to the claim that it uses the latest and greatest in generative AI (which might be a valid claim or mainly malarky).
- Twist — Role reversal when generative AI might secretly be impacting mental health. Mental health repercussions can presumably arise via the use of generative AI even though people don’t realize what is happening to them (I’ll explain this momentarily below).
- Etc.
Let’s briefly unpack those points.
Some people might decide that they wish to have mental health guidance and that one means to do so would be via using generative AI. They are knowingly seeking out generative AI for that purpose. They might not fully realize the ramifications of their path, but they at least are cognizant of their intentions.
Meanwhile, there are some people who, upon using generative AI, fall into using the AI for mental health advisement. This wasn’t on their bucket list. They did not seek to use the generative AI in hopes of getting mental health assistance. Maybe they were using the generative AI to write their memos at work and to their surprise the generative AI out-of-the-blue mentioned to them that they seem to be overly stressed out. The next thing you know, they begin to engage in a dialogue as though they are getting mental health advisement.
That is the slippery slope angle.
Another possibility is that someone opts to use a purported mental health app, doing so fundamentally to get mental health advisement. The person wasn’t thinking about generative AI and maybe doesn’t even know what generative AI is. In any case, suppose that the mental health app is relying upon generative AI as a back-end tool. Whether the person realizes it or not, they are now using generative AI.
As an aside, we will rapidly be witnessing the infusing of generative AI into mental health apps. The competitive juices of mental health app makers wanting their apps to stand out in an increasingly crowded marketplace are prodding this trend. The app maker might tout to the high heavens that they are using generative AI. It makes basic sense to do so. The idea of hiding or being silent about something as hot as generative AI would seem businesswise foolhardy. Eventually, the inclusion of generative AI will become the norm. At that juncture, competition will shift to nuances of the generative AI, such as which versions are better at doing mental health advisement than others.
All of those forms of using generative AI for mental health are pretty much apparent and somewhat out in the open.
Consider a more subliminal concern.
A notable twist is that the very use of generative AI might itself be construed as an activity that impacts mental health. Any use. No matter how you are using generative AI. The assertion here is that even just asking generative AI the blandest of questions or doing anything in generative AI is going to have some form of mental health impact on those people using the AI.
If that last possibility is valid, it implies that all those hundreds of millions or more of people presumably using generative AI today are all part of a grand experiment. We are all guinea pigs. We have been provided with generative AI and are blissfully using it without realizing that our mental health is being impacted.
Scary?
Unsettling?
Well, if you judge that the impact is undercutting or ruining our mental health, you would say that this is an atrocious situation (imagine, too, the lawsuits down the road if that turns out to be the case, whoa, say goodbye to those AI makers). Big sad face. A smiley face perspective is that maybe generative AI is enhancing mental health. The claim is that the use of generative AI will automatically boost your mental health, regardless of how you use the AI. That is admittedly a strikingly optimistic viewpoint.
Time will tell how things ultimately pan out.
Going Generic Versus Specialized
When you use generative AI such as ChatGPT, Bard, Claude 2, GPT-4, and so on, you are essentially using what I coin generic generative AI. I say this because the generative AI was initially data-trained on information scanned across the breadth of the Internet and has a broad semblance of computational pattern-matching about what humans have expressed in writing. You might suggest that the generative AI is a jack-of-all-trades and not a specialist per se on any particular topic or subject matter.
There are ways to push generative AI into a specific domain. You can feed in additional info that covers, say, the details of our laws and thus aim to have a legal-focused generative AI (something I’ve done and have discussed, see my coverage at the link here). You can start from scratch and build a generative AI that is honed toward a given domain. This might be done in the medical domain if you want generative AI that is specific to a particular medical specialty. And so on.
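To make that idea concrete, here is a minimal sketch of one common way to nudge a generic model toward a domain, namely retrieval augmentation, in which curated domain material is looked up and folded into the prompt. Everything here is an illustrative assumption rather than any vendor's actual method: the `call_llm` function is a stand-in for whatever generative AI backend is in use, and the tiny keyword-overlap retriever would, in practice, be replaced by proper embeddings over vetted clinical content.

```python
# Minimal sketch: steering a generic LLM toward a domain via retrieval augmentation.
# All names here (call_llm, CORPUS) are hypothetical stand-ins, not a real vendor API.

CORPUS = [
    # A curated, vetted set of domain passages would go here.
    "Cognitive behavioral techniques focus on identifying and reframing unhelpful thoughts.",
    "Sleep hygiene guidance: consistent schedule, limited late caffeine, reduced screen time.",
    "Grounding exercises such as paced breathing can reduce acute feelings of anxiety.",
]

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Toy retriever: rank passages by word overlap with the query (real systems use embeddings)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever generative AI backend is being used."""
    return f"[model response conditioned on: {prompt[:80]}...]"

def domain_tuned_answer(user_question: str) -> str:
    context = "\n".join(retrieve(user_question, CORPUS))
    prompt = (
        "You are assisting with general wellness information only, not diagnosis.\n"
        f"Reference material:\n{context}\n\nUser question: {user_question}"
    )
    return call_llm(prompt)

print(domain_tuned_answer("I keep having anxious thoughts at night and can't sleep."))
```

The design choice being illustrated is simply that specialization can come from the material surrounding the prompt, not only from retraining the model itself.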
Here is where I am taking you.
Consider these two categories related to mental health and generative AI:
- (1) Generic generative AI for mental health advisement. This consists of everyday generic generative AI that is asked to provide mental health advisement and contains no special provisions for doing so.
- (2) Customized or tailored generative AI for mental health advisement. This consists of generative AI that has been specially set up or otherwise data-trained for doing mental health advisement.
I would argue that we are mainly in the first category right now. People approach generic generative AI and try to get it to aid them for mental health purposes. The generic generative AI relies on a wide swath of overall data training and doesn’t necessarily have much in-depth capability in this realm. It is a generalist tool that is computationally pontificating about mental health.
We will gradually see efforts to deepen the generative AI for mental health advisement purposes. On the one hand, this might provide a greater hope that the generative AI can do so to a higher level of confidence. At the same time, you’ve got to assume that the potential liability for the AI maker or AI tuner is going to rise too. They are putting out there that their generative AI is tailored or customized for the mental health advisement task.
Accountability is coming soon.
Biggie Accountability Looming
Speaking of accountability, are AI makers that provide generative AI already sitting atop a dangerous hill by allowing their generic generative AI to be used for mental health purposes?
I’ve covered previously that most people using generative AI are failing to read and abide by the software license agreements, see my analysis and warning at the link here. In that coverage, I examine closely the licensing usage policies of OpenAI, as an exemplar, which are the rules you agree to when you opt to use ChatGPT, GPT-4, and other OpenAI systems. Most of the other AI makers have similar licensing stipulations.
Take a look at these noted conditions (via the OpenAI Usage Policies posted online):
- “Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition.”
- “OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.”
- “OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.”
- “Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.”
- “Consumer-facing uses of our models in medical, financial, and legal industries; in news generation or news summarization; and where else warranted, must provide a disclaimer to users informing them that AI is being used and of its potential limitations.”
You might interpret those provisions to pertain to using generative AI for mental health purposes (well, that depends heavily on one’s interpretation).
If someone uses the generative AI for mental health advisement, presumably they are violating those licensing terms, though there is a lot of legal wrangling that can occur about whether the wording plainly covers that or not. Would that get the AI maker off the hook if they are at some point sued for having supposedly provided their generative AI for mental health advisement?
One claim would be that the person using the generative AI violated the stated stipulations. Thus, the burden of what they did is on their shoulders. Period, end of story.
A counterargument would be that the stipulations were fuzzy and did not provide adequate warning. If the rules aren’t being diligently enforced, a further argument would be that the rules are hollow and have no substantive enactment or enforcement.
Another angle would be whether the generative AI was devised to directly warn people when they started into a mental health-related conversation. An AI maker can have the generative AI detect that a mental health interaction seems at play. This then might be either stopped in its tracks or at least an alert given to the person using the generative AI to desist from doing so.
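As a rough illustration of what such a detection-and-warning provision could look like, here is a hedged sketch of a pre-processing guardrail that screens prompts for mental-health-related wording and either refuses the request or prepends a reminder. The keyword list, the disclaimer wording, and the `forward_to_model` function are all assumptions for illustration; an AI maker would presumably rely on a trained classifier rather than keyword matching.

```python
# Sketch of a guardrail that screens prompts before they reach the model.
# Keyword matching is a crude placeholder for a trained topic classifier.

MENTAL_HEALTH_TERMS = {"depressed", "anxiety", "panic", "therapy", "hopeless", "self-harm"}

DISCLAIMER = (
    "Reminder: this AI is not a licensed mental health professional. "
    "Consider reaching out to a qualified human therapist."
)

def looks_like_mental_health(prompt: str) -> bool:
    """Return True if the prompt appears to involve a mental health topic."""
    words = set(prompt.lower().split())
    return bool(words & MENTAL_HEALTH_TERMS)

def forward_to_model(prompt: str) -> str:
    """Hypothetical stand-in for the underlying generative AI call."""
    return f"[model reply to: {prompt[:60]}...]"

def guarded_request(prompt: str) -> str:
    if looks_like_mental_health(prompt):
        # Option A: refuse outright. Option B (shown): warn the user, then proceed.
        return DISCLAIMER + "\n\n" + forward_to_model(prompt)
    return forward_to_model(prompt)

print(guarded_request("I feel hopeless lately and cannot sleep."))
```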
A dual-edged sword goes with this. If the AI maker doesn’t have those warning or detection provisions, they might contend that the use of their generative AI for mental health purposes was not on their radar. They can try the “we didn’t know” defense. Meanwhile, if they did do something explicit, it could be argued that they knew their generative AI was being used for this purpose and did not take sufficient steps to protect consumers or users of the AI. The ball bounces in many directions.
Lawyers will have a field day with this.
For my predictions about what is coming for lawyers, law firms, judges, and society overall as we see more and more AI arising and encounter a plethora of thorny legal issues, see the link here and the link here, for example.
Privacy And Confidentiality Are Out The Window
You decide to use generative AI for mental health advisement.
As you sit there for hours, you are pouring out your heart. All manner of tales about your childhood are conveyed to the generative AI. It is perhaps the most intimate reveal of your personal demons that you have ever recounted. You would never tell this to another human. In your mind, only you and the computer know what you’ve said.
Oopsie!
As I’ve covered previously and repeatedly in my column, see the link here for example, most of the AI makers state in their licensing agreements that you have no guarantee of privacy or confidentiality. They reserve the right to examine whatever you have entered into the generative AI. Their human AI developers and testers might see what you’ve provided as prompts.
Plus, the other striking possibility is that they will use your prompts to do further tuning of their generative AI. Your heartfelt words will become fodder for computational pattern-matching. It is conceivable that your actual words might appear at some later point in other users’ sessions, though they are unlikely to be attributed to you by name. That being said, there is a chance that some of your personally identifying info might linger in the pattern-matching and a reveal of your personal identity could occur (low odds, but still possible).
I assess that most people using generative AI are not aware of this lack of privacy and confidentiality. It is easy to overlook. Trust is mistakenly accrued over repeated usage. For example, you use generative AI to write an essay about Abraham Lincoln. The essay is really good. You use the generative AI for a variety of other tasks. You increasingly build trust in the AI. When it comes to pouring out your heart, this trust misleads you into believing that you have a secret pact with the generative AI.
Don’t fall into that trap.
Autonomous And Semi-Autonomous Driving Of Mental Health
A matter of intense debate concerning mental health advisement involves the generative AI acting on its own versus acting in concert with a human therapist.
Let’s go ahead and lay this out as two major distinctions:
- (1) Autonomous — Generative AI mental health advisement by AI-only. Generative AI that is providing mental health advisement and doing so without any morsel of human-therapist collaboration.
- (2) Semi-autonomous — Generative AI mental health advisement coupled with therapist collaboration. Generative AI provides mental health advisement in conjunction with a human therapist fully or partially in the loop.
I liken this crudely to self-driving cars, another topic that I’ve covered extensively, see the link here.
There are self-driving cars that are truly self-driving in the sense that the AI autonomously controls and drives the autonomous vehicle (known as Level 4 and Level 5, see the link here). Today’s cars are pretty much semi-autonomous in the way they are driven (usually Level 2 and Level 3). A human driver must be present in the driver’s seat. The control and driving of the car vary from the human doing very little to the human having to totally take on the driving task.
The use of generative AI can be typified as being autonomously used for mental health purposes when the person using the AI is doing so without any human therapist in the loop. The contrasting variation is when a human therapist is in the loop and advising the person, in addition to the use of the generative AI (a semi-autonomous form of usage).
Some insist that we ought to team up this kind of generative AI with mental healthcare professionals all of the time, working collaboratively (this is already being done at times by using mental health apps that don’t have AI). A mental healthcare professional meets with and interacts with a client or patient, and then potentially encourages them to use a mental health app that could further assist. The app might have internal tracking that can be provided to the human therapist. The app is available 24×7, and the human therapist is routinely kept informed by the computer system, along with the human therapist meeting face-to-face or online remotely with the person as needed and when available.
The human in the loop seems a lot more reassuring. If the generative AI goes astray, presumably the human therapist will be able to set things straight with the patient. We might be able to get the best of both worlds, as it were.
Do not falsely assume that this is a risk-free trouble-free approach.
Consider these points made by the American Psychiatric Association:
- “Given the regulatory grey area, expansive data use practices of many platforms, and lack of evidence base currently surrounding many AI applications in healthcare, clinicians need to be especially cautious about using AI-driven tools when making decisions, entering any patient data into AI systems, or recommending AI-driven technologies as treatments.”
- “Overall, physicians should approach AI technologies with caution, particularly being aware of potential biases or inaccuracies; ensure that they are continuing to comply with HIPAA in all uses of AI in their practices; and take an active role in oversight of AI-driven clinical decision support, viewing AI as a tool intended to augment rather than replace clinical decision-making” (source is the online posting entitled “The Basics of Augmented Intelligence: Some Factors Psychiatrists Need to Know Now”, American Psychiatric Association, June 29, 2023).
Likewise, here is what the American Psychological Association has to say:
- “In psychology practice, artificial intelligence (AI) chatbots can make therapy more accessible and less expensive. AI tools can also improve interventions, automate administrative tasks, and aid in training new clinicians. On the research side, synthetic intelligence is offering new ways to understand human intelligence, while machine learning allows researchers to glean insights from massive quantities of data. Meanwhile, educators are exploring ways to leverage ChatGPT in the classroom.”
- “Psychology practice is ripe for AI innovations—including therapeutic chatbots, tools that automate notetaking and other administrative tasks, and more intelligent training and interventions—but clinicians need tools they can understand and trust. While chatbots lack the context, life experience, and verbal nuances of human therapists, they have the potential to fill gaps in mental health service provision” (source is the online posting entitled “AI is changing every aspect of psychology. Here’s what to watch for” by Zara Abrams, July 1, 2023).
One vital consideration is whether a human therapist who is supposed to be the eagle or hawk watching over the generative AI will do so diligently and appropriately. There is a strong temptation to potentially let the generative AI roam freely. The effort to check and double-check the interactions that the generative AI has had with a client or patient could be mind-numbing and laborious.
In short, yes, coupling a human therapist with the generative AI for mental health advisement seems a worthwhile and perhaps, for some, a strictly needed requirement. At what cost? To what benefits? Will this stymie efforts to extend and make available AI mental health advisement? Imagine if every such usage of generative AI for this purpose was legally required to include a human therapist.
The emphasis is that we have two such avenues of separate but crucial importance:
- (1) Human therapist that chooses (or not) to use generative AI to augment their mental health practice. This is something of their own to decide. They can believe that generative AI helps. Or they can believe that the generative AI is not worthwhile and opt to not use it with their patients. Note that we might eventually witness patients clamoring to use generative AI with their human therapist and ergo therapists will nearly have to move toward using generative AI in their mental health practice or lose existing or prospective clients.
- (2) Generative AI which alerts that mental health human advisement is needed. AI makers might voluntarily devise and include an internal trigger of the AI that would indicate a human therapist is needed or recommended. If laws are passed on this, it might not simply be a voluntary action but instead become a legally required construct.
What To Do About A Cry For Help
Here’s a quick one for you.
In a previous column, I walked through a situation of using generative AI for mental health in which the fictitious example consisted of a prompt suggesting that the person using the AI was hinting they might be considering self-harm. See the link here.
What should the generative AI do if a person using the AI appears to signal they are going to do something harmful, whether to themselves or others?
You might immediately say that the AI ought to alert somebody about this. Maybe the AI informs the AI maker. Or maybe the AI connects with governmental agencies and transmits the concern to authorities. Perhaps the AI is to contact a friend or named contact that the person using the generative AI was asked to enter as their emergency contact. Etc.
This seems sensible. The problem is going to be that the active alert could be a false positive. Imagine that these alerts are routinely going off for all manner of users of generative AI, with other people getting alerted left and right. The usage by the user might be innocent and have nothing to do with a valid mental health concern. We could end up with AI that is continually crying wolf.
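To see why false positives are such a design headache, consider this hedged sketch of an escalation policy that scores the risk signal, asks the user to confirm before contacting anyone in ambiguous cases, and only alerts an emergency contact on its own above a high threshold. Every phrase, threshold, and the `notify` target here is an illustrative assumption, not a recommendation of how such alerts should actually be handled.

```python
# Sketch of a crisis-escalation policy with a confirmation step to limit false alarms.
# The phrase list, risk scoring, thresholds, and notify() target are illustrative assumptions.

HIGH_RISK_PHRASES = ["hurt myself", "end my life", "no reason to go on"]

def risk_score(message: str) -> float:
    """Toy scorer: fraction of high-risk phrases present (a real system would use a classifier)."""
    text = message.lower()
    hits = sum(phrase in text for phrase in HIGH_RISK_PHRASES)
    return hits / len(HIGH_RISK_PHRASES)

def notify(contact: str, note: str) -> None:
    """Stand-in for whatever channel would actually reach an emergency contact."""
    print(f"ALERT sent to {contact}: {note}")

def handle_message(message: str, emergency_contact: str, user_confirms: bool) -> str:
    score = risk_score(message)
    if score >= 0.66:
        # Very strong signal: alert regardless, erring on the side of safety.
        notify(emergency_contact, "High-risk language detected in session.")
        return "Crisis resources displayed; emergency contact notified."
    if score > 0.0:
        # Ambiguous signal: ask the user first, to avoid crying wolf.
        if user_confirms:
            notify(emergency_contact, "User asked for help to be contacted.")
            return "Crisis resources displayed; contact notified at user's request."
        return "Crisis resources displayed; no one contacted."
    return "Normal conversation continues."

print(handle_message("Some days I feel there is no reason to go on.", "trusted-friend", user_confirms=False))
```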
Something to think about.
Generative AI And The Vaunted Theory Of Mind
A frequently expressed qualm about using generative AI for mental health advisement is that today’s AI purportedly doesn’t possess a semblance of Theory of Mind (see my in-depth analysis of Theory of Mind at the link here).
Let’s dive into that contention.
First, be aware that the Theory of Mind is essentially a posited capability of being able to put yourself into the shoes of another person. When you interact with someone, the odds are that you are thinking about how the other person thinks. You anticipate what they might say or do. This anticipation or predictive facet is predicated on guessing what is going on in their head.
Second, it is presumed that a human therapist ostensibly employs the Theory of Mind, whether they realize they are doing so or not, by trying to guess what is happening in the mind of the patient or client that they are aiding. It is akin to figuring out a puzzle. A person tells you something and you are aiming to deduce what their mind must be thinking to have led to whatever they have said or done.
This leads us to the Theory of Mind consideration concerning generative AI.
Some AI researchers insist that generative AI cannot formulate a Theory of Mind. The usual claim is that only sentient beings can attain such a lofty capacity. And, since we might reasonably all agree that modern AI is not sentient (despite those banner headlines suggesting otherwise, see my remarks at the link here), this puts AI and generative AI out of the running when it comes to Theory of Mind. By definition, if you argue that only sentient beings can do this, non-sentient AI is presumed to not have this capacity.
Not everyone agrees.
There are AI researchers who argue, as I do, that AI and generative AI can indeed formulate a Theory of Mind capability. Via the use of computational pattern-matching of AI, studies suggest that we can get generative AI to do what seems to be akin to the Theory of Mind, see my discussion at the link here. The AI is able to emit indications that exhibit the capacity to anticipate or predict what might be in a person’s mind. I am not saying that there is mind-reading going on. I am merely saying that predicting or estimating what a person might be thinking does seem to be a computationally plausible activity to undertake.
I’ll leave it to you to mull that over.
Multi-Modal Generative AI And Mental Health Advisement
One of the biggest bombshells about to break wide open for generative AI and mental health consists of generative AI that is multi-modal.
Readers might recall that last year I had predicted that by the end of this year, we would see the emergence of multi-modal generative AI, see my discussion at the link here. This in fact is happening. By next year, you can expect that pretty much all robust generative AI will be truly multi-modal. This will have dramatic and substantive impacts on the further acceptance and exponential spread of using generative AI for mental health advisement.
We can unpack this.
Most people probably have used generative AI which is only of one mode. For example, ChatGPT when it first was released consisted of a text-to-text mode of operation. You entered text as a prompt and the generative AI responded with text as an output. Another popular variation of generative AI consisted of text-to-artwork. You entered text as a prompt and the response by the generative AI consisted of generated artwork, such as asking for a frog in a top hat dancing on a lily pad and voila, such an art piece would be generated for you.
You can classify most generative AI apps as doing one of these two modes:
- Text-to-text, or
- Text-to-artwork
A multi-modal generative AI is devised to do two or more modes at the same time. We might have a generative AI that can do text-to-text and also perform text-to-artwork. You enter a prompt and ask to get text as a response, or you can ask for artwork as your response. In some cases, you can have the output blended such that the generative AI produces both text and artwork in combination with each other.
Some scoff at this as being multi-modal. They say that until we have more modes available, the multi-modal label is a bit stretched. Well, the good news is that the latest generative AI is going much further into the use of multiple modes.
We’ve got these variations of you entering text and getting these kinds of outputs from the generative AI:
- Text-to-speech
- Text-to-image
- Text-to-video
On top of this, generative AI is no longer limited to accepting text as input and can accept other modes of input as well:
- Speech-to-text
- Image-to-text
- Video-to-text
Exciting!
Some describe this as text-to-X, X-to-text, X-to-X, or simply as multi-X modal generative AI.
What does this portend for the use of generative AI for mental health advisement?
Tighten your seatbelts. Imagine this. A person nowadays using generative AI for mental health has to be able to express themselves by typing words on a computer screen. This is laborious. It forces the person to type, and to do so in a manner that bares their soul. Not everyone necessarily has that skill. Nor do many have the patience to write lengthy diatribes about what their issues or problems are.
With the speech-to-text mode, generative AI can simply gather the person’s verbal commentary and respond to that form of input. No laborious typing by the patient, client, or user is needed. They say what is on their mind. Easy-peasy.
Another important twist is that the generative AI can “look” at the person via the use of the image-to-text and video-to-text modes of operation. A human therapist would usually be face-to-face (in-person or remotely) and examine the facial expressions and mannerisms of their patient. The same can be done via generative AI that has additional modes of operation.
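Here is a hedged sketch of what a single multi-modal exchange might look like under the hood: speech in, an optional image of the user's face, text reasoning in the middle, speech back out. The `transcribe`, `describe_image`, `generate_reply`, and `synthesize_speech` functions are hypothetical placeholders for whichever speech, vision, and language components a given vendor wires together, not a specific product's API.

```python
# Sketch of one multi-modal turn: speech-to-text, image-to-text, text-to-text, text-to-speech.
# All four component functions are hypothetical placeholders, not a specific vendor's API.

def transcribe(audio_bytes: bytes) -> str:
    """Speech-to-text placeholder."""
    return "I've been feeling stressed about work lately."

def describe_image(image_bytes: bytes) -> str:
    """Image-to-text placeholder, e.g. a rough read of facial expression."""
    return "The person appears tired, with a furrowed brow."

def generate_reply(transcript: str, visual_context: str) -> str:
    """Text-to-text placeholder for the core generative model."""
    return (
        "It sounds like work has been weighing on you, and you look worn out. "
        "Would you like to talk through what is most stressful right now?"
    )

def synthesize_speech(text: str) -> bytes:
    """Text-to-speech placeholder."""
    return text.encode("utf-8")

def multimodal_turn(audio_bytes: bytes, image_bytes: bytes) -> bytes:
    transcript = transcribe(audio_bytes)
    visual_context = describe_image(image_bytes)
    reply_text = generate_reply(transcript, visual_context)
    return synthesize_speech(reply_text)

audio_out = multimodal_turn(b"<mic capture>", b"<camera frame>")
print(audio_out.decode("utf-8"))
```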
Using generative AI for mental health advisement will become extraordinarily easy. Sit in front of a computer screen and verbally interact, while on camera so that the generative AI can analyze your facial expressions. Or, merely point your smartphone at your face.
This is a far cry from today’s awkward, mechanistic approach of endlessly typing your life story into the generative AI. People will flock to this multi-modal generative AI in droves.
E-Wearables Going To Make This Nonstop
We can up the ante on the above-mentioned multi-modal generative AI. I’ll give you one word (or phrase) that is going to rock the world when it comes to generative AI for mental health advisement, namely e-wearables.
You are undoubtedly familiar with e-wearables, though you might not recognize the phraseology. An e-wearable of today would be, for example, a smartwatch. You put the smartwatch on your wrist and suddenly have available much the same features as a smartphone. The difference is that you are wearing it on your person.
The advent of smart glasses was another hoped-for e-wearable. You put on glasses that have computing capabilities along the lines of a smartphone. The glasses can use video streaming input and video record the things you see around you. Some smart glasses also have an audio recording feature too.
Society has taken a dim view of smart glasses (there’s a pun for you). The acceptance of people wearing smart glasses is still being culturally determined. Is it right for someone to be recording everyone else around them? Is it intrusive? Should the smart glasses indicate when they are recording, such as by displaying a red or green light, perhaps buzzing, or otherwise alerting others that they are on candid camera? And so on.
If you were queasy about smart glasses, get yourself ready for what is coming in the next several months. E-wearables that amount to digitally infused jewelry, or something along those lines, are going to be hitting the market.
The upcoming e-wearables will vary in terms of the form factor, such as:
- Pendants or pins
- Necklaces
- Earrings
- Rings
- Etc.
How do the e-wearables tie into the use of generative AI for mental health?
Many of those e-wearables are going to have generative AI hooked into them, typically on the back end, and be able to perform generative AI-related efforts by using the e-wearable as a sensory device. Let me paint a picture for you. A person is interested in using generative AI for mental health advisement. The normal path would be to log into a laptop or desktop computer to do so. Another means might be the use of their smartphone. E-wearables will make life even easier for those pursuing that path.
They opt to pay for and subscribe to a service associated with their e-wearable pendant or pin. The pendant is attached to their shirt or upper garment and is always on (if that’s what the person specifies). Everything the person says is being audio recorded and computationally analyzed by the generative AI. The generative AI can respond by using a built-in speaker for output or might send the output to the person’s smartphone or other device.
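A rough sketch of the always-on flow just described: the pendant buffers short audio chunks, a back end periodically analyzes them, and any response is routed either to the device speaker or to the wearer's phone. The chunking interval, the `analyze` step, and the routing rules are all assumptions made purely for illustration; real products will differ.

```python
# Sketch of an always-on e-wearable loop: buffer audio, analyze periodically, route output.
# Chunk sizes, the analyze() backend, and the routing rules are illustrative assumptions.

from collections import deque

class WearableSession:
    def __init__(self, route_to: str = "phone"):
        self.buffer = deque(maxlen=12)   # keep roughly the last minute of 5-second chunks
        self.route_to = route_to         # "speaker" on the pendant, or "phone" push notification

    def ingest_chunk(self, audio_chunk: bytes) -> None:
        """Called every few seconds as the pendant records."""
        self.buffer.append(audio_chunk)

    def analyze(self) -> str:
        """Hypothetical back-end call that turns buffered audio into an observation."""
        return f"Analyzed {len(self.buffer)} recent chunks of conversation."

    def respond(self) -> None:
        """Route the analysis result to whichever output the wearer chose."""
        observation = self.analyze()
        if self.route_to == "speaker":
            print(f"[pendant speaker] {observation}")
        else:
            print(f"[phone notification] {observation}")

session = WearableSession(route_to="phone")
for _ in range(3):
    session.ingest_chunk(b"<5 seconds of audio>")
session.respond()
```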
This is the “surround sound” of mental health advisement as enacted via e-wearables and multi-modal generative AI. A human therapist would not be able to continually run around with their patient and be with them 24×7, nor would a human therapist likely have the stamina or inclination to always be assessing the client on a nonstop 24×7 basis.
Generative AI will be able to work tirelessly and nonstop.
Is this something we want to have happen?
Right now, it can technologically occur and it is going to be happening. We aren’t seeing it yet; you’ll need to wait a few months for this to gain traction. An added twist on the twist is that other people whom the wearer of the e-wearable comes into contact with will also potentially be under the purview of this always-on generative AI mental health advisement. You could argue that this is good since more collected info about a patient, including whom they interact with and how, might be useful for analysis purposes.
The other people might not see this as something that they sought to participate in. Privacy intrusion goes through the roof.
Ponder all of the ramifications.
When The AI Is On A Server Versus On The Edge
Got a fast one for you.
Currently, most generative AI requires gobs of computer processing capacity. These large language models (LLMs) are run on servers in faraway warehouses packed with computer systems and are usually accessed via a cloud service. You therefore need a solid network connection to use generative AI since the stuff you type goes through a network to reach the computers running the generative AI.
Efforts are underway to devise small AI models that can roughly do much of the same as the larger ones. The advantage of this is that they can run on smaller computers. This is said to be done at the edge. The edge is the device that might be your smartphone, smartwatch, or laptop. You will be able to download the generative AI into the device and not need to be tethered to a network connection.
Another potential advantage and a big issue these days is that the generative AI could then potentially confine any of your entered prompts to the edge device, rather than having it go up to the larger system and be pushed across a network. This could boost privacy aspects when using generative AI but don’t blindly assume that the privacy concerns are entirely solved via this path.
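A hedged sketch of that deployment choice follows: the same application code can target either a cloud endpoint or a small on-device model, with the privacy-relevant difference being whether the prompt ever leaves the device. The endpoint URL and the local model path are hypothetical placeholders; the point is only the routing decision, not any particular provider.

```python
# Sketch of routing prompts to either a remote server or an on-device ("edge") model.
# The endpoint URL and local model path are hypothetical placeholders.

class CloudBackend:
    """Prompt leaves the device and crosses the network to a large hosted model."""
    def __init__(self, endpoint: str = "https://example-llm-provider.invalid/v1/chat"):
        self.endpoint = endpoint

    def complete(self, prompt: str) -> str:
        # A real implementation would send the prompt to self.endpoint here.
        return f"[cloud model reply via {self.endpoint}]"

class LocalBackend:
    """Prompt stays on the device; a smaller model runs on local hardware."""
    def __init__(self, model_path: str = "/models/small-llm.bin"):
        self.model_path = model_path

    def complete(self, prompt: str) -> str:
        # A real implementation would load and run the local model weights here.
        return f"[on-device model reply using {self.model_path}]"

def choose_backend(prefer_privacy: bool):
    """Pick the edge model when keeping data on-device matters most."""
    return LocalBackend() if prefer_privacy else CloudBackend()

backend = choose_backend(prefer_privacy=True)
print(backend.complete("I'd like to talk about how my week went."))
```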
Issues Of Persistent Context And Transitory Memory
When using generative AI, most of the time the AI app will be working on a transitory basis. You enter your prompts and once you finish or close a conversation that you are having, the bulk of what happened is no longer readily available for the generative AI (the data is often stored separately but not as part of an ongoing active dialogue).
The implications are that using generative AI for mental health advisement is somewhat weak or limited since the AI is not particularly building up a profile or logged history of you. The generative AI is not keeping an ongoing context. Each time you are forced to start anew, as though you’ve never conversed with the generative AI before.
There are efforts to deal with this transitory limitation. A more persistent context can be established, see my coverage at the link here.
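As a simple illustration of what persistent context might mean in practice, here is a sketch of a session wrapper that saves a running summary of prior conversations to disk and prepends it to each new session. The file path and the crude summarization heuristic are assumptions for illustration only; real offerings handle memory in their own ways.

```python
# Sketch of persistent context: store a running summary between sessions and reload it.
# The storage location and the crude summarization step are illustrative assumptions.

import json
from pathlib import Path

PROFILE_PATH = Path("user_profile.json")  # hypothetical local store

def load_profile() -> dict:
    if PROFILE_PATH.exists():
        return json.loads(PROFILE_PATH.read_text())
    return {"summary": ""}

def save_profile(profile: dict) -> None:
    PROFILE_PATH.write_text(json.dumps(profile))

def summarize(old_summary: str, new_messages: list[str]) -> str:
    """Toy summarizer: keep the last few items verbatim (a real system would compress with the LLM)."""
    combined = ([old_summary] if old_summary else []) + new_messages
    return " | ".join(combined[-5:])

def run_session(new_messages: list[str]) -> str:
    profile = load_profile()
    context = profile["summary"]
    # The combined prompt below is what would be handed to the generative AI.
    prompt = f"Known background: {context}\nToday's conversation: {new_messages}"
    profile["summary"] = summarize(context, new_messages)
    save_profile(profile)
    return prompt

print(run_session(["Work stress came up again.", "Sleep has improved a little."]))
```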
Passive Sensing To Determine Mental Health
Get ready for an emerging feature that is either quite useful or disturbingly eerie.
AI researchers are exploring whether passive sensing of a person who is using generative AI might provide useful indications about the person. For example, the speed at which you type words, the time of day you use the AI, and other such passive aspects can be recorded and used by the AI.
You should expect that the passive sensing realm will dovetail into the use of generative AI for mental health advisement. The logic is that the more the AI can discern about a person, presumably the “better” the mental health advisement that can be derived.
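For a concrete sense of what passive sensing entails, here is a sketch that derives a few simple features from typing behavior, such as the average gap between keystrokes and the hour of use. The choice of features, and whether they correlate with anything clinically meaningful, are assumptions made purely for illustration.

```python
# Sketch of passive sensing: derive simple behavioral features from keystroke timestamps.
# Which features matter, and whether they mean anything clinically, are open questions.

from datetime import datetime

def keystroke_features(timestamps: list[float]) -> dict:
    """Compute average inter-keystroke delay and total typing duration, in seconds."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "avg_gap_seconds": sum(gaps) / len(gaps) if gaps else 0.0,
        "duration_seconds": timestamps[-1] - timestamps[0] if timestamps else 0.0,
        "hour_of_day": datetime.now().hour,   # time of use, as mentioned above
    }

# Example: timestamps (in seconds) captured while the user typed a prompt.
sample = [0.0, 0.4, 0.9, 1.5, 2.6, 2.9]
print(keystroke_features(sample))
```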
When Generative AI Mental Health Advice Strings You Along
Suppose that someone decides to use generative AI for mental health advisement. They like doing so. Things seem to be going well.
At what juncture will the person stop using the generative AI for this purpose?
You can certainly ask the same question about seeking mental health advice from a human therapist. Where is the stopping point? One viewpoint is that the therapist should indicate when the therapy is no longer required. Another viewpoint is that making use of therapy is a lifelong endeavor. And so on.
In a twist, it is certainly possible that the generative AI will just keep on going and do nothing to curtail or conclude the mental health advisement. Until the person decides they no longer want to use the generative AI for that purpose, the AI is there and able to be used. It is up to the person to decide. Some would argue that maybe the generative AI should not be so appealing. Perhaps there ought to be built-in guardrails that tell the person they no longer need to use the AI in that fashion.
Yet another dilemma to be resolved.
Debating About Emotion And Empathy Of Generative AI
A popular quip is that generative AI is unsuited for serving as a mental health advisor since it lacks emotion and empathy. That is the mic drop used in many such debates.
This takes us back to the argument that only humans can aid other humans in this realm. It is claimed that generative AI cannot formulate a human bond with the person seeking assistance. And, worse still, if we go the AI route, there will be a vast and disturbing dehumanization of mental health advisement.
Consider these similar points in this research article:
- “One of the obvious costs associated with replacing a significant number of human doctors with AI is the dehumanization of healthcare. The human dimension of the therapist-patient relationship would surely be diminished. With it, features of human interactions that are typically considered a core aspect of healthcare provision, such as empathy and trust, risk being lost as well” (source article entitled “Is AI the Future of Mental Healthcare?” by Francesca Minerva and Alberto Giubilini, Topoi, May 2023).
But there is the other side of the coin, as further mentioned by the same research:
- “Sometimes, the risk of dehumanizing healthcare by having machines instead of persons dealing with patients might be worth taking, for instance when the expected outcomes for the patient are significantly better. However, some areas of healthcare seem to require a human component that cannot be delegated to artificial intelligence.”
The part that perhaps cannot be delegated to the AI is the emotion and empathy element:
- “In particular, it seems unlikely that AI will ever be able to empathize with a patient, relate to their emotional state or provide the patient with the kind of connection that a human doctor can provide. Quite obviously, empathy is an eminently human dimension that it would be difficult, or perhaps conceptually impossible, to encode in an algorithm.”
My claim is that we can get AI to exhibit a semblance of emotion and empathy, see my in-depth discussion at the link here. Whether this is on par with whatever occurs inside human minds is a different question. I am referring to the display of or exhibition of those facets.
The cited AI research paper identifies that there is a possibility for this:
- “It is possible that further down the line, we will be surprised by what the use of AI in psychiatry can achieve, just as 20 or 30 years ago we would have been surprised if someone had claimed that smartphones were going to become such a big part of our lives, or that AI was going to become so prominent in academic discussion.”
Be on the watch for the use of AI that simulates or exhibits seemingly human emotion and empathy.
The Taboo Topic Of Imperfect Human Therapists
I’ve earlier noted that there are concerns about generative AI going awry when providing mental health advice. The usual unstated basis for comparison is that a human therapist won’t go askew. That’s the silent assumption. We assume that a human therapist will act perfectly (a cursory look at the legal literature on malpractice claims in mental health advisement suggests that this isn’t always the case).
We readily concede that generative AI will act imperfectly.
Some argue that we need to also acknowledge that human therapists can act imperfectly:
- “One way to approach the question is to consider how poorly more traditional ways of approaching mental health have done, compared to other areas of health care. The benefits of AI use in psychiatry need to be assessed against the performance of human therapists and pharmaceutical interventions. If the bar they set is relatively low, then meeting the challenge for AI might be easier than one might think.”
- “At a global level, poor mental health is estimated to cost $2.5 trillion per year comprising costs of treating poor health and productivity losses. On some estimates, the cost is expected to rise to $6 trillion by 2030.”
- “In sum, despite all the efforts made so far to achieve better outputs for patients, little progress has been made, and indeed, it seems that things have gotten worse” (source article for these excerpts is entitled “Is AI the Future of Mental Healthcare?” by Francesca Minerva and Alberto Giubilini, Topoi, May 2023).
Stew on that as another consideration in these hefty matters.
Marketing Hype When AI Is In The House
As generative AI is increasingly used for mental health advisement, we need to be on our toes about the hyped claims that vendors might make, which could possibly mislead or fool someone into using such tools.
Consider an interesting study entitled “Marketing and US Food and Drug Administration Clearance of Artificial Intelligence and Machine Learning Enabled Software in and as Medical Devices: A Systematic Review” by Phoebe Clark, Jayne Kim, and Yindalon Aphinyanaphongs (JAMA Network Open, June 2023) that said this:
- “The marketing of health care devices enabled for use with artificial intelligence (AI) or machine learning (ML) is regulated in the US by the US Food and Drug Administration (FDA), which is responsible for approving and regulating medical devices. Currently, there are no uniform guidelines set by the FDA to regulate AI- or ML-enabled medical devices, and discrepancies between FDA-approved indications for use and device marketing require articulation.”
- “This systematic review found that there was significant discrepancy in the marketing of AI- or ML-enabled medical devices compared with their FDA 510(k) summaries. Further qualitative analysis and investigation into these devices and their certification methods may shed more light on the subject, but any level of discrepancy is important to note for consumer safety. The aim of this study was not to suggest developers were creating and marketing unsafe or untrustworthy devices but to show the need for study on the topic and more uniform guidelines around marketing of software heavy devices.”
Besides the FDA, there is also the FTC that enters into this realm as a result of potentially false or fraudulent claims about what AI can achieve, see my coverage about FTC enforcement in the AI arena at the link here.
Trying To Assess Mental Health Apps For Their Merits
You might find it interesting that there are various efforts to try and assess the merits of mental health apps all told (regardless of whether those apps use AI or not). Few of these efforts seem to have yet extensively incorporated the AI component as a notable element to be deeply assessed.
Let’s briefly take a look at some of the assessment approaches.
The American Psychiatric Association (APA) has put together a formulation known as their App Advisor to aid in being able to assess mental health apps. Here’s what the APA says on the website for the tool:
- “Many of the claims by mental health apps have never actually been studied or evaluated in feasibility or clinical trials. The FDA has taken a largely hands-off approach to regulating these apps, and there is currently little-to-no oversight of mental health apps. This can leave the user to distinguish a useful, safe, and effective app from an unhelpful, dangerous, and ineffective one.”
- “APA is helping psychiatrists and other mental health professionals navigate these issues by pointing out important aspects you should consider when making an app selection and determining whether an app works for you and your patients. The material provided here covers: (1) why it is critical to assess an app, (2) how to evaluate an app, and (3) an opportunity to seek additional guidance on apps and/or the evaluation process. It is not intended to provide a recommendation, endorsement, or criticism of any particular app, but rather serves as a tool for you to do your own evaluation of any app you might be considering.”
The current instance of the approach doesn’t seem to especially assess the AI element but does provide an overarching indication of other typical factors when assessing apps overall, such as privacy features, security features, usability, and the like.
Another assessment method or approach is given the name of FASTER as indicated in a draft technical report entitled “Evaluation of Mental Health Mobile Applications” (Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services):
- “Mental health mobile applications (apps) have the potential to expand the provision of mental health and wellness services to traditionally underserved populations. There is a lack of guidance on how to choose wisely from the thousands of mental health apps without clear evidence of safety, efficacy, and consumer protections.”
- “The Framework to Assist Stakeholders in Technology Evaluation for Recovery (FASTER) to Mental Health and Wellness was developed and comprises three sections: Section 1. Risks and Mitigation Strategies: assesses the integrity and risk profile of the app; Section 2. Function: is focused on descriptive aspects related to accessibility, costs, developer credibility, evidence and clinical foundation, privacy/security, usability, functions for remote monitoring of the user, access to crisis services, and artificial intelligence; and Section 3. Mental Health App Features: focuses on specific mental health app features such as journaling, mood tracking, etc.”
I anticipate we will see more of these approaches being brought forth, along with added assessment of the AI elements involved, and these could become part of AI soft law or AI hard law considerations in these matters.
Conundrum About Medical Devices And Where Software Fits
Generative AI is considered to be a piece of software. It usually runs on general-purpose hardware in the sense that even if specialized computational servers are used, they are still categorized as being overall computing devices. Here’s why this is important and relevant here. If generative AI is being used for mental health advisement, we might ask whether the AI software comes under the provisions of being a medical device.
By convention, medical devices have traditionally been considered as highly tailored hardware that would do things such as measure blood pressure or contain other sensors. The software needed to run those capabilities was given a secondary status. The focus of attention was primarily on the hardware. Gradually, the software has gotten more attention.
For those of you who relish legal word splitting, an argument can be made that generic generative AI running on servers is unlike the dedicated software that runs on a tailored medical device, and therefore the generative AI falls outside the scope of legal rules associated with medical devices.
Here is how medical devices are defined as part of the FDA scope:
- “FDA’s regulatory oversight of medical device software applies to software that meets the definition of ‘device’ in section 201(h)(1) of the Federal Food, Drug, and Cosmetic Act (FD&C Act) to include “an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including any component, part, or accessory, which is – (A) recognized in the official National Formulary, or the United States Pharmacopoeia, or any supplement to them, (B) intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or (C) intended to affect the structure or any function of the body of man or other animals, and which does not achieve its primary intended purposes through chemical action within or on the body of man or other animals and which is not dependent upon being metabolized for the achievement of its primary intended purposes. The term ‘device’ does not include software functions excluded pursuant to section 520(o) of the FD&C Act” (source is the report entitled “The Software Precertification (Pre-Cert) Pilot Program: Tailored Total Product Lifecycle Approaches And Key Findings”, U.S. Food & Drug Administration (FDA), September 2022).
A question to be considered is whether we can essentially have software as a medical device (SaMD):
- “Software as a Medical Device (SaMD) is increasingly being adopted throughout the healthcare sector. These devices are developed and validated differently than traditional hardware-based medical devices in that they are developed and designed iteratively and can be designed to be updated after deployment to quickly make enhancements and efficiently address issues, including malfunctions and adverse events.”
- “In 2017, FDA recognized that the current device regulatory framework, enacted by Congress more than 40 years prior and incrementally updated since then, had not been optimized for regulating these devices.”
- “The digital health sector continues to grow as interoperable computing platforms, sensors, and software improve. In particular, software is increasingly being used in the treatment and diagnosis of diseases and conditions, including aiding clinical decision-making, and managing patient care. From fitness trackers to mobile applications, to drug delivery devices that track medication adherence, software-based tools can provide a wealth of valuable health information and insights.”
The bottom line is whether agencies such as the FDA will be able to step into the widening use of generative AI for mental health advisement or whether such technology falls outside of the existing legally mandated scope.
Mental Health Apps In The Thousands With No End In Sight
The sky is the limit.
That’s what I tell people when they ask me how many AI-infused mental health apps exist or might be coming into the marketplace. Right now, by and large, most mental health apps have little or no AI incorporated into them. Furthermore, keep in mind that the definition of AI is highly variable, and therefore it is easy to proclaim that AI is being used in such an app, despite the AI having little or no substantive capability.
In a posting entitled “Technology and the Future of Mental Health Treatment” (June 2023), the National Institute of Mental Health (NIMH) made these remarks about mental health apps overall:
- “Thousands of mental health apps are available in iTunes and Android app stores, and the number is growing every year. However, this new technology frontier includes a lot of uncertainty. There is very little industry regulation and very little information on app effectiveness, which can lead people to wonder which apps they should trust.”
The same posting noted these facets:
- “Technology has opened a new frontier in mental health care and data collection. Mobile devices like cell phones, smartphones, and tablets are giving the public, healthcare providers, and researchers new ways to access help, monitor progress, and increase understanding of mental well-being. Mobile mental health support can be very simple but effective.”
- “New technology can also be packaged into an extremely sophisticated app for smartphones or tablets. Such apps might use the device’s built-in sensors to collect information on a user’s typical behavior patterns. Then, if the app detects a change in behavior, it can signal that help is needed before a crisis occurs. Some apps are stand-alone programs designed to improve memory or thinking skills. Other apps help people connect to a peer counselor or a health care professional.”
I suggest that as generative AI gets added to mental health apps, the interest in and the use thereof will further widen and skyrocket.
Conclusion
I hope that you found my rundown on some of the key trends and insights associated with using generative AI for mental health advisement engaging and informative.
A final thought on this topic for now.
Horace Walpole, the famous English writer and historian, said this about Pandora’s box: “When the Prince of Piedmont, later Charles Emmanuel IV, King of Sardinia, was seven years old, his preceptor instructing him in mythology told him all the vices were enclosed in Pandora’s box. “What! All!” said the Prince. “Yes, all.” “No,” said the Prince; “curiosity must have been without.”
Are we going too far by using generative AI for mental health advisement?
Or, as some might compellingly argue, have we not gone far enough and need to do more?
Stay tuned.
Source: https://www.forbes.com/sites/lanceeliot/2023/11/02/generative-ai-for-mental-health-is-upping-the-ante-by-going-multi-modal-embracing-e-wearables-and-a-whole-lot-more/