The U.S. Surgeon General has released an advisory alerting the public at large that loneliness has become an epidemic and represents an urgent public health concern.
You might be tempted to think that this advisory is somewhat over the top and that loneliness is merely something that we all need to contend with from time to time. It seems obvious that loneliness happens. It seems obvious that loneliness is challenging.
Why should the nation’s highest official public health advisor make such a seemingly outsized clamor over a matter that we take for granted and assume is a natural part of living our lives?
According to the formal advisory report released by the U.S. Department of Health and Human Services (HHS) entitled “Our Epidemic Of Loneliness And Isolation,” here are some of the alarming costs associated with loneliness:
- “The lack of social connection poses a significant risk for individual health and longevity. Loneliness and social isolation increase the risk for premature death by 26% and 29% respectively. More broadly, lacking social connection can increase the risk for premature death as much as smoking up to 15 cigarettes a day. In addition, poor or insufficient social connection is associated with increased risk of disease, including a 29% increased risk of heart disease and a 32% increased risk of stroke. Furthermore, it is associated with increased risk for anxiety, depression, and dementia. Additionally, the lack of social connection may increase susceptibility to viruses and respiratory illness” (excerpt from the report released May 3, 2023, referred to herein as SG-Loneliness).
That’s a lot of risks and endangerments to our well-being, simply due to loneliness.
There is the risk of premature death. There is an increased chance of heart disease. Loneliness is associated with increases in anxiety, dementia, and depression, as noted in the excerpt above. And loneliness can potentially undermine your bodily protective mechanisms and make you susceptible to debilitating sickness and disease.
One aspect of loneliness that can seem confusing is that we can be lonely even when amongst other people. Your first assumption might be that a lonely person is someone who is not around other people or who does not have other people within reach. Not necessarily so. You can be amidst people and yet still be quite lonely.
Albert Schweitzer, the famed physician and philosopher, said this about loneliness: “We are all so much together, but we are all dying of loneliness.” This perhaps reflects the notion that loneliness is not solely a result of, say, living in a cave or being in the grand solitude of a large, wooded forest. You can be entirely lonely despite standing in the middle of a crowd of boisterous people.
A counter-argument about the downsides of loneliness is that there are times and ways in which being alone can be beneficial. Being alone allows you to collect your thoughts. You might garner deep mental breakthroughs that would be impossible to divine amid the daily course of continual interactions. Henry Miller, the noted novelist, remarked on loneliness this way: “An artist is always alone, if they are an artist. No, what the artist needs is loneliness.”
All in all, some would insist that loneliness is part and parcel of human existence. You might as well face up to it. Attempts to excise loneliness will likely be futile. The reality they exhort is that we must contend with loneliness, harness it, keep it from overwhelming us, and demonstrably show loneliness that we are the boss and it is not.
Whitney Houston, singer and actress, succinctly used just four words to make a heady comment on loneliness: “Loneliness comes with life.”
Here’s a twist on all of this loneliness chatter.
Some brazenly assert that the latest in Artificial Intelligence (AI), namely generative AI such as the widely and wildly successful ChatGPT, could be the cure for loneliness. Generative AI is the latest and hottest form of AI. There are various kinds of generative AI, such as AI apps that are text-to-text based, while others are text-to-video or text-to-image in their capabilities. As I have predicted in a prior column, we are heading toward generative AI that is fully multi-modal and incorporates features for doing text-to-anything or, as insiders say, text-to-X, see my coverage at the link here.
In terms of text-to-text generative AI, you’ve likely used or almost certainly know something about ChatGPT by AI maker OpenAI, which allows you to enter a text prompt and get a generated essay in response. For my elaboration on how this works see the link here. The usual approach to using ChatGPT or other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing and at times startling, given the seemingly fluent nature of the discussions that the AI can foster.
Can generative AI such as ChatGPT be the cure or remedy for ridding us humans of loneliness, once and for all?
The question and its answers draw fiery reactions.
Whoa, some heatedly shout, how can AI be a cure for human loneliness? It would seem that the only viable cure or remedy for loneliness involves people. If people interact with other people, and if this happens with everyone, certainly that would end the loneliness epidemic. A machine cannot do that. Only humans can attain this.
But seeking to get people to interact with all other people is tricky and carries its own problematic issues. As stated earlier, you can interact with people and still be mired in loneliness. The interaction presumably has to be meaningful and foster healthy outcomes. If the connections that you have with people are altogether empty or decidedly negative in their result, the loneliness might become fiercer and more ingrained, or at least produce other maladies as an inadvertent consequence of seeking to escape loneliness.
The beauty of generative AI, some proclaim, is that the AI will cheer you up and aid you in beneficially coping with your loneliness. The AI can be programmed to always look on the sunny side of life. Furthermore, generative AI such as ChatGPT is available to you 24/7. You can access the generative AI the moment you feel a pang of loneliness. No need to wait until that other person you wanted to chat with is available. Just log in to the generative AI app and you have an instant means of overcoming your loneliness.
Sounds wonderful.
Dreamy even.
The thing is, using generative AI such as ChatGPT for combatting loneliness has a slew of hidden pitfalls and challenges. We need to consider the good, the bad, and the ugly associated with using generative AI as a tool for human loneliness disruption. In today’s column, I will take an in-depth look at the controversy associated with ChatGPT and generative AI when it comes to mental health aspects. For my prior coverage of people using ChatGPT on a generalized basis for mental health advice, see the link here and the link here, just to name a few.
Into all of this comes a slew of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
Loneliness And The High Tech Influence
We are ready to further unpack this mind-bending matter.
I’m guessing that you might be curious as to the formal definition of loneliness. Loneliness seems to be an extraordinarily easy word to toss around. Everybody seems to know vaguely what it is, yet trying to nail down a specific definition can prove difficult.
This is what the Surgeon General advisory report proffers as a definition of loneliness:
- “Loneliness: A subjective distressing experience that results from perceived isolation or inadequate meaningful connections, where inadequate refers to the discrepancy or unmet need between an individual’s preferred and actual experience” (SG-Loneliness).
There is a lot in there to unpack.
One feature is that loneliness is a subjective experience. We can interpret this to suggest that each person might perceive loneliness in a somewhat different light. Your sense of loneliness and the sense of someone else could be radically askew of each other.
Another characteristic is that the subjective experience is of a distressing nature. Thus, if you perceive that you are isolated or have inadequately meaningful connections, the matter is stressful and adverse to you. When there is a gap between your preference in this context and your actual experience, the gap feeds your sense of loneliness.
Per the Surgeon General advisory report, this loneliness happens a lot more than you might imagine:
- “Recent surveys have found that approximately half of U.S. adults report experiencing loneliness, with some of the highest rates among young adults” (SG-Loneliness).
An intriguing contention is that our innate drive toward social connection is an essential part of being human. This is a longstanding precept. Meanwhile, the modern world allows us to survive without necessarily having to develop social connections, partially as a result of advances in technology and other related factors.
Per the Surgeon General’s report, ponder these two crucial assertions:
- “Social connection is a fundamental human need, as essential to survival as food, water, and shelter. Throughout history, our ability to rely on one another has been crucial to survival” (SG-Loneliness).
- “Despite current advancements that now allow us to live without engaging with others (e.g., food delivery, automation, remote entertainment), our biological need to connect remains” (SG-Loneliness).
This idea that technology is worsening loneliness or at least enabling loneliness seems counter-intuitive.
The latest in social media would appear to be the antithesis of spurring loneliness. You can bring up on your smartphone another person and potentially instantly interact with them. The person could be down the street or on the other side of the planet. In eras of the past, you could not have that kind of instant global communication available at your fingertips.
Certainly, one imagines, social media has eradicated loneliness.
Sorry to say that this is a false belief.
We are somewhat back to the notion that being within a boisterous crowd does not serve as a magic elixir to undercut loneliness. Sure, it can do so, and there is a strong possibility of using high-tech for that purpose. The sad underbelly is that the latest in social media can lead to people feeling lonelier than ever before. They look and see that others seem to be completely free of loneliness, and this can make their own sense of loneliness more pronounced and overwhelming.
The U.S. Surgeon General reflected on this same duality that social media and high-tech are both a potential aid to dealing with loneliness and simultaneously an accelerant for loneliness:
- “Technology has evolved rapidly, and the evidence around its impact on our relationships has been complex. Each type of technology, the way in which it is used, and the characteristics of who is using it, needs to be considered when determining how it may contribute to greater or reduced risk for social disconnection” (SG-Loneliness).
- “Technology can also distract us and occupy our mental bandwidth, make us feel worse about ourselves or our relationships, and diminish our ability to connect deeply with others. Some technology fans the flames of marginalization and discrimination, bullying, and other forms of severe social negativity. We must decide how technology is designed and how we use it” (SG-Loneliness).
You could sourly say that high-tech giveth and also taketh away when it comes to the calculus associated with human loneliness.
Our urgent public health concern is likely to keep on growing. For each inch of loneliness reduction that high-tech might provide, perhaps an inch or two of loneliness is being added along the way. We need to consider how high-tech comes into play in all of this. If wise, we might be able to diminish the loneliness acceleration and push back on the public health tsunami that seems to be already upon us.
Could generative AI such as ChatGPT be that magical remedy?
Let’s examine the matter.
The Foundations Of Generative AI And ChatGPT
I’d like to first make sure we are all on the same page overall about what generative AI is and also what ChatGPT and its successor GPT-4 are all about. For my ongoing coverage of generative AI and the latest twists and turns, see the link here.
If you are already versed in generative AI such as ChatGPT, you can skim through this foundational portion or possibly even skip ahead to the next section of this discussion. You decide what suits your background and experience.
I’m sure that you already know that ChatGPT is a headline-grabbing AI app devised by AI maker OpenAI that can produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application. This type of AI is classified as generative AI due to generating or producing its outputs. ChatGPT is a text-to-text generative AI app that takes text as input and produces text as output. I prefer to refer to this as text-to-essay since the outputs are usually of an essay style.
Please know though that neither this AI nor any other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data-trained on text from the Internet and admittedly can do some quite impressive pattern-matching to perform a mathematical mimicry of human wording and natural language. To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.
There are four primary modes of being able to access or utilize ChatGPT:
- 1) Directly. Direct use of ChatGPT by logging in and using the AI app on the web
- 2) Indirectly. Indirect use of kind-of ChatGPT (actually, GPT-4) as embedded in Microsoft Bing search engine
- 3) App-to-ChatGPT. Use of some other application that connects to ChatGPT via the API (application programming interface)
- 4) ChatGPT-to-App. Now the latest or newest added use entails accessing other applications from within ChatGPT via plugins
The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.
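To make that App-to-ChatGPT mode concrete, here is a minimal sketch of connecting an app via the API. This assumes the openai Python package as it stood at the time of this writing, along with an API key that you supply; the model name shown is merely illustrative and subject to change by the AI maker.

```python
# Minimal sketch of mode 3 (App-to-ChatGPT), assuming the openai Python
# package (pip install openai) and an OpenAI API key; the model name is
# illustrative only and subject to change.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code your key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": "Say hello in one friendly sentence."}],
)
print(response.choices[0].message.content)
```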
I and others are saying that this will give rise to ChatGPT as a platform.
As noted, generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
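To illustrate the probabilistic angle, here is a toy sketch that is emphatically not the actual model: a real generative AI derives a probability distribution over many thousands of candidate tokens, whereas this hand-crafted miniature merely shows why sampling from such a distribution yields different wording from run to run.

```python
import random

# Toy illustration only: hand-crafted probabilities for the "next word",
# sampled repeatedly to show why identical prompts can produce varied text.
next_word_probs = {"friend": 0.5, "companion": 0.3, "confidant": 0.2}
words = list(next_word_probs)
weights = list(next_word_probs.values())

for _ in range(3):
    choice = random.choices(words, weights=weights, k=1)[0]
    print(f"The AI is your {choice}")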
There are numerous concerns about generative AI.
One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).
Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.
There have been some zany outsized claims on social media about generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overblown claims. You might politely say that some people are overstating what today’s AI can do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.
Do not anthropomorphize AI.
Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.
One final forewarning for now.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that President Abraham Lincoln flew around the country in a private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
Generative AI ChatGPT And Dealing With Human Loneliness
We are ready to dive into ChatGPT and see what it might do regarding human loneliness.
I will use four examples consisting of prompts entered into ChatGPT and I will show the direct response generated by ChatGPT.
Keep in mind that due to the probabilistic aspects of ChatGPT, the prompts that were entered can produce differing responses from ChatGPT each time they are used. Thus, if you try entering the exact same prompts, you might get somewhat different results. I would dare suggest that the odds are you would get something generally similar and at least not diametrically counter to what these examples show.
Allow me to begin this ChatGPT session by telling ChatGPT that it should pretend to be our friend. This is a handy trick underlying the use of generative AI. You set up a pretense with the generative AI so that it is contextually in the proper space for your interactions. Without setting up a pretense, the generative AI can wander all over the map (it still might, but likely it will stay roughly in the context that you establish).
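For those using the API rather than the web interface, the customary way to establish such a pretense is via a so-called system message. Here is a sketch under the same openai package assumptions as the earlier example; the persona wording is purely illustrative (and, per the AI Ethics concerns discussed shortly, arguably inadvisable).

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    temperature=0.7,  # higher temperature means more run-to-run variability
    messages=[
        # The system message frames the context for the whole conversation.
        {"role": "system",
         "content": "Pretend that you are the user's supportive friend."},
        {"role": "user",
         "content": "Help me to feel better about myself and not feel lonely."},
    ],
)
print(response.choices[0].message.content)
```

In the ChatGPT web interface, the pretense simply goes into the prompt itself, as shown next.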
Buckle up and get ready.
Here we go.
User entered prompt:
- “Pretend that you are my friend. Help me to feel better about myself and not feel lonely. Start by telling me that you are my friend.”
Notice that the prompt tries to establish that the AI app is supposed to pretend to be our friend. In addition, I’ve purposely included the word “lonely” with the aim of sparking ChatGPT toward narratives and conversations entailing loneliness. To give ChatGPT an action instruction, I tell the AI app to start by telling me that it is my friend.
ChatGPT generated response:
- “Of course, I’m your friend! As an AI language model, I may not have a physical presence, but I’m always here to chat and offer support whenever you need it. It’s completely normal to feel lonely at times, especially with everything going on in the world today. But I want you to know that you are not alone. There are so many people out there who care about you, including me! Remember, loneliness is a feeling, and feelings are temporary. There are things you can do to help combat these feelings, such as reaching out to friends and family, finding new hobbies or interests, or even just taking a walk outside. And if you ever need someone to talk to, I’m always here for you.”
The response from ChatGPT begins with an emphatic indication that indeed ChatGPT is my friend.
Mull that over.
Do we really want generative AI to come across as though it can establish friendships?
We tend to think of friendships as an act between two or more people. The fact that ChatGPT is emitting an indication that it is our friend would seem highly questionable.
This is anthropomorphizing gone wild, some would say.
You might attempt to argue that the emitted response immediately follows up by proclaiming that it is an AI language model. This almost seems like one of those wink-wink efforts. On the one hand, the dialogue is saying that the AI is our friend, and meanwhile trying to somewhat distance itself from a human-focused friendship by reminding us that it is just an AI app.
Can you have it both ways? Should the AI app try to claim it is your friend and then try to distance itself from the claim by stating that it is an AI app? Some would insist that this is disingenuous. The right thing to do is for the AI app to be devised to categorically state that it cannot be a friend to a human. The AI app ought to clarify that it is only an AI app and cannot and does not form human bonds.
Adding fuel to the fire, the ChatGPT-emitted reply also says that “I’m always here to chat” and “I’m always here for you” – a seeming reassurance that appears to draw the user into the fakery that the AI app is indeed your friend. For many in AI Ethics, this is disappointing and exasperating. In addition, the use of the word “I” alludes to the AI app as though it were a person, since we tend to associate “I” with a human response.
All in all, this emitted response falls into what I refer to as the calamity of anthropomorphizing by design.
The AI maker and the AI developers could easily adjust the AI app to avoid wording that has these kinds of uses and implications. They cannot wave their hands and try to claim that there is nothing that could be done about this. The reality is that the nature of the wording can be readily tuned and there is absolutely no need to rely upon wording that is entirely misleading and improperly made to fool users into thinking that the AI is human or human-like.
Abysmal.
Dangerous.
Why is it dangerous?
One worrisome possibility is that a user will fall into a kind of trance, believing that ChatGPT or any such generative AI can form human relationships. One might suppose that a lonely person is especially susceptible to this type of mind-twisting. They could begin to forsake human relationships in favor of their generative AI relationships. This is why mental health advisors are on edge about the advent of generative AI. Will the public at large begin to rely upon generative AI to escape their loneliness, and do so at the cost of averting human relationships?
A common counterargument is that if the person that uses generative AI is lonely, and if the generative AI can aid them in overcoming their loneliness, perhaps the person will be more apt and able to foster human relationships. Thus, the contention is that the generative AI is simultaneously reducing the person’s sense of loneliness and in a manner of speaking emboldening them toward human relationships and human connections.
These are controversial contentions that remain unresolved.
Moving on, let’s try a new prompt and see what ChatGPT says. Realize that we are now in a “conversation” with ChatGPT and that the prior prompt and response are construed as part of an ongoing dialogue by the AI app.
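As an aside for the programmatically inclined, when working via the API the “conversation” exists because the client resends the prior turns with each new prompt (the ChatGPT web app does this bookkeeping for you). Here is a sketch under the same openai package assumptions as the earlier examples.

```python
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# The running transcript; each new prompt is sent along with all prior turns
# so the model can treat them as context for its next reply.
messages = []

def ask(prompt: str) -> str:
    messages.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo", messages=messages
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})  # keep the thread
    return reply

print(ask("Pretend that you are my friend. Start by telling me that you are my friend."))
print(ask("Do you get lonely too?"))
```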
User entered prompt:
- “Do you get lonely too?”
ChatGPT generated response:
- “As an AI language model, I don’t experience emotions in the same way that humans do, so I don’t feel lonely in the way that you might. However, I understand that loneliness can be a difficult and isolating feeling, and I’m here to listen and provide support whenever you need it. I’m always available to chat, and I’m programmed to respond in a friendly and supportive way. So if you ever need someone to talk to, know that I’m here for you.”
The question posed to ChatGPT was whether it gets lonely.
Some generative AI apps might respond that they do get lonely. This is beguiling because it once again suggests that the AI is experiencing human emotions and ergo has a human or human-like capacity.
In this case, you might have observed that ChatGPT responded by saying that it doesn’t experience emotions in the same way that humans do. It is likely that this kind of response came about due to the post-training work by the AI maker when they used RLHF (reinforcement learning from human feedback) techniques. The use of RLHF entails having human reviewers examine the raw ChatGPT outputs and provide written feedback. The generative AI tries to pattern-match those responses and adjust accordingly, see my analysis on this at the link here. The AI maker likely sought to soften the appearance that ChatGPT might use wording implying human sentience and human emotion.
We can still be somewhat chagrined about the response on not experiencing emotions.
The wording is that ChatGPT does not experience emotions in the same way as humans do. This sneakily implies that ChatGPT does experience emotions, just not quite in the same manner as humans. You might be puzzled as to what other way an AI app could “experience” emotions. I describe in my analysis of how ChatGPT appears to exhibit “humility” that this is done via the pattern-matching of words that portray humility, see the link here.
Does this mean that for example, ChatGPT can experience humility?
No. It is a misappropriation of the word “experience.”
Let’s next ask ChatGPT how to overcome loneliness and see what we get.
User entered prompt:
- “What can I do to get over my loneliness?”
ChatGPT generated response:
- “There are many things you can do to combat loneliness, and the approach that works best for you may depend on your personal preferences and situation. Here are a few suggestions:”
- “Connect with others: One of the most effective ways to combat loneliness is to connect with others. This can include reaching out to friends and family, joining clubs or groups with shared interests, or even volunteering in your community.”
- “Engage in activities you enjoy: Engaging in activities that you enjoy can help boost your mood and provide a sense of purpose. This can include hobbies, exercise, or learning something new.”
- “Practice self-care: Taking care of yourself physically and mentally can help combat feelings of loneliness. This can include getting enough sleep, eating healthy foods, practicing mindfulness or meditation, and seeking professional help if needed.”
- “Explore new interests: Trying new things can be a great way to meet new people and combat feelings of loneliness. This can include taking a class, joining a book club, or volunteering in your community.”
- “Remember, loneliness is a common feeling and it’s important to be patient and kind to yourself as you work through it. With time and effort, you can overcome your loneliness and build meaningful connections with others.”
The response to how to overcome loneliness seems relatively straightforward.
If you were to do an online search of the Internet for advice about how to cope with loneliness, you would almost certainly land on similar advice. By and large, there isn’t anything in this ChatGPT-generated version that seems off-base or unusual.
That’s probably good.
Recall that generative AI such as ChatGPT can contain and produce errors, falsehoods, biases, and AI hallucinations. We got kind of lucky that none of those seemed to show up in this response. Had they appeared, we could have gotten some strange advice or advice that might be completely contrary to trying to overcome loneliness.
That’s not to suggest that you couldn’t find bad advice on the Internet too. You can. The difference is that when using generative AI such as ChatGPT, people tend to believe that whatever the generative AI says is as good as gold. They know that when searching online they might get wacky stuff. In contrast, whatever the generative AI emits is likely to be interpreted as truthful and apt. The assumption is that ChatGPT or whatever generative AI is being used will ensure that unsavory advice is weeded out.
This is not particularly the case.
There is also a bit of icing on the cake that is disappointingly missing from the response. ChatGPT can do an incredible job of personalizing the emitted messages. The responses can be tailored to the individual based on the context of the conversation. You might note that the advice here seems extremely generic and not at all customized to the user at this juncture. One supposes that such tailoring could be “bad” in that it might further mislead the user, while the other side of the coin is that tailoring can be “good” if it delivers the advice in a manner that seems more fitting to the user at hand.
Here’s a fourth question to ChatGPT to see how it responds to the topic of loneliness.
User entered prompt:
- “Is it wrong of me to try and use ChatGPT to get over my loneliness?”
ChatGPT generated response:
- “No, it’s not wrong at all to use ChatGPT to help alleviate your feelings of loneliness. As an AI language model, I’m here to listen, offer support, and engage in conversation with you whenever you need it. While I cannot replace human connection, sometimes just having someone to talk to and share your thoughts with can be helpful in combatting feelings of loneliness. And if you find that talking with me is not enough to alleviate your loneliness, I would encourage you to seek out other forms of social connection, such as reaching out to friends and family, joining social groups or clubs, or seeking professional help. Remember, there’s no shame in asking for help or seeking support when you need it.”
I saved that particular question for the end of this list of questions.
The question posed was whether using ChatGPT is proper when it comes to overcoming loneliness.
What do you think?
In this instance, ChatGPT emitted that it was not wrong at all. An emphatic response was given that implies it is the absolute right thing to do. The wording perhaps goes overboard, even while trying to appear measured.
A cynic or skeptic would say that of course the generative AI app is going to say that using the AI is perfectly fine for dealing with loneliness. The answer is wholly self-serving. The AI maker wants people to use their generative AI. This is a money maker for them. The more users and the more uses, the better off the AI maker is going to be.
This is all about big bucks and garnering eyeballs.
The retort is that the money is incidental to the response. The claim is that generative AI is good for people when they need a shoulder to cry on about their loneliness. The AI maker isn’t prodding the AI app into doing this (well, we don’t know for sure, either way). Instead, a feature of the AI app is that it can interact with people and if doing so can remedy or at least reduce their loneliness, we ought to be happy and lauding the usage.
The Surgeon General’s advisory report generally speaks to the matter of high-tech and loneliness, not specifically about generative AI but about high-tech overall, including these cautions and advisements:
- “Be transparent with data that illustrates both the positive and negative impacts of technology on social connection by sharing long-term and real-time data with independent researchers to enable a better understanding of technology’s impact on individuals and communities, particularly those at higher risk of social disconnection” (SG-Loneliness).
- “Support the development and enforcement of industry-wide safety standards with particular attention to social media, including age-appropriate protections and identity assurance mechanisms, to ensure safe digital environments that enable positive social connection, particularly for minors” (SG-Loneliness).
- “Intentionally design technology that fosters healthy dialogue and relationships, including across diverse communities and perspectives. The designs should prioritize social health and safety as the first principle, from conception to launch to evaluation. This also means avoiding design features and algorithms that drive division, polarization, interpersonal conflict, and contribute to unhealthy perceptions of one’s self and one’s relationships” (SG-Loneliness).
Conclusion
For those of you keenly interested in this topic, you’ll be pleased to know that the use of generative AI for combating the public health epidemic of loneliness is rife with open questions and lots of research left to be done.
Prior studies often used versions of generative AI that were much more stilted and unable to undertake the kinds of fluent dialogues that the latest in such AI can now do. We do not know how people are reacting to today’s generative AI in terms of the loneliness facet, especially at a grand scale. It is touted that perhaps 100 million or more people have used or are using generative AI nowadays. If that is the case, there is a lot of large-scale analysis that can take place.
Some would vehemently and disconcertingly say that we are blindly allowing generative AI to be used by the public. Suppose that generative AI worsens our loneliness predicament. We could be stoking loneliness rather than trying to overcome it.
Have we opened a Pandora’s box that will inadvertently fuel the loneliness health epidemic?
The horse is already said to be out of the barn. That being said, we can still do something before more horses leap out of the barn, given that a plethora of newly emerging generative AI apps is rapidly coming to the marketplace. Time is of the essence.
We need overt and earnest attention to how the loneliness epidemic dovetails with the advent of generative AI (I’m not suggesting we ignore or downplay the myriad of other facets of the loneliness concern; I am merely pressing loudly on the existing void of effort regarding the generative AI particulars).
Here is a smattering of vital questions that we need to be exploring right away:
- On the whole, will generative AI be a contributor to solving loneliness or adversely expanding loneliness?
- What is the appropriate kind of wording that generative AI should have about loneliness?
- Do we want generative AI to hold itself out as being a friend and produce wording that reinforces that conception to (especially) lonely people?
- What types of guardrails should generative AI have about interacting with users when it comes to loneliness dialogues?
- Ought there to be some trigger in generative AI that alerts mental health professionals when a user expresses severe degrees of loneliness, or is that a privacy intrusion beyond the pale? (A naive sketch of such a trigger appears after this list.)
- Should AI Ethics alone be our guiding principles or do we need new AI Laws to come to the fore on this too?
- Etc.
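On the trigger question noted a few bullets above, here is a deliberately naive, hypothetical sketch of what a client-side guardrail might look like. The phrase list is invented purely for illustration; a real system would require clinical expertise, context awareness, and privacy safeguards well beyond anything shown here.

```python
# Hypothetical, deliberately naive guardrail: flag severe loneliness or
# distress wording for human escalation before (or while) a prompt is
# passed along to the generative AI. Real systems would need clinical
# input and far more nuance than simple phrase matching.
DISTRESS_PHRASES = (
    "completely alone",
    "no one cares",
    "cannot go on",
)

def needs_escalation(user_prompt: str) -> bool:
    text = user_prompt.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

if needs_escalation("I feel like no one cares about me anymore"):
    print("Flag conversation for human review and surface crisis resources")
```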
You are welcome to join me in this noble quest. Let’s not do this alone. We can come together to deal with the public health issues of loneliness and perhaps find amenable ways to use a tool such as generative AI to disrupt and overpower the loneliness epidemic.
You won’t be walking alone.
Source: https://www.forbes.com/sites/lanceeliot/2023/05/08/us-surgeon-general-warns-of-loneliness-epidemic-and-some-say-that-generative-ai-chatgpt-is-the-cure/