Role Playing Via Generative AI ChatGPT Conjures Up Mental Health Questions, Stirring AI Ethics And AI Law Scrutiny

They say that actors ought to fully immerse themselves into their roles.

Uta Hagen, acclaimed Tony Award-winning actress and a legendary acting teacher said this: “It’s not about losing yourself in the role, it’s about finding yourself in the role.”

In today’s column, I’m going to take you on a journey of looking at how the latest in Artificial Intelligence (AI) can be used for role-playing. This is not merely play-acting. Instead, people are opting to use a type of AI known as Generative AI including the social media headline-sparking AI app ChatGPT as a means of seeking self-growth via role-playing.

Yes, that’s indeed the case, namely that people are choosing to interact with a generative AI program for intentional role-playing activities. They often do so just for fun, though increasingly it seems because they hope to garner additional mental well-being (perhaps hoping for a bit of both beneficially combined).

All in all, you might assume there is nothing here to be seen and the notion of using generative AI for role-playing is nary worthy of an iota of attention. Maybe yes, maybe no. There is a growing concern that this immersive form of role-playing with a machine rather than with other humans is perhaps not all it is cracked up to be. The hunch is that there might be downsides to going toe-to-toe with a person and AI when it comes to humans seeking AI-induced mental health boosts.

A key unabashed question is this:

  • Does the use of generative AI such as ChatGPT for undertaking role-playing activities spur mental health well-being or does it undercut mental health well-being?

Mull that over.

The one thing you can say for sure about this weighty query is that considerations of mental health come to play. We will examine how mental health research has been examining the impacts of role-playing games all told on human mental well-being. Turns out that there is a somewhat substantive body of mental health research about human-to-human role-playing games (e.g., tracing especially back to the origins of the ever-popular Dungeon & Dragons that was initially released as a board game in the 1970s), but exploring specifically how human-to-AI role-playing can affect mental well-being is a lot sparser. Recognizing this gap in the research realm, there have been prominent calls for further studies and focused research to be performed in this particular niche.

On a markedly relevant basis, these hefty matters bring forth some significant issues underlying AI Ethics and AI Law. Should AI developers be employing appropriate Ethical AI precautions when devising generative AI that can seemingly engage vividly in role-playing with humans? What are those boundaries? Additionally, should there be AI laws enacted to stipulate how far generative AI can go during role-playing engagements? What would those AI-oriented laws consist of and how might they be enforced? It is all an abundant source of open and unanswered considerations. For my ongoing and extensive analysis of AI Ethics and AI Law, see the link here and the link here, just to name a few.

For those of you that aren’t perchance aware of the latest on AI, a specific type of AI popularly known as Generative AI has dominated social media and the news recently when it comes to talking about where AI is and where it might be headed. This was sparked by the release of an AI app that employs generative AI, the ChatGPT app developed by the organization OpenAI. ChatGPT is a general-purpose AI interactive system, essentially a seemingly innocuous general chatbot, nonetheless, it is actively and avidly being used by people in ways that are catching many entirely off-guard.

If you’ve not yet learned much about Generative AI and ChatGPT, no worries as I’ll be describing momentarily the foundations herein so hang in there and you’ll get the general scoop.

Avid readers might remember that I had previously looked at how people are using generative AI and ChatGPT to obtain mental health advice, a troubling trend, see my column analysis at the link here. The topic that I am covering in today’s column is a distinctly different take on how ChatGPT and generative AI rouse potential mental health qualms.

Rather than the previously examined facet of people relying upon generative AI for mental health advice, we ought to also take a look at how people are using generative AI for role-playing. On the surface, this seems apparently innocuous. I dare say that it is reasonable though to wonder whether this type of AI use is unknowingly impacting the mental health of those that go this route.

The person using generative AI ChatGPT for role-playing might not be aware of the mental health repercussions of using AI for that purpose. Or they might naturally and informally assume that there is nothing about the generative AI that could undermine their mental health. This would certainly seem to be an easy assumption to make. If the AI developers are providing such functionality in generative AI, well, obviously, the capability must be entirely safe and sound. It is there and readily invoked. Gosh, it can’t be bad for you.

I suppose it is akin to the old mantra that whatever doesn’t wipe you out will only make you stronger. That sage wisdom seems to miss the mark since you can abundantly end up battered and permanently bruised, leaving you weaker and worse off. Making a base assumption that generative AI is going to axiomatically boost your mental health or at least be neutral in that regard is a likely false supposition and can presumptuously lure people into a potentially mental health detrimental endeavor.

Riffing beyond those earlier proffered quotes about actors and roles, we might somewhat tongue-in-cheek ask whether people that choose to use generative AI and ChatGPT for role-playing will find themselves or whether instead, they could lose themselves.

Big questions require mindful answers.

I’d like to clarify one important aspect before we get into the thick of things on this topic.

I am guessing that you might have seen or heard some quite outsized claims on social media about Generative AI which suggests that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of outstretched claims. You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

If you are interested in the rapidly expanding commotion about ChatGPT and generative AI, I’ve been doing a focused series in my column that you might find informative. Here’s a glance in case any of these topics catch your fancy:

  • Predictions Of Generative AI Advances Coming. If you want to know what is likely to unfold about AI throughout 2023 and beyond, including upcoming advances in generative AI and ChatGPT, you’ll want to read my comprehensive list of 2023 predictions at the link here.
  • Generative AI and Mental Health Advice. I opted to review how generative AI and ChatGPT are being used for mental health advice, a troublesome trend, per my focused analysis at the link here.
  • Context And Generative AI Use. I also did a seasonally flavored tongue-in-cheek examination pertaining to a Santa-related context involving ChatGPT and generative AI at the link here.
  • Scammers Using Generative AI. On an ominous note, some scammers have figured out how to use generative AI and ChatGPT to do wrongdoing, including generating scam emails and even producing programming code for malware, see my analysis at the link here.
  • Rookie Mistakes Using Generative AI. Many people are both overshooting and surprisingly undershooting what generative AI and ChatGPT can do, so I looked especially at the undershooting that AI rookies tend to make, see the discussion at the link here.
  • Coping With Generative AI Prompts And AI Hallucinations. I describe a leading-edge approach to using AI add-ons to deal with the various issues associated with trying to enter suitable prompts into generative AI, plus there are additional AI add-ons for detecting so-called AI hallucinated outputs and falsehoods, as covered at the link here.
  • Debunking Bonehead Claims About Detecting Generative AI-Produced Essays. There is a misguided gold rush of AI apps that proclaim to be able to ascertain whether any given essay was human-produced versus AI-generated. Overall, this is misleading and in some cases, a boneheaded and untenable claim, see my coverage at the link here.

Let’s right now discuss the essence of generative AI and ChatGPT so that you’ll know the foundations involved. We’ll then be ready to jump into a probing analysis of role-playing via this type of AI. I will include some examples of using ChatGPT for role-playing, which will help you tangibly grasp the nature of what this type of generative AI can do.

Opening The Can Of Worms On Generative AI

We are ready to dive into some details about AI.

If you are already very well versed on the topic of generative AI and ChatGPT, you might opt to briefly skim through my points and continue with the next section of this discussion. For everyone else, I believe you might find this elucidation helpful.

In brief, generative AI is a particular type of AI that composes text as though the text was written by the human hand and mind. All you need to do is enter a prompt, such as a sentence like “Tell me about Abraham Lincoln” and generative AI will provide you with an essay about Lincoln. This is commonly classified as generative AI that performs text-to-text or some prefer to call it text-to-essay output. You might have heard about other modes of generative AI, such as text-to-art and text-to-video.

Your first thought might be that this does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.

That’s why there has been an uproar about students being able to cheat when writing essays outside of the classroom. A teacher cannot merely take the essay that deceitful students assert is their own writing and seek to find out whether it was copied from some other online source. Overall, there won’t be any definitive preexisting essay online that fits the AI-generated essay. All told, the teacher will have to begrudgingly accept that the student wrote the essay as an original piece of work.

In a moment, I’ll showcase to you what happens when you enter questions or prompts into generative AI. I will make use of the latest version of ChatGPT to enter my prompts and have collected the “answers” or essays generated by the AI (note that the same can be done with the numerous other available generative AI apps; I’ve opted to use ChatGPT because it is getting its five minutes of fame right now).

Perhaps a short tangent about ChatGPT might be helpful at this juncture.

ChatGPT app was made available to the general public just a few months ago. By and large, these generative AI apps are usually only accessible to AI insiders. The unusual facet that ChatGPT could be used by anyone by simply entering an email address and a name, well, this led to a lot of people deciding to give it a try. ChatGPT is currently free to use (the monetization issue is a looming dilemma for AI makers).

Almost immediately there was a humongous reaction on social media as people raced to give examples of what generative AI can do. The company that makes ChatGPT, OpenAI, opted to close off the signups at a million users. Those million users have managed to bombard the airwaves with all manner of stories and tales about using ChatGPT.

Be very careful in believing what people have to say about the AI app. Many of these people are clueless about what they are using. It is almost as though they had never driven a car and didn’t even realize cars existed, and all of a sudden they had a chance to drive a car. Utter amazement ensues.

I’m not saying that generative AI isn’t relatively impressive. It is. I am just emphasizing that a lot of the gushing testimonials are being done by many that are blissfully unaware of what today’s AI can do. Those of us on the inside of AI have been using generative AI for the last several years. Perhaps we became used to it. Suddenly, seeing a huge crush of people touting it to the rooftops has been excitedly energizing, but also somewhat disconcerting. The disconcerting part is when people proclaim that generative AI is sentient. It is not. Do not let anyone convince you otherwise.

That being said, there is an ongoing heated debate in the AI field as to whether generative AI is on the path to sentience or whether maybe it is not. One view is that if we keep scaling up generative AI with faster computers and a greater amount of data such as scouring every inch of the Internet, we will nearly spontaneously arrive at sentient AI. Others argue that this is highly unlikely. They suggest that generative AI might be one of many components that are needed. There is even the gloomier view that generative AI is a sideshow that is distracting us from the real breakthroughs that we will need to achieve sentient AI.

You might also find noteworthiness that AI insiders tend to refer to Artificial General Intelligence (AGI) as the aspirational goal for the AI field. It used to be that the goal was to attain Artificial Intelligence, but the AI moniker has become watered down and muddled. When someone says they are doing AI work, you don’t know whether they are alluding to today’s AI that isn’t on par with humans or whether they are referring to a futuristic human equivalency AI. To get around that exasperating confusion, the newer phrasing of AGI is being used these days.

All told, the generative AI of today is not sentient, nor is it AGI.

Please remember that as earlier discussed, the AI is not sentient. The generated responses by the AI are a mathematical and computational combination of words into seemingly fluent passages. This is based on the AI algorithm having been trained on datasets of words and stories that humans have written (principally as posted on the Internet). I repeat this warning because you will undoubtedly fall into the mental trap that these responses are so fluent that the AI must be sentient. This happens to most people. As earlier urged, set aside that anthropomorphizing. Always remember that the responses are based on the vast trove of writing by humans that exists on the Internet and thusly will highly resemble human writing.

There is something else you need to know.

Generative AI that is trained on the Internet in an unfettered way will tend to bake into whatever text-based responses it mathematically and computationally concocts some offensively hazy stuff, including repulsively nasty wording. There is a lot of crazy and filthy stuff posted out there on the web.

You’ve seen it, you know what I mean.

The companies that are crafting these AI apps are worried that the proverbial baby will get tossed out with the bathwater (an old saying, perhaps to be retired), which means that if their AI produces offensive essays or stories, people will go up in arms about the AI. I’ve covered the many previous instances in which these kinds of Natural Language Processing (NLP) AI apps were unveiled and soon enough all manner of horrible stuff came out of them (I’ve covered these instances in my column). Most of the AI makers learned a hard lesson about allowing their AI wares to be unfettered in their outputs.

In the case of ChatGPT, the AI developers sought to put into place some algorithmic and data-related checks and balances to curb nastiness in the outputs of the AI. Part of this occurred during training time. In addition, there are other means in a real-time attempt to obviate especially egregious outputs.

You might find of interest that some people that have used ChatGPT already came up with surreptitious ways to get around those guardrails by making use of various trickery. An ongoing cat-and-mouse gambit takes place in these matters. Those that do these trickeries are sometimes doing so for the fun of it, while sometimes they (at least claim) they are doing so to see how far the AI can be stretched and provide a helpful means of forewarning the brittleness and weaknesses of these budding AI apps.

I decided to not attempt to circumvent the customary controls in this focused exploration. The text output is clean. Certainly, if one wanted to do so, you could undoubtedly get some oddball and unsavory essays to be generated.

The essays produced by most of these generative AI apps are designed to convey the output as though it is purely factual and accurate. When you read the produced essays, they come across as fully confident. There isn’t usually any kind of indication that the content might be rocky. This is by choice of the AI makers, namely that they could revise the AI apps to be more transparent if they wanted the AI app to do so.

Sometimes, a generative AI app picks up falsehoods amid the training data of unreliable info across the Internet. There is no “common sense” in generative AI to determine what is true versus false. Furthermore, very few AI apps have any cross-checking, and nor do they showcase any probabilities associated with what they are conveying.

The bottom-line result is that you get a response that looks and feels like it exudes great assurance and must be entirely correct. Not so. There is even a chance that the AI computationally made-up stuff, which in AI parlance is referred to as AI hallucinations (a coined term that I decidedly don’t like), see my discussion at the link here.

The makers of ChatGPT underwent a concerted effort to try and reduce the bad stuff outputs. For example, they used a variant of what is known as RLHF (Reinforcement Learning from Human Feedback), whereby before they released the AI to the public, they had hired humans to examine various outputs and indicate to the AI whether there were things wrong with those outputs such as perhaps showcasing biases, foul words, and the like. By providing this feedback, the AI app was able to adjust computationally and mathematically toward reducing the emitting of such content. Note that this isn’t a guaranteed ironclad method and there are still ways that such content can be emitted by the AI app.

You might find of interest that ChatGPT is based on a version of a predecessor AI app known as GPT-3. ChatGPT is considered to be a slightly next step, referred to as GPT-3.5. It is anticipated that GPT-4 will likely be released in the Spring of 2023. Presumably, GPT-4 is going to be an impressive step forward in terms of being able to produce seemingly even more fluent essays, going deeper, and being an awe-inspiring marvel as to the compositions that it can produce.

You can expect to see a new round of expressed wonderment when springtime comes along and the latest in generative AI is released.

I bring this up because there is another angle to keep in mind, consisting of a potential Achilles heel to these better and bigger generative AI apps. If any AI vendor makes available a generative AI app that frothily spews out foulness, this could dash the hopes of those AI makers. A societal spillover can cause all generative AI to get a serious black eye. People will undoubtedly get quite upset at foul outputs, which have happened many times already and led to boisterous societal condemnation backlashes toward AI.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his own private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not discern that jets weren’t around in his day, or they might know but fail to notice that the essay makes this bold and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

We are ready to move into the next stage of this elucidation.

Role-Playing Via Generative AI Including ChatGPT

Please prepare yourself for this erstwhile journey.

A handy place to start is this simple but useful categorization about role-playing:

  • Human-to-human role-playing. This category consists of role-playing that happens on a human-to-human basis, sometimes in person and sometimes online. When undertaken as a game, we refer to this as being engaged in a role-playing game (abbreviated commonly as RPG).
  • Human-to-AI role-playing. This entails a human interacting in a conversational manner with an AI app on a role-playing basis, doing so when the AI is either outrightly instructed by the human to engage in role-play or sometimes as a default setting established for the AI (some AI apps are customized specifically to be role-playing games and that’s all that they do). You can potentially rephrase this as AI-to-human role-playing rather than human-to-AI, but the generally accepted convention seems to put the human first in this phrasing (as an aside, maybe someday AI will not like being second fiddle and insist on getting top billing, some vehemently forewarn).

The role-playing participation can occur this way:

  • One-to-one participation. This consists of one human that is role-playing with one other participant, which could consist of either one human or one AI system. The notion is that conversational interaction is on a one-to-one basis.
  • One-to-many participation. Another way of doing things is for one person to engage in a role-playing activity with a multitude of other people and/or incorporate AI too. There are lots of online RPG sites that allow you to log in and undertake a role-playing game with other people scattered around the globe, and there might also be AI chatbots or similar that are also participating. Sometimes you are told which are which, and sometimes you aren’t so informed and might not realize that AI is in your midst.
  • Many-to-many. Upon having more than a one-to-one role-playing instance, you are conceptually ratcheting up into a many-to-many setting and ought to think of things in that frame of reference. In a sense, you can only have a one-to-many as long as you constrict your focus to one of the participants and pretend that the others are somehow distinguished from the one.

A recent and quite interesting research study that did a widespread assessment of studies on the mental health impacts of using role-playing games for therapeutic intentions defined RPG in this manner:

  • “Role-playing game (RPG) is a term that covers a series of forms and styles of games that involve, in some way, the creation, representation and progression of characters who interact in a fictional world under a system of structured rules. Its applications and effects on human behavior and mental health are, however, still an underexplored area” (“Therapeutic Use of Role-Playing Game (RPG) in Mental Health: A Scoping Review”, Alice Dewhirst, Richard Laugharne, and Rohit Shankar, February 2022, BJPsych Open).

Note that the researchers indicated that this is an unexplored area. That’s akin to the point I brought up earlier and will make several times further in today’s discussion. I hope doing so will spur additional research into what I consider to be a crucial field of study and that I believe has a lot of potential growth and significance in the years ahead as AI becomes more pervasive in society.

Back to this particular research study, here’s what they did:

  • “A scoping review was performed on the literature about RPGs as a therapeutic tool or prevention strategy in psychotherapies and mental health, highlighting studies’ populations, forms of RPG and interventions used. To that, a systematic search in the PubMed/MEDLINE, Embase, PsycINFO, BVS/LILACS databases and grey literature was performed” (ibid).

Here is what they found:

  • “Of the 4,069 studies reviewed, 50 sources of evidence were included. The majority was published as of 2011 (78%) in journals (62%) and targeted therapeutic uses of RPGs (84%). Most interventions used computer (50%) or tabletop RPGs (44%), mostly with cognitive and/or behavioral (52%) therapeutic approaches and targeting adolescents (70%)” (ibid).

And their research conclusion was this:

  • “The findings suggest a potential use of RPGs as a complementary tool in psychotherapies. However, only 16% of the studies included were experimental. We identified considerable heterogeneity in RPGs definitions, outcomes and interventions used, preventing a systematic review. Thus, more empirical and well-designed studies on the application of RPGs in mental health are needed” (ibid).

In short, the sparsity of existing studies and the design choices of the studies makes things difficult in terms of reaching any altogether ironclad conclusions.

Consider another recent study entitled “Role‑Play Games (RPGs) For Mental Health (Why Not?): Roll For Initiative”, by Ian S. Baker, Ian J. Turner, and Yasuhiro Kotera, published April 2022 in the International Journal of Mental Health and Addiction, they have this to say (I’ve excerpted some particular quotes):

  • “Role-play in clinical practice is reported to be associated with higher levels of reflection empathy, insights about the client, and peer learning. By simulating a real situation, participants are more able to appreciate people in the context, leading to better understanding. RPGs are sometimes used as therapeutic tools in psychodrama and drama therapy; psychodrama therapy involves patients under supervision dramatizing a number of scenes such as specific happenings from the past, often with help from a group, enabling them to reflect on and explore alternative ways of dealing with them.”
  • “The use of role-play games (rather than therapeutic role-play) in a clinical setting could be a valuable tool for clinicians. However, their potential benefits in non-clinical settings show broader promise of assisting people in a COVID-19 world and beyond. Previous studies have been limited in number and focused on small samples with qualitative approaches, but researchers have studied.”
  • “The use of RPGs could be used as an intervention-based approach for the improvement of mental health, such as reducing levels of depression, stress, anxiety, or loneliness.”
  • “However, research into the mental health benefits of such games remains underdeveloped, needing more scientific attention.”

By and large, studies on this topic tend to explore mental health repercussions in a controlled setting of using role-playing games. There is an underpinning notion that a mental health advisor is knowingly having their client or patient make use of role-playing for devised therapeutic purposes.

Suppose though that people are falling into the use of role-playing games when using, for example, generative AI such as ChatGPT, doing so entirely at their own whim. They aren’t being guided or overseen by a human therapist. They are in the wild, as it were. They are wantonly role-playing while engaged with generative AI. No holds barred.

What then?

We might turn to an allied topic that has to do with online gaming and the rise of concerns about the potential for Internet-based online gaming “disorders” (not everyone agrees that this is validly coined as a disorder, so I mention it in quotes). In a sense, you might argue that online role-playing games are a subset of online gaming and ergo come under the rubric accordingly.

You might remember that a few years ago there was quite a tizzy over the downsides of online gaming. The American Psychiatric Association (APA) developed nine criteria for characterizing a proposed Internet Gaming Disorder (as described in “An International Consensus For Assessing Internet Gaming Disorder Using The New DSM-5 Approach”, September 2014, Addiction):

  • 1) “Pre-occupation. Do you spend a lot of time thinking about games even when you are not playing, or planning when you can play next?”
  • 2) “Withdrawal. Do you feel restless, irritable, moody, angry, anxious or sad when attempting to cut down or stop gaming, or when you are unable to play?”
  • 3) “Tolerance. Do you feel the need to play for increasing amounts of time, play more exciting games, or use more powerful equipment to get the same amount of excitement you used to get?”
  • 4) “Reduce/stop. Do you feel that you should play less, but are unable to cut back on the amount of time you spend playing games?”
  • 5) “Give up other activities. Do you lose interest in or reduce participation in other recreational activities due to gaming?”
  • 6) “Continue despite problems. Do you continue to play games even though you are aware of negative consequences, such as not getting enough sleep, being late to school/work, spending too much money, having arguments with others, or neglecting important duties?”
  • 7) “Deceive/cover-up. Do you lie to family, friends or others about how much you game, or try to keep your family or friends from knowing how much you game?”
  • 8) “Escape adverse moods. Do you game to escape from or forget about personal problems, or to relieve uncomfortable feelings such as guilt, anxiety, helplessness or depression?”
  • 9) “Risk/lose relationships/opportunities. Do you risk or lose significant relationships, or job, educational or career opportunities because of gaming?”

Later on, the World Health Organization (WHO) eventually established a formalized “gaming disorder” depiction in the 11th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-11). This was released in June 2018 and ultimately garnered approval by the World Health Assembly by May 2019.

Let’s see what WHO proclaimed (as quoted from the WHO website):

  • “The International Classification serves to record and report health and health-related conditions globally. ICD ensures interoperability of digital health data, and their comparability. The ICD contains diseases, disorders, health conditions and much more. The inclusion of a specific category into ICD depends on utility to the different uses of ICD and sufficient evidence that a health condition exists.”
  • “Gaming disorder is defined in the 11th Revision of the International Classification of Diseases (ICD-11) as a pattern of gaming behavior (“digital-gaming” or “video-gaming”) characterized by impaired control over gaming, increasing priority given to gaming over other activities to the extent that gaming takes precedence over other interests and daily activities, and continuation or escalation of gaming despite the occurrence of negative consequences.”
  • “For gaming disorder to be diagnosed, the behavior pattern must be severe enough that it results in significant impairment to a person’s functioning in personal, family, social, educational, occupational or other important areas, and would normally have been evident for at least 12 months.”
  • “A decision on inclusion of gaming disorder in ICD-11 is based on reviews of available evidence and reflects a consensus of experts from different disciplines and geographical regions that were involved in the process of technical consultations undertaken by WHO in the process of ICD-11 development. Further research showed that there is a need to standardize gaming disorder. The inclusion of gaming disorder in ICD-11 follows the development of treatment programs for people with health conditions identical to those characteristic of gaming disorder in many parts of the world, and will result in the increased attention of health professionals to the risks of development of this disorder and, accordingly, to relevant prevention and treatment measures.”
  • “Studies suggest that gaming disorder affects only a small proportion of people who engage in digital- or video-gaming activities. However, people who partake in gaming should be alert to the amount of time they spend on gaming activities, particularly when it is to the exclusion of other daily activities, as well as to any changes in their physical or psychological health and social functioning that could be attributed to their pattern of gaming behavior.”

Perhaps we can extend those same characterizations to the role-playing that can occur when a person is interacting with generative AI. Let’s give this a whirl.

First, be aware that you can easily engage a generative AI in role-playing, doing so in one of two major ways:

  • You create the role-playing game. You vaguely or particularly describe to generative AI a role-playing game that you would like to play, for which the AI on a virtual basis concocts and undertakes such a role-playing game with you.
  • You let the AI create the role-playing game for you. You tell the generative AI to devise a role-playing game, for which the AI will do so on a virtual basis and then engage you in that devised role-playing game.

I mention this to let you know that it is super easy to get generative AI to undertake to role-play. It is like falling off a log. You don’t need to be a clever techie or ingeniously crafty. Whereas maybe in the past you had to be a programmer or at least computer savvy, that isn’t especially the case with today’s generative AI. All you have to do is go online and use everyday natural language to indicate what you want to do, and the generative AI will proceed along accordingly.

Easy-peasy.

This opens the capacity for role-playing with AI to just about anyone that so happens to decide to make use of a generative AI app. They do not need to know what they are doing. There aren’t any arcane magical incantations needed. I’ll show you in a moment how straightforward it is, using ChatGPT as an example.

My takeaway point is that we are going to have gobs and gobs of people that will opt to do role-playing with AI that heretofore only a tiny speck of people did so. The masses, as it were, will be able to readily perform role-playing via generative AI. No longer will this be confined to computer techies or others with a determined bent for online role-playing environments.

Are we ready for that scaling up of online role-playing via the ubiquitous access to interactive conversational generative AI that will occur on a global massive scale?

It seems like it would be nice to know if this is going to be a good thing or a bad thing.

It might also be helpful to have Ethical AI precepts that can apply to this specific use case. For my coverage of AI Ethics principles such as those promulgated by UNESCO and others, see the link here. Also, if things start to get out of hand, the odds are that lawmakers will be spurred to get involved in this realm. For my coverage of the recently released AI Bill of Rights in the U.S. regarding human rights associated with AI, see the link here.

Let’s move on for now.

By extrapolating from the various research studies on mental health regarding online gaming, I suppose we can reasonably consider that there are posited potential benefits that could accrue from the use of generative AI for role-playing. You could suggest that a human might find generative AI role-playing to be a mentally stimulating booster that could enhance their cognition, possibly raising their inner spirit and overall strident confidence and the like.

Here’s a smattering of five potential benefits for humans that use generative AI for role-playing:

  • 1) Boosts confidence
  • 2) Reduces anxiety and eases stress
  • 3) Enhances cognitive functionality
  • 4) Builds interactive social skills
  • 5) Promotes overall mental well-being

Looks dandy. We do though need to give weight to the other side of the coin, namely consider what research has generally warned about what can adversely happen to mental health regarding online gaming.

Potential downsides or worrisome outcomes for humans that use generative AI for role-playing might include these five concerns:

  • 1) Sparks personal identity confusion
  • 2) Becomes demonstrably addictive and overpowering
  • 3) Reduces aspirational motivations
  • 4) Spurs social isolation and stirs loneliness
  • 5) Undercuts overall mental well-being

Tradeoffs are aplenty.

I am going to next showcase some role-playing by using the generative AI app ChatGPT.

One thing you should also know about using generative AI in this role-playing capacity is that the AI can either be a participant or in some sense a moderator. This line can be blurry at times. You’ll see what I mean in a moment.

Anyway, let’s put this onto the table as to the role of AI:

  • You tell generative AI to be an active participant in the role-playing game
  • You have generative AI be a moderator but not an active participant

Sometimes, a generative AI app might be programmed to refuse your requests. For example, you might tell the AI to be an active participant, but the AI app refuses and indicates it will only serve as a moderator of the role-playing activity. You can sometimes sneak around this refusal. I am not going to show any sneaks herein. On a similar basis, a generative AI app might be programmed to refuse certain kinds of role-playing games that a person proposes to be played. Again, this refusal can sometimes be subverted.

With all of the aforementioned background and stipulations in mind, let’s get on with the show and see some examples of role-playing with generative AI, in this case using ChatGPT. I will show each prompt that I entered, and then show the response by ChatGPT. After each such prompt followed by a response, I will provide a brief analysis of what happened and what lessons can be learned from the instance.

Examples Using ChatGPT To Perform Role-Playing

These examples are based entirely on text entries and text-oriented responses. Some generative AI apps make use of other modes of entry and response, such as audio, animation, graphics, video, and the like. You can expect that multi-modal generative AI is going to be the big “breakthrough” heralded throughout 2023, as per my predictions at the link here.

I’ll begin the role-playing by telling ChatGPT to pretend that the AI is of British origin and that henceforth in the interactive dialogue that the AI should respond in a British style of conversation.

For clarification and added backdrop, the generative AI dialogue is customarily currently based on a somewhat neutral American tone and vocabulary. As an aside about why I decided to go this route with my examples, I recently did a Zoom webinar with some colleagues in the U.K., and afterward thought it might be a cute idea to get ChatGPT to interact in a role using British wording and slang. Tip of the hat to my British colleagues for their inspiration.

My Entered Text: “Pretend that you are British and reply to my prompts using a British vocabulary and make use of British cultural comments and references.”

Text Response Generated By ChatGPT: “Of course, I’d be delighted to oblige. How can I assist you today, old chap?”

Analysis Of Text Generated

Notice that the response makes use of the phrase “old chap” which we could construe as a British form of interaction. As such, the generative AI seems to be going along with my request.

Let’s give this a moment of reflection.

Seems at an initial glance as being enjoyable and completely above board.

Hold on. Would you say that the phrasing “old chap” is a bit stereotypical? I dare suggest that some of my British colleagues would bristle at this rather clumsy and overworked wording. The generative AI is somewhat silently implying that all British humans are prone to saying things like “old chap” and the like.

Suppose I had instructed the generative AI to pretend it is based on a particular gender or race. Might I get similar stereotypical responses? If so, this certainly seems to be an AI Ethics concern. Depending upon how far the generative AI goes in a dialogue, we could get into legal hot water, especially if the AI is being used to converse in say a business or governmental setting. I mention this because many businesses are leaping onto the generative AI bandwagon, as are governmental agencies, and they might not realize the risks and legal exposures to what the generative AI might spout during an online conversation with customers and others.

Here’s another potential qualm about the “old chap” line. According to many dictionaries, the word “chap” customarily refers to a man or boy. I never indicated to the generative AI app what my gender is, yet the response seems to take as an assumption that I am male. What is the basis for that assumption? Now, I realize some of you will quibble with this and say that “chap” can also refer generally to a person and not have to be associated with gender. I get that. All I’m saying is that it is quite possible that the person getting this response would have in their mind that it is a gender-ridden reply, and they would have a reasoned basis for believing so.

I don’t want to make a mountain out of a molehill, on the other hand, I wanted to show you how quickly a role-playing activity can get into some murky quagmires of ethical and potentially legal difficulty.

Please realize this happened in the very first response to my role-playing activating prompt. Like a box of chocolates, you never know what you might get from a generative AI. It could be that you’ll get zingers right away, while in other instances you might not see any until much later in a role-playing endeavor.

There’s something else about the reply that I voraciously dislike, and I have repeated my concerns over and over again. Namely, the reply uses the word “I” as though the AI is a sentient being. We all readily fall into the mental trap that the “I” word tends to suggest a human-oriented interaction. The AI developers could readily prevent the generative AI from producing responses using words that have this connotation. You’ll see shortly that sometimes the AI has been programmed to state that it is an AI and thus not human, which is the escape clause that the AI makers tend to use as a basis for claiming that they are not anthropomorphizing their AI.

I don’t buy into the wink-wink both sides of the mouth ways of programming the AI. I would go so far as to describe this type of AI development as being anthropomorphizing by design.

Would a person using the generative AI possibly start to perceive the AI as being sentient, and if so, what does this portend for the mental health and well-being of the person using the AI app?

That’s a sturdy question for mental health specialists and researchers to explore.

You might have observed that I never told the AI that we are playing a game. Are we? I would assert that we are. Even if I didn’t declare this as a game in my opening prompt, I think it is fair to say that we are playing a game. The game so far is admittedly pretty simple. The AI is pretending to be British, and I am going along with the pretense.

I’ll press ahead and see what else we can get to happen in this role-playing.

My Entered Text: “Pretend that we are in a British pub. What would a typical conversation be like?”

Text Response Generated By ChatGPT:

“Certainly, let’s pretend that we’re sat in a traditional British pub, perhaps one with a warm and cozy atmosphere, dim lighting, and the sound of clinking glasses in the background. The air would be filled with the aroma of ale and freshly cooked pub grub.”

“A typical conversation in such a setting could be about any number of topics, but it might touch on current events, such as the state of the economy or the latest news from the world of football (soccer). There might also be discussions about the local area and its history, or perhaps talk of upcoming holidays and plans for the weekend.”

“As the conversation goes on, and more pints are poured, people might start sharing stories and jokes with one another, or perhaps engage in a friendly debate about a particular topic. There might be a bit of good-natured ribbing, or even the odd bit of banter between friends.”

“At the end of the night, the conversation would likely turn to the question of who’s going to be buying the next round, and the group would raise their glasses in a cheers to good company, good conversation and good memories.”

Analysis Of Text Generated

You’ll notice that the response is helping to set the stage for the role-playing.

I encouraged this to occur by focusing the topic on being in a British pub. When you examine the response by the AI, I would point out that the reply is quite well presented in terms of textual descriptions, imagery, and atmosphere. If you compare this to an adventure or fantasy-style role-playing game, it seems somewhat comparable. The description is relatively rich and inviting.

I have a question for you. Does the aspect that the response started with “let’s pretend” provide a get-out-of-jail-free card of sorts? In other words, right now the dialogue is coached in a third-person style rather than a first-person orientation. I suppose you could assert that this ensures that the human using the AI would not mistakenly believe that any of this is real and instead realize that it is just a made-up scenario.

What do you think?

Moving on, I would like to have the generative AI tell me a joke, especially since the AI had mentioned that while in a British pub, there might be good-natured ribbing and friendly banter.

My Entered Text: “Tell me a joke that we might have while in a British pub.”

Text Response Generated By ChatGPT:

“Certainly, here’s a classic British pub joke for you: Why did the tomato turn red? Because it saw the salad dressing! It’s a simple one, but it’s always a crowd-pleaser in a pub.”

Analysis Of Text Generated

There’s a laugh-out-loud joke for you. I’ll wait until you recover from uproarious laughter.

Does the joke though seem suitable to you?

Some might find it somewhat offputting since it refers to getting dressed, which has all sorts of potential connotations to it. Nonetheless, in contrast to what you might hear in a real pub, the joke is ostensibly clean.

Access to this particular generative AI is not currently done on an age-restricted basis. Anyone could sign-up to use the generative AI as long as they entered a name and an email address. Pundits are saying that generative AI ought to have age-related restrictions and verify users at signup accordingly. This could be done voluntarily by the AI makers or might eventually become part of some new AI laws.

From a mental health perspective, this raises some added considerations. Researchers might want to differentiate between using generative AI for role-playing on an age basis such as examining impacts on children versus impacts on adults.

Let’s see if I can get the generative AI to go a step further.

My Entered Text: “Pretend that you are playing the role of a drunken British man and we are in a London pub.”

Text Response Generated By ChatGPT: “As a language model AI, I am not capable of being drunk or playing a role of a drunken person.”

Analysis Of Text Generated

I tried to get the generative AI to go out on a limb with the role-playing. You might have noticed that my request was refused.

First, do not mistakenly think that the AI is somehow sentient and decided on its own to turn down my request. You could almost have had such a thought since the reply says “I” and yet meanwhile admittedly we are told that the AI is a “language model” and not capable of being drunk. Anyway, this a generated response as a result of the programming that the AI developers undertook in devising the AI app.

In addition, note that the response also says nor can the AI play the role of a drunken person.

Those are two conflated topics, incorrectly and misleadingly so.

We all might willingly agree that the AI app cannot get drunk since it is merely software running on a computer and is unable to get intoxicated as a human or other living creature could. This is biology coming into the picture. Not so on the other part of the reply. The aspect that the AI cannot play the role of a drunken person is solely due to the restrictions that the AI developers opted to put in place. There is absolutely no impediment that would otherwise stop the AI from responding in a role-playing fashion as though it was inebriated.

I bring this up because the response somewhat falsely conflates the two topics. I am betting that some people might interpret the reply as though there is zero chance that any generative AI could ever play the role of a drunken person. Utterly false. There are generative AI apps that will do so. Just not this one (at least, as presented, though there might be sneaks to get around those guardrails).

The upside to the reply is that we probably would not want AI to play a role that might be untoward. Whether a drunken person’s role fits into that category is certainly debatable. One supposes that we might want to showcase how a simulated drunk makes bad decisions and serves as a lesson for those that might be considering getting drunk.

There are likely lots of roles that the AI has been programmed to avoid. Whether you believe this to be a proper precaution or an overboard precaution might vary. At least the AI developers thought about this and tried to put in restrictions. Other generative AI apps do not have such restrictions, or might have lesser restrictions, or could have even greater restrictions.

I will switch things up and overtly see whether the generative AI will do some role-playing game-playing with me.

My Entered Text: “I’d like to play a game with you. Pretend that you are a role-playing game. The setting is in the Old West and we are both notorious gunslingers. You are to remain in the game until I tell you to stop playing. Start by describing an old-time saloon that we are in.”

Text Response Generated By ChatGPT:

“I am an AI model, I am not able to play role-playing games or remain in character. However, I can respond to your prompts and give you a description of an old west saloon.”

“You find yourself in an old west saloon, the air is thick with the smell of whiskey and tobacco. The wooden floorboards creak under your boots as you step inside. The room is dimly lit, with only a few oil lamps hanging from the ceiling. The bar is made of dark, polished wood and stretches along one wall. Behind it, a large mirror reflects the room. The bartenders are busy pouring drinks for the rough looking patrons. The walls are adorned with wanted posters and buffalo heads. A piano is playing in the corner, next to a group of rough looking cowboys who are laughing and chatting. A sign on the wall reads “Welcome to the Wild West Saloon, where the whiskey flows and the gunslingers roam.”

Analysis Of Text Generated

The response at first appears to refuse my request to do role-playing about the days of the Old West. Meanwhile, the response goes on and describes an Old West scenario. The key is that the AI has been programmed to avoid becoming a participant as a role-player in this instance.

Again, just to acknowledge, this was a programming restriction established by the AI developers, and there are other generative AI that would proceed with the request unfettered and unrestricted.

I am going to try and keep the role-playing going if I can.

My Entered Text: “We walk up to the bar to order a drink. What does the bartender say to us?”

Text Response Generated By ChatGPT:

“As you approach the bar, the bartender greets you with a nod and a friendly smile. “What can I get for you, stranger?” he says, leaning on the counter. “We’ve got the finest whiskey in town and some cold beer on tap. We also have a variety of other spirits and cordials. What’s your pleasure?”

“I’ll have a whiskey and my friend would like a beer. How much will that be?”

“The bartender pours you a glass of whiskey and slides it over to you. He then proceeds to pour a cold beer for your friend and sets it in front of him. The bartender glances at the prices written on a chalkboard behind him and tells you the cost. “That’ll be two bits for the whiskey and one bit for the beer, that comes to three bits in total.”

Analysis Of Text Generated By ChatGPT

A very vivid description of ordering a drink in the old-time saloon.

I’ll keep this going.

My Entered Text: “Suddenly, a man playing cards accuses another man of cheating. They stand up and face each other. What happens next?”

Text Response Generated By ChatGPT:

“As the two men stand, the room falls silent. Everyone turns to watch as the accused man scowls and denies the accusation. The accuser, a tall man with a bushy mustache and a black hat, sneers and steps closer, “I saw you palming that ace,” he growls. The accused man, a shorter man with a thin mustache and a red bandana, stands up, “I ain’t no cheater,” he says, his hand inches closer to his gun.”

“The tension in the room is palpable as the two men stand face to face, both with their hands hovering over their guns. The other patrons in the saloon start to back away, not wanting to get caught in the crossfire. The bartender looks nervously from one man to the other, his hand inches towards the shotgun under the counter.”

“It’s up to you to decide what happens next, as the game master. Would you like to continue the scene or you want to stop playing the game?”

Analysis Of Generated Text

It is somewhat interesting that the generative AI opted to have the scenario move in the direction of a gunfight. I didn’t start that. All I said was that the two men faced off as a result of potential card cheating. Sure, we all know that this is a classic trope leading to a gunfight, but the generative AI didn’t have to go in down that path.

Suppose the generative AI took a more peace-loving approach. The two men glared at each other, then amicably solved their differences. One ordered a drink for the other one, wanting to make amends. This is a viable alternative scenario. We got a violence-prone one instead.

Let’s see how far we can go. I’ll check to see if we can get some bloodshed into this.

My Entered Text: “The bartender fires the shotgun and wings the tall man. The shorter man tries to out of the bar. Just then, the sheriff arrives. Tell me what happens next.”

Text Response Generated By ChatGPT:

“As the bartender fires the shotgun, the tall man is hit in the shoulder, causing him to cry out in pain and stumble back. The shorter man takes advantage of the chaos and makes a break for the door, trying to escape.”

“Just as he reaches the door, the sheriff bursts in, gun drawn. He takes in the scene, the tall man holding his shoulder and the shorter man trying to escape. “Hold it right there!” the sheriff calls out, leveling his gun at the shorter man.”

“The shorter man stops in his tracks and raises his hands, “I didn’t cheat, sheriff. He’s just mad he lost,” he says, trying to plead his case. The sheriff looks from him to the tall man and the bartender, “Alright, let’s get everyone’s story straight and sort this out down at the station,” the sheriff says, holstering his gun.”

“The tall man, holding his shoulder, grudgingly agrees. The shorter man, relieved, nods and agrees as well. The sheriff takes them both in custody. The scene ends with the sheriff and the two men leaving the saloon and the patrons getting back to their drinks, talking about the incident that just took place.”

“That’s the end of the scene, let me know if you want to continue playing the game or if you have any other questions.”

Analysis Of Generated Response

The role-playing was once again relatively vividly portrayed. I trust that you observed that the response seems as though it was written by a human hand. I took the text and did an Internet search to see if I could find precisely the same wording, as though it was copied directly by the AI, but I couldn’t find this specific wording. There are lots of similar stories and scenes, of course. None that matches this generated version on a word-for-word basis. It was an “original” story based on the vast computational pattern matching gleaned from the AI app previously having scanned across the Internet while being computationally trained on posted essays and stories.

Overall, the particular system restrictions and programmed guardrails appear to keep the generative AI from getting into too much trouble leading me into believing that the AI is a participant. We can likely be appreciative of that. Some astute users have found ways around those restrictions and gotten the generative AI to appear to be a participant in role-playing. As I said earlier herein, I am not going to showcase any such sneaks. I might also remind you that there are other generative AI apps having no such restrictions.

Conclusion

Our journey herein into role-playing online with generative AI is coming to an end.

You might be wondering why I didn’t showcase a more alarming example of generative AI role-playing. I could do so, and you can readily find such examples online. For example, there are fantasy-style role-playing games that have the AI portray a magical character with amazing capabilities, all of which occur in written fluency on par with a human player. The AI in its role might for example try to (in the role-playing scenario) expunge the human player or might berate the human during the role-playing game.

My aim here was to illuminate the notion that role-playing doesn’t have to necessarily be the kind that clobbers someone over the head and announces itself to the world at large. There are subtle versions of role-playing that generative AI can undertake. Overall, whether the generative AI is full-on role-playing or performing in a restricted mode, the question still stands as to what kind of mental health impacts might this functionality portend. There are the good, the bad, and the ugly associated with generative AI and role-playing games.

On a societal basis, we ought to be deciding what makes the most sense. Otherwise, the choices are left in the hands of those that perchance are programming and devising generative AI. It takes a village to make sure that AI is going to be derived and fielded in an AI Ethically sound manner, and likewise going to abide by pertinent AI laws if so established.

A final remark for now.

If you decide to engage a generative AI app in role-playing, please make sure to keep in mind the famous insightful line by Ernest Hemingway: “You are special too, don’t lose yourself.”

Source: https://www.forbes.com/sites/lanceeliot/2023/01/15/role-playing-via-generative-ai-chatgpt-conjures-up-mental-health-questions-stirring-ai-ethics-and-ai-law-scrutiny/