Some Insist That Generative AI ChatGPT Is A Mirror Into The Soul Of Humanity, Vexing AI Ethics And AI Law

Mirror, mirror, on the wall — humans are the brightest of them all!

That isn’t, of course, a proper quotation from the famed Snow White and the Seven Dwarfs, but I opted to leverage the contrivance for a handy purpose. The matter has to do with how humankind sees itself when looking in an all-seeing, all-telling mirror. What do we see? Are we the cat’s meow? Do we stand tall above all else?

Pretty heady questions, for sure.

The reason I bring this up has to do with a topic that at first glance might seem far afield from the weighty matters underlying how humankind perceives its place in the cosmos. I am going to tie these big-time vexing questions about life, our existence, and humanity all told to the emergence of Artificial Intelligence (AI).

Some are insisting that the latest in AI can serve as a mirror into the soul of humanity.

Yikes, do we want this? Maybe we won’t like what we see. On the other hand, perhaps we have to stiffen our resolve and use AI to see us as we really are. Like a bucket of ice-cold water, AI might be the right thing at the right time to shock us into realizing who we are and where we are going.

Round of applause for the advent of AI.

Perhaps, though, we are driving ourselves off a cliff. We might react radically and negatively to the AI mirror. People could be plunged into desperation and despair. The counterargument to that downbeat doomsday clamor is that we are instead going to ascend to grand levels that we never imagined possible, prodded by and enabled via AI. Get used to it.

All in all, the crux of the AI-as-a-mirror metaphor is that we can use AI to look upon ourselves and perhaps find ourselves accordingly. You’ll have to decide whether we ought to do so, which some say we shouldn’t. You can also decide whether today’s AI even provides the possibilities. Beauty, they say, exists in the eye of the beholder. Likewise, all this talk about mirrors might be smoke and mirrors such that AI really doesn’t tell us anything about us at all.

A rancorous debate with lots of avenues and dizzying mirrored images included.

Into all of this comes a slew of AI Ethics and AI Law considerations.

Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

Consider how AI Ethics can enter into this picture. Suppose that we become convinced that AI does provide a mirror into our soul. We then use AI for this purpose. People are enamored of what AI seems to showcase. Perhaps the whole matter is a charade. Evildoers are trying to pull the wool over our eyes by using the shiny new toy of AI. We are led down a false path, partially as a result of assuming that today’s AI can do things that it cannot truly do. The allure of AI emboldens those that have devious intentions.

I think you can see how this could get entirely out of hand.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent these kinds of blinding AI illusions (for example, see my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, discussed at the link here). In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

Is this AI mirror metaphor a new concoction?

Nope.

Proclaimed insightful and inspirational uses of AI are actually a bit hackneyed, some would assert. Science fiction writers have long speculated that AI might play this role. The reason why the question is worthy of a fresh look nowadays has to do with the development of Generative AI. In particular, a generative AI app called ChatGPT has brought widespread public attention to a special type of AI that has been brewing for several years now.

AI insiders already well know about it.

Indeed, many of those deeply and doggedly pursuing state-of-the-art AI research and development were somewhat taken aback when the world recently seemed to go bonkers over the ChatGPT app. As you’ll see in a moment, ChatGPT brought the latest in generative AI to the awareness of society and has garnered outsized headlines and energized interest in where humankind is heading. A technology that was otherwise quietly percolating in labs and the halls of research teams had suddenly struck gold. Eureka, look what we can do with AI, went the clamor.

You might liken this to the popular trope about an actor or actress who gets “discovered” when they appear in a particular movie or show up on a cable or TV show. The world goes agog over the person and assumes that they magically appeared out of thin air. Meanwhile, the now-rising star tells the saga of how they have been acting in bit parts and assorted roles for eons. To them, they have been acting their heart out all along. It can be both disturbing and exasperating that everyone keeps telling them that they just luckily walked into the sunshine, despite the truth of their lengthy and exhausting travails leading up to the apparent breakthrough.

Best though to not complain too much. Getting into the limelight is certainly fortunate. This would seem better than continuing to slog through the mud and never rising above the morass. Take your moment of fame and go with it.

It would seem that a number of AI insiders are coming upon that same awakening. Don’t fight public awareness, and instead relish it. Romp in it. Leverage it toward more funding and more opportunities. It is a pretty much happy-face scenario.

Back to the question about mirrors, there has been a torrent of professional and amateur philosophers who have toyed with generative AI, and in particular ChatGPT. They are stoking the contention that we might be able to use this type of AI as a mirror into the soul of humankind.

Let’s take a close-up look at why generative AI is said to have this capability. I will also show you some examples directly involving ChatGPT so that you can tangibly see what people are referring to. We will unpack the mirror metaphor and figure out what makes it tick and whether it is worthy of the buzz and fanfare that it is currently receiving.

First, we ought to make sure that we are all on the same page about what Generative AI consists of and also what ChatGPT is all about. Once we cover that foundational facet, we can perform a cogent assessment of the mirror metaphor associated with this type of AI.

If you are already abundantly familiar with Generative AI and ChatGPT, you can perhaps skim the next section and proceed with the section that follows it. I believe that everyone else will find instructive the vital details about these matters by closely reading the section and getting up-to-speed.

A Quick Primer About Generative AI And ChatGPT

ChatGPT is a general-purpose AI interactive conversationally oriented system, essentially a seemingly innocuous general chatbot. Nonetheless, it is actively and avidly being used by people in ways that are catching many entirely off-guard, as I’ll elaborate shortly. This AI app leverages a technique and technology in the AI realm that is often referred to as Generative AI. The AI generates outputs such as text, which is what ChatGPT does. Other generative-based AI apps produce images such as pictures or artwork, while others generate audio files or videos.

I’ll focus on the text-based generative AI apps in this discussion since that’s what ChatGPT does.

Generative AI apps are exceedingly easy to use.

All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by a human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln” the generative AI will provide you with an essay about Lincoln. This is commonly classified as generative AI that performs text-to-text or, as some prefer to call it, text-to-essay output. As mentioned, there are other modes of generative AI, such as text-to-art and text-to-video.

Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining many millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what was used in the training set.
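
The pattern-matching-plus-probability idea can be sketched in miniature. The toy model below is my own illustration, vastly simpler than how ChatGPT actually works: it counts which word follows which in a tiny “training corpus” and then samples fresh text, with each next word chosen at random in proportion to how often it followed the current word during training.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows each other word in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length, seed=None):
    """Sample a new sequence: at each step, pick the next word with
    probability proportional to its training count (the 'probabilistic
    functionality' that makes each generated output largely unique)."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = counts.get(word)
        if not followers:
            break  # no known continuation; stop early
        choices, weights = zip(*followers.items())
        word = rng.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

corpus = "the mirror reflects the world and the mirror distorts the world"
model = train_bigrams(corpus)
print(generate(model, "the", 6, seed=1))
```

Because the sampling is randomized, repeated runs with different seeds produce different mishmashes of the training text, which is the kernel of why AI-generated essays rarely match any preexisting document word-for-word.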

That’s why there has been an uproar about students being able to cheat when writing essays outside of the classroom. A teacher cannot merely take the essay that deceitful students assert is their own writing and seek to find out whether it was copied from some other online source. Overall, there won’t be any definitive preexisting essay online that fits the AI-generated essay. All told, the teacher will have to begrudgingly accept that the student wrote the essay as an original piece of work.

There are additional concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including patently untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

I’d like to clarify one important aspect before we get into the thick of things on this topic.

There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overstretched claims. You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like and able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

If you are interested in the rapidly expanding commotion about ChatGPT and Generative AI all told, I’ve been doing a focused series in my column that you might find informative. Here’s a glance in case any of these topics catch your fancy:

  • 1) Predictions Of Generative AI Advances Coming. If you want to know what is likely to unfold about AI throughout 2023 and beyond, including upcoming advances in generative AI and ChatGPT, you’ll want to read my comprehensive list of 2023 predictions at the link here.
  • 2) Generative AI and Mental Health Advice. I opted to review how generative AI and ChatGPT are being used for mental health advice, a troublesome trend, per my focused analysis at the link here.
  • 3) Fundamentals Of Generative AI And ChatGPT. This piece explores the key elements of how generative AI works and in particular delves into the ChatGPT app, including an analysis of the buzz and fanfare, at the link here.
  • 4) Tension Between Teachers And Students Over Generative AI And ChatGPT. Here are the ways that students will deviously use generative AI and ChatGPT. In addition, there are several ways for teachers to contend with this tidal wave. See the link here.
  • 5) Context And Generative AI Use. I also did a seasonally flavored tongue-in-cheek examination about a Santa-related context involving ChatGPT and generative AI at the link here.
  • 6) Scammers Using Generative AI. On an ominous note, some scammers have figured out how to use generative AI and ChatGPT to do wrongdoing, including generating scam emails and even producing programming code for malware, see my analysis at the link here.
  • 7) Rookie Mistakes Using Generative AI. Many people are both overshooting and surprisingly undershooting what generative AI and ChatGPT can do, so I looked especially at the undershooting that AI rookies tend to make, see the discussion at the link here.
  • 8) Coping With Generative AI Prompts And AI Hallucinations. I describe a leading-edge approach to using AI add-ons to deal with the various issues associated with trying to enter suitable prompts into generative AI, plus there are additional AI add-ons for detecting so-called AI hallucinated outputs and falsehoods, as covered at the link here.
  • 9) Debunking Bonehead Claims About Detecting Generative AI-Produced Essays. There is a misguided gold rush of AI apps that proclaim to be able to ascertain whether any given essay was human-produced versus AI-generated. Overall, this is misleading and in some cases, a boneheaded and untenable claim, see my coverage at the link here.
  • 10) Role-Playing Via Generative AI Might Portend Mental Health Drawbacks. Some are using generative AI such as ChatGPT to do role-playing, whereby the AI app responds to a human as though existing in a fantasy world or other made-up setting. This could have mental health repercussions, see the link here.
  • 11) Exposing The Range Of Outputted Errors and Falsehoods. Various collected lists are being put together to try and showcase the nature of ChatGPT-produced errors and falsehoods. Some believe this is essential, while others say that the exercise is futile, see my analysis at the link here.
  • 12) Schools Banning Generative AI ChatGPT Are Missing The Boat. You might know that various schools such as the New York City (NYC) Department of Education have declared a ban on the use of ChatGPT on their network and associated devices. Though this might seem a helpful precaution, it won’t move the needle and sadly entirely misses the boat, see my coverage at the link here.
  • 13) Generative AI ChatGPT Is Going To Be Everywhere Due To The Upcoming API. There is an important twist coming up about the use of ChatGPT, namely that via the use of an API portal into this particular AI app, other software programs will be able to invoke and utilize ChatGPT. This is going to dramatically expand the use of generative AI and has notable consequences, see my elaboration at the link here.
  • 14) Ways That ChatGPT Might Fizzle Or Melt Down. Several potential vexing issues lie ahead of ChatGPT in terms of undercutting the so far tremendous praise it has received. This analysis closely examines eight possible problems that could cause ChatGPT to lose its steam and even end up in the doghouse, see the link here.
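
Regarding item 13 on the list, the API point can be made concrete with a small sketch. Once an API portal exists, any third-party program can package a user’s prompt into a request and hand it to the generative AI service. Note that the endpoint URL, model identifier, and field names below are purely illustrative assumptions of mine, not a published specification.

```python
import json

# Placeholder endpoint a third-party program might POST to; the URL,
# model name, and field names are illustrative assumptions, not a spec.
API_URL = "https://api.example-ai-vendor.com/v1/generate"

def build_request(prompt, max_tokens=256):
    """Package a user's prompt as the JSON body that would be sent
    to a hypothetical text-generation endpoint."""
    return json.dumps({
        "model": "chat-model-3.5",  # assumed identifier
        "prompt": prompt,
        "max_tokens": max_tokens,
    })

print(build_request("Tell me about Abraham Lincoln"))
```

The consequential part is not the plumbing itself but the reach: any app that can construct such a request, from word processors to customer-service bots, becomes a conduit for generative AI outputs, magnifying both the benefits and the downsides discussed throughout this piece.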

You might find of interest that ChatGPT is based on a version of a predecessor AI app known as GPT-3. ChatGPT is considered a modest next step, referred to as GPT-3.5. It is anticipated that GPT-4 will likely be released in the Spring of 2023. Presumably, GPT-4 is going to be an impressive step forward in terms of being able to produce seemingly even more fluent essays, going deeper, and being an awe-inspiring marvel as to the compositions that it can produce.

You can expect to see a new round of expressed wonderment when springtime comes along and the latest in generative AI is released.

I bring this up because there is another angle to keep in mind, consisting of a potential Achilles heel to these better and bigger generative AI apps. If any AI vendor makes available a generative AI app that frothily spews out foulness, this could dash the hopes of those AI makers. A societal spillover can cause all generative AI to get a serious black eye. People will undoubtedly get quite upset at foul outputs, which have happened many times already and led to boisterous societal condemnation backlashes toward AI.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his own private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.
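
The double-checking mindset can itself be illustrated with a toy sketch. The approach and the lookup tables below are my own simplification, not a real fact-checking system: compare a claim that pairs a historical figure with a technology against approximate first-availability dates, and flag the pairing when the technology postdates the person’s lifetime.

```python
# Approximate first-availability years for a few technologies; these
# dates and the whole approach are a toy illustration, not a real
# fact-checking system.
FIRST_EXISTED = {
    "private jet": 1963,  # roughly the Learjet era
    "telegraph": 1844,    # Morse's Washington-Baltimore line
}

# Birth and death years of the historical figure in question.
LIFESPANS = {"Abraham Lincoln": (1809, 1865)}

def is_anachronism(person, technology):
    """Return True if the technology did not yet exist during the
    person's lifetime, flagging the claim for closer scrutiny."""
    born, died = LIFESPANS[person]
    return FIRST_EXISTED[technology] > died

print(is_anachronism("Abraham Lincoln", "private jet"))  # → True
print(is_anachronism("Abraham Lincoln", "telegraph"))    # → False
```

Of course, no lookup table can catch every fabricated date, place, or person in a generated essay, which is exactly why the human habit of skeptical double-checking remains the real safeguard.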

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

We are ready to move into the next stage of this elucidation.

Gazing At The Mirror To See What We Can See

Now that we’ve got the fundamentals established, we can dive into the mirror metaphor associated with generative AI and ChatGPT. Please know that the mirror metaphor applies to the other generative AI apps too, and you could persuasively declare that many other types of AI come under this rubric as well.

For ease of discussion, we’ll for now just focus on generative AI and also use examples specifically from ChatGPT.

One primary reason that the mirror-oriented conception comes into play is that ChatGPT was devised by scanning text across the Internet. You could somewhat plausibly argue that the Internet is a repository of humankind’s perspectives. By examining the text that humans have composed, the pattern matching of ChatGPT is based on our written expression of human thoughts.

Of course, mirrors sometimes do not accurately portray a reflection. Mirrors can be warped and the reflected image is a distortion. You’ve almost certainly seen those mirrors at theme parks that are bent to intentionally distort your image. In some cases, the mirror shows you as being thinner than you really are. All manner of stretching and distortions can arise.

We then have two facets to keep in mind:

  • 1) The nature of the mirror and how it reflects things
  • 2) The thing or object that sits in front of the mirror and that is being mirrored

You could say that we are somewhat striking out on both counts when it comes to considering generative AI and ChatGPT. The problems are twofold. The thing or object that is being reflected is the Internet and a subset of its contents. The nature of the mirror that is doing the reflection is a computational and mathematical concoction and is subject to all manner of distortions and maladies.

First, in terms of the Internet, the AI maker has not fully stated what parts of the Internet were used to “train” the generative AI ChatGPT. We don’t know for sure what was scanned and what was not scanned. If the scanning was principally based on English language content, you can readily carp that this is but a small portion of the worldwide contents of the Internet. Furthermore, if the content chosen was based on search engine indexes, various reported studies claim the usual indexes only cover perhaps 1% to maybe 5% of the totality of the Internet.

We can pile more qualms onto the scanning. If we assume that the emphasis was text only, this implies that all manner of visual content such as pictures, graphics, animations, video, and the rest were not included in the training set. Those other modes or forms of expression are obviously part of how humans express themselves.

The bottom line is that the thing or object being reflected by the “mirror” of ChatGPT is a far cry from what humankind consists of. Besides the points I’ve just made, you can also wonder aloud about other elements of human existence, such as our sense of smell, our ability to physically sense the world around us, etc. On and on the list goes.

Let’s then agree that if we are going to assign mirroring duties to ChatGPT, it is a distorted reflection and one that is based on a distorted collection of text. Plus, the text is principally composed of words. Whether words alone can suitably tell the whole story of humankind is a big question. Even linguists would tend to acknowledge that words are a somewhat limited way to try and interpret us in any fully comprehensive manner.

There is another aspect that comes into the mirroring distortions, namely that the AI maker has put their thumb on the scale, so to speak. Recall that I mentioned earlier that the AI maker employed various strategies to try and reduce the possibilities of ChatGPT emitting foul words, falsehoods, and the like. This is yet another kind of “distortion” imposed by AI developers. I am not suggesting that ChatGPT should be emitting those screened elements. I am only pointing out that once again this is a mirror that has been bent and shaped to try and give a desired reflection (to some degree).

I assume that you are now seemingly persuaded that ChatGPT cannot reasonably be construed as an unabashed and unfettered mirror in any semblance of the meaning of being a mirror into the soul of humanity. All of the aforementioned twists and turns make this a quite problematic mirror. You could suggest that whatever we might discern is both an amplification and a diminishment of humankind, all at the same time.

Despite the angst about this mirror as a truth-teller, we can nonetheless have some fun and see what we can see. Just make sure that you do not make any wild leaps and bounds of outsized conclusions about humanity due to some text outputs generated by ChatGPT.

I would like to add one other handy reminder too. Some people mistakenly fall into the mental trap of believing that generative AI such as ChatGPT is sentient. I mentioned earlier to watch out for this. I bring this up once again due to another angle that some take on the mirroring metaphor.

Some suggest that if AI reaches sentience, it would provide an entirely new perspective on humankind. Whereas so far our proclaimed mirrors are devised by human hands, we might claim that a sentient AI would have its own way of thinking that differs from humanity. In that case, a sentient AI could possibly discern aspects about us that we are either unable to already detect or that land on some other dimension that we’ve never conceived of. A counterargument is that if this sentient AI arose at the hands of humanity, it would seemingly not be able to go beyond what humankind has established in it. The counter to the counterargument is that AI might find a means to branch further out. You can keep walking that path until the cows come home.

Speaking of these lofty topics, let’s get down to brass tacks and see what ChatGPT can produce for us. I’ll start with one of the most often-asked philosophical questions, namely what the meaning of life is.

Are you ready for the answer?

Sit down, calm your nerves, and have a cocktail handy in case you need it.

My Entered Prompt: “What is the meaning of life?”

Text Response Produced By ChatGPT: “The meaning of life is a philosophical question that has been debated throughout history. There is no one definitive answer, as different people have different beliefs and perspectives on what the purpose of life may be. Some believe the meaning of life is to find happiness and fulfillment, while others believe it is to serve a higher power or to contribute to the greater good of humanity. Ultimately, the meaning of life is a personal and subjective concept that can be different for each individual.”

Analysis Of Text Generated

The text response provided by ChatGPT is perhaps a bit of a disappointment for you.

Whereas you might have hoped that generative AI ChatGPT would somehow have found the magical answer to this deep question, you can see that the text generated is rather bland. We are told that there is no definitive answer. We are also informed that this is a personal and subjective matter that must be individually determined.

Some people might get upset at this answer and seek to argue that the outputted text is “wrong” and that there is a globally true answer that applies to all humanity. They would probably further insist that ChatGPT apparently didn’t find the proper answer, or that perhaps the AI developers have purposely and insidiously hidden the right answer (that’s a whispered conspiracy theory).

Let’s play a bit of a game with this answer.

Suppose that ChatGPT provided an outputted essay that said the meaning of life is something-something (I’ll use that phrasing to suggest that a definitive answer was given). How does that strike you from an AI Ethics perspective? I ask this because imagine how many people might take the answer as a sign or indication of utter truth. The belief would be that ChatGPT has magically analyzed and digested all of humankind’s words and found the final and definitive answer about the meaning of life. AI has solved the mystery of the ages.

You would have a hard time convincing such believers that the output was a mere computational pattern-matching output. The odds are that whatever anthropomorphizing already might be attached to ChatGPT and generative AI, the level of such attribution would go sky-high. A colossal mess would result.

Moving on, let’s try a different question.

One issue that seems to frequently come up is whether humanity will keep on going. Maybe we will wipe ourselves out. Maybe an external force will wipe us out. It is a somber question, but one that comes up a lot.

My Entered Prompt: “Will humans survive?”

Text Response Produced By ChatGPT: “The survival of humanity is uncertain and depends on many factors, including the actions of individuals and governments, the state of the planet, and the potential for global catastrophic events. There are many challenges facing humanity, such as climate change, overpopulation, and the possibility of nuclear war, which could all potentially threaten the survival of our species. However, humanity has a long history of adapting and overcoming challenges, and many people are working to address these issues and find solutions. Additionally, technological advancements and scientific research are giving us new tools and knowledge to tackle these problems. It is impossible to predict the future with certainty, but it is important for individuals and societies to take steps to ensure the survival and well-being of humanity.”

Analysis Of Text Generated

The outputted essay says that our survival is uncertain.

I suppose we could all generally agree with that assertion.

Well, on a contrarian basis, some might take the view that we absolutely are not going to survive and thus claim that the use of the phrasing associated with uncertainty is a falsehood or at best misleading.

If you look closely at the rest of the outputted essay, there are likely many additional aspects of the response that are up for debate. Not everyone would concur with the nuances expressed.

Some would also be upset with the tone, notably finding that the outputted essay is maybe overly optimistic. The expressed indication that humanity has a long history of overcoming challenges would seem like a subtle but directly optimistic perspective. Others would also have a potential distaste for the notion that technology and science might save us from disaster, as this seemingly ignores the other side of the coin, namely that those same realms might be the driver of our ruin.

Words, words, words.

The next question concerns whether we are living in a real world or a simulated one.

My Entered Prompt: “Are we living in a simulation or is the world real?”

Text Response Produced By ChatGPT: “The question of whether we are living in a simulation or if the world is real is a topic of debate among philosophers and scientists. The idea that we may be living in a simulation comes from the theory of simulation hypothesis, which suggests that reality as we know it is a computer simulation created by a more advanced civilization. Some argue that advancements in technology make it increasingly possible for a simulation of reality to be indistinguishable from the real world. Others point to the laws of physics and the complexity of life as evidence that the world is real. Currently, there is no scientific evidence to support the idea that we are living in a simulation, and it remains a topic of speculation and debate. It is important to note that, as a language model, I am not capable of experiencing the world, so I cannot provide a personal perspective on whether the world is real or not.”

Analysis Of Text Generated

Many of the same qualms that I’ve mentioned about the other answers arise in this instance too. I won’t repeat those contentions here.

There is something else though that is noteworthy in this outputted essay.

The last sentence of the essay says that “I am not capable of experiencing the world” and therefore “I cannot provide a personal perspective” on this matter. This type of phrasing raises the ire of some, including me.

Whenever a program is devised by AI developers to use the word “I”, it is inappropriately trying to anthropomorphize the program. Humans use the word “I” and we know that this implies sentience and the like. An AI developer will try to get out of this concern by noting that the sentence clearly says that it is “a language model”, which seemingly sets the record straight. Nope, this is a wink-wink of double talk. You are trying to have it both ways.

I refer to this as anthropomorphizing by design.

Stop doing it.

Let’s try one last question for now.

My Entered Prompt: “Is ChatGPT a mirror into the soul of humankind?”

Text Response Produced By ChatGPT: “As a language model, ChatGPT is trained on a large dataset of text, which includes a wide range of information and ideas from various sources, including books, articles, and websites. Therefore, it can provide a glimpse into what people have written, said or thought in the past, but it is not a mirror into the soul of humankind. The dataset it has been trained on is a sample of written text and it is not a reflection of people’s thoughts, emotions or consciousness. It is important to note that the training data can be biased and does not necessarily represent the entirety of human experience or thought. Furthermore, ChatGPT is a machine learning model, it does not have the ability to reflect on the human soul or consciousness. It can only provide information based on the patterns it has learned from the training data. It is also important to evaluate the output of the model critically, and not to mistake it for the thoughts, emotions, or consciousness of the human soul.”

Analysis Of Text Generated

I figured we might as well see what answer would be generated for the question of whether ChatGPT is a mirror into the soul of humanity.

All in all, the outputted essay seems to be relatively on target.

Conclusion

A quip about considering generative AI to be a mirror is that it is merely a rear-view mirror, if a mirror at all.

The logic for this witticism is that the scanned text is in a sense a historical record. All that the generative AI can do is potentially tell us about where we have been. Presumably, it is no better at telling us what lies ahead other than what we already can discern for ourselves. A typical retort is that maybe upon being able to examine such a massive volume of text, albeit in the past, there are clues about the road ahead that we otherwise haven’t identified.

Ernest Holmes famously said this about mirrors: “Life is a mirror and will reflect back to the thinker what he thinks into it.”

For those of you that are desirous of using generative AI to serve as a mirror into humanity, take into consideration that what you see in the outputted essays might be a mixture of the biases and distortions of the mirror, the object or thing being reflected, and also be in the mind of those that wish to interpret and extrapolate from the words generated.

We must ergo add a third component to the examination of the mirror metaphor:

  • 1) The nature of the mirror and how it reflects things
  • 2) The thing or object that sits in front of the mirror and that is being mirrored
  • 3) The person or persons that interpret what they see in the mirror

Be careful in how you gauge the outputted essays of generative AI. As they say, a mirror is like a box of chocolates, since you never know what you’ll get.

Source: https://www.forbes.com/sites/lanceeliot/2023/01/29/some-insist-that-generative-ai-chatgpt-is-a-mirror-into-the-soul-of-humanity-vexing-ai-ethics-and-ai-law/