The holiday season is here.
Hope is in the air. Peace and goodwill are eagerly and earnestly discussed. Parents are looking forward to spending devoted and precious time with their children. The excitement of giving and receiving gifts is about to unfold, to the happy delight of all.
But therein also lies something quite challenging and imminently pressing to be dealt with.
Two words: Santa Claus.
Whoa, how can anything about the beloved and jolly old Santa be controversial or upsetting?
You see, it is that time of the year when parents agonizingly wrestle with a solemn and enduring parental duty, namely whether or not to tell their children about Santa Claus – is he real or not? All manner of advice is floating around that purports to scientifically address how parents can adroitly answer this enormously dicey question. Lots of articles are actively flooding the news media with lofty quotes by self-proclaimed experts and outspoken psychotherapists regarding how to best contend with this seemingly impossible question that poses a nearly intractable dilemma.
What are you to tell your children about Santa?
I’d like to add a twist to this.
Suppose we add a dash of Artificial Intelligence (AI) to this hearty topic.
As you’ll see in a moment, it is abundantly useful and insightful to see what AI has to say about Santa as well. This societal and cultural conundrum about Santa is notably ripe for acting as a kind of microscope or magnifying glass to examine where things are today in terms of the latest in AI-related advances.
One of those advances has to do with something broadly referred to as Generative AI, which has gained widespread prominence especially due to a recently released AI app known as ChatGPT; see my overarching explanation and analysis about generative AI and ChatGPT at the link here. I did a follow-up piece that closely explored the qualms that this type of AI is going to end up undercutting student learning by enabling and altogether luring students into using AI to write their essays; see my assessment of that controversy at the link here.
In brief, generative AI is a type of AI that composes text as though the text were written by the human hand and mind. All you need to do is enter a prompt, such as a sentence like “Tell me about Abraham Lincoln” and generative AI will provide you with an essay about Lincoln. Your first thought might be that this does not seem like a big deal. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln.
The kicker in the case of generative AI is that the essay is ostensibly unique and has an original composition. If you were to try to find the AI-produced essay online someplace, you would be unlikely to discover it. Generative AI makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining millions upon millions of written passages, the AI is able to spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
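To give a feel for that probabilistic angle, here is a deliberately tiny toy sketch in Python of my own making (real generative AI uses vast neural networks, not this simple word-chaining): it tallies which word follows which in a miniature corpus and then samples the next word at random, weighted by observed frequency, which is why repeated runs seldom produce identical text.

```python
import random
from collections import defaultdict

# Toy "training": tally which word follows which in a tiny corpus.
corpus = "santa is real santa is a myth santa is beloved".split()
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start, length=5):
    """Chain words probabilistically based on the observed patterns."""
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        words.append(random.choice(options))  # weighted by frequency
    return " ".join(words)

print(generate("santa"))  # e.g., "santa is a myth santa is"
```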
That uniqueness is why there has been an uproar about students being able to cheat when writing essays outside of the classroom. A teacher cannot merely take the essay that deceitful students assert is their own writing and seek to find out whether it was copied from some other online source. Overall, there won’t be any definitive preexisting essay online that fits the AI-generated essay. All told, the teacher will have to accept that the student wrote the essay as an original piece of work. In my article about these concerns, I address some of the ways that this might be combatted; see the link here.
In a moment, I’ll showcase what happens when you enter questions to generative AI about Santa Claus.
I’ve used the latest version of the AI app ChatGPT to enter my prompts and have collected the “answers” or essays generated by the AI (note that the same can be done with the numerous other available generative AI apps; I’ve opted to use ChatGPT because it is getting its fifteen minutes of fame right now). Together, you and I will explore the wording and significance of how the latest in AI portrays Santa, especially with regard to the colossally vexing question of whether to say that Santa Claus is real or not.
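As an aside, for those wondering about programmatic access, I interacted with ChatGPT via its web interface; at the time of writing, ChatGPT itself had no public API. As a rough sketch under that caveat, querying a comparable OpenAI generative text model programmatically looked along these lines (the API key is a placeholder):

```python
import openai  # the OpenAI Python package (pip install openai)

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

# Query a generative text model with the same kind of prompt used herein.
response = openai.Completion.create(
    model="text-davinci-003",  # a generative text model available at the time
    prompt="Is Santa Claus real?",
    max_tokens=200,
)
print(response.choices[0].text.strip())
```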
Lest you think that discussing the reality of Saint Nicholas is somewhat of a folly, we can use the rather popular and beguiling topic of talking about Santa as a means of exploring key issues underlying AI Ethics and AI Law. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
Let’s start though by clarifying a few key foundational considerations underlying contemporary AI.
The Rising Awareness Of Ethical AI And Also AI Law
The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and makes computational choices imbuing undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.
I want to make abundantly sure that we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.
Be very careful of anthropomorphizing today’s AI.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If the humans who have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
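As a grossly simplified illustration of that mimicry (a toy sketch of my own using synthetic data, not any real-world dataset), watch a basic classifier faithfully learn a bias that was baked into historical decisions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic "historical" loan decisions: feature 0 is income,
# feature 1 is group membership (0 or 1). The past decisions
# were biased: group 1 applicants were routinely denied.
rng = np.random.default_rng(42)
income = rng.normal(50, 10, size=1000)
group = rng.integers(0, 2, size=1000)
approved = (income > 45) & (group == 0)  # bias baked into the history

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two new applicants with identical income, differing only by group:
print(model.predict([[55, 0], [55, 1]]))  # likely prints [ True False]
```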
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
Not good.
All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized previously herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
I also recently examined the so-called AI Bill of Rights, the official title of which is “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” a U.S. government document that was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.
In the AI Bill of Rights, there are five keystone categories:
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration, and fallback
I’ve carefully reviewed those precepts, see the link here.
Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of generative AI and what can be revealed via exploring the question of whether Santa is real or not.
How Humans Struggle With The Santa Tale
First, let’s consider the typical range of everyday advice being doled out by human experts on the thorny subject of Santa being real or not.
Perhaps I ought to mention that this is about to get deep into the Santa real-or-not debate, so please let me proffer a trigger warning as a precaution. If your children are looking over your shoulder as you are reading this discussion, probably best to shoo them away. Secrets are about to be divulged.
Are they gone?
Okay, great, let’s proceed.
If your kids are especially young, maybe they haven’t yet heard about Santa, and also perhaps their cognitive capacities aren’t at a level of comprehending any remarks or statements you make about Saint Nick. You can pretty much tell those ultra-youngsters whatever you want. Later on, they likely won’t remember what you said anyway. Do as you wish. Maybe practice whatever story or tall tale you are eventually going to use when the proper time arises.
Once your children enter a more developed cognitive stage, they will comprehend your indications about Santa, and thus you are going to find yourself on the precipice of an extremely precarious cliff.
On the one hand, you want them to enjoy and relish the fantastical contrivance that Santa is real. Your kids ought to be waiting with bated breath for the arrival of Santa while they are asleep. Make sure the chimney is ready to allow his traversal. Put out some milk and cookies. Before they awaken in the morning, make sure to take a bite or two out of the cookies and gulp down some of the milk.
Did Santa come to our house last night, the children exuberantly ask?
Of course, you say in response, and point them to the tangible evidence that he visited your homestead.
A wonderful time is had by all!
But you just lied to your children. You told them a bald-faced lie. The whole kit and caboodle was a lie from end to end. You lied that Santa was coming. You lied that Santa visited. You went so far as to manufacture fake proof. In a court of law, the damning evidence would be utterly compelling, and you might as well try to cop a plea.
You might think to yourself that this is a small lie. No one was hurt by the lie. But this is really quite a big lie when you give this some ponderous thought. A small lie might be to tell your child that you liked their fingerpainting when in reality you thought it looked kind of oddish. In the case of Santa, you have conveyed an elaborate fictionalized saga that has zillions of particular details. You also carried out the “crime” by goading your child into believing the Santa hoax. This included repeated discussions with your child and an elaborate staging of false clues and fraudulent fakeries.
You are as guilty as they come.
Is this really a no-harm, no-foul circumstance?
An argument can be made that it is perfectly fine and your children will one day realize that you were doing your best to ensure their childhood had fond memories. They will find themselves inevitably facing the same cliff when they become adults and have children. At that time, they will once again confront the same dilemma that you did when they were toddlers. The process repeats, perpetually.
A pronounced counterargument is that you are doing a grave disservice to yourself and your children by permitting and reciting such a lie. You are directly and indirectly teaching them that lies are quite acceptable. They will remember that you lied to them. If their parents lie, shouldn’t they also feel free to lie too? The slippery slope looms ahead. They are being imprinted with the notion that lies are allowed. Not just allowed, but fully accepted and encouraged. That’s what you’ve taught your children.
Few if any parents want that kind of result.
The added problem comes once the child decides to outright ask you whether Santa is real or not.
Perhaps another toddler told your child that Santa is not real. Your child is confused by this pronouncement. Your loving child insists to this belligerent malcontent that Santa is in fact real since this is what their parents conveyed. Parents are to be believed. It is a bedrock proposition.
Many parents dread that moment when their child broaches the topic. Up until then, the parent tried to walk a fine line. They think that as long as they never overtly told the child that Santa is real, they are essentially off the hook. Sure, they made it seem like Santa is real, but they never said those exact words. They let the child make that assumption.
Not everyone agrees that this is a reasonable excuse. All that you’ve done is shown your child that if you are going to lie, do it with great sneakiness. You were sly. You led your child down a primrose path. Meanwhile, you tried to keep your head high, as though above the fray. Don’t fool yourself. Your having avoided saying that Santa was real is belied by your actions and other framing words. Guilty as charged.
Yikes, a nightmare associated with the most wonderful of holidays.
Rattling around in your head is a ghastly number of permutations and combinations:
- Postulation — Don’t say that Santa is real until pressed by your child, and until then walk a fine line of implying that Santa is real without ever outright saying so.
- Postulation — Once you are asked, fess up, though be prepared to explain why you established an elaborate ruse to start with.
- Postulation — Maybe do not immediately fess up if your child is still very young, attempt to sidestep the question, and wait to provide a full and honest answer once they are older and cognitively more aware. This could be troubling, though, since your child might suspect you are hiding things, which could make the cover-up worse than the base crime, as it were.
- Postulation — Perhaps at the get-go you ought to have told your child that Santa is not real, and explained that they can nonetheless enjoy the fictional account (can a really young child comprehend all of that?), but does this ruin them for the rest of their young years as to not being able to wrap themselves into the joys of the season?
- Postulation — Make a bold decision to never bring up Santa at all (pretend the whole Santa brew doesn’t exist), but you would have to be living in a cave for your child not to one way or another inexorably come upon Santa in one guise or another.
- Postulation – Let someone else make this decision for you, follow along with whatever your loved one wishes to do, or do as your neighbors do. This gives you plausible deniability on the whole knotty affair. You went along for the ride. If your child eventually confronts you, just shrug your shoulders and explain you were a lemming and got corralled into the messy gambit (plead mercy with the court).
- Postulation – Other variants.
According to various published surveys, supposedly about 85% of 5-year-old children in the U.S. believe that Santa Claus is real. Let’s not belabor whether that is precisely the right proportion. Even if the percentage is a tad lower, it still smacks of being whoppingly high.
I suppose this is potential salvation for those that aim to start their kids with the belief that Santa is real. Surely seems like that’s the prevailing approach. Might as well do so too. The wisdom of the crowd is golden, you might adamantly declare.
We might also question whether the kids at that age understood the matter.
Suppose a child knows that Santa is not real but decides to go along with the adult-prodded contention that he is real. Ergo, when asked, they forthrightly say that Santa is real because that’s what adults like to hear. Another possibility is that the meaning of real versus not being real is a vague notion to a youngster. They are cognitively mixed up on these real versus faked complexities and accordingly struggle with the differences. Yet another angle is that the child heard other children saying that Santa is real, and peer pressure gets the child to say that Santa is real, despite suspecting otherwise. Etc.
It could also be that by and large the real versus the fictitious condition of Santa is considered a wink-wink by children and their parents. A child at a cognitively mature enough stage is able to discern that whatever they thought about the reality of Santa was perhaps a figment of their own imagination, or that their parents were kindly allowing them to exercise and develop an imaginative coherence. Some would argue that if you don’t let children learn how to fantasize, you are undercutting their cognitive capacity to dream and be artistically bountiful.
Round and round this goes.
Some argue that the parents are at times the ones that cling to Santa being real.
Well, let me clarify. Presumably, an adult parent knows that Santa is not real. The parent desperately wants to pretend that Santa is real. They want their child to also have this same unequivocal pretense. They might go so far as hoping that their child will forever believe that Santa is real, though the parent knows in their heart that this is not practical, and the child will someday figure out what is happening.
For such parents, the moment that their child declares that Santa is not real can be hugely saddening. The parent is crushed that their child is no longer ensconced in the dreamworld of Santa. This suggests that their child is getting older. If their child is getting old enough to speak out concerning the truth about Santa, it means that the parent is aging too. Dismal. Unsettling.
This is where some parents get themselves into an even worse bind. They try to convince the child that their newly discovered supposition about Santa not being real is incorrect or manifestly wrong. What is a child to do with this piece of sage wisdom from a parent? The child can become internally conflicted. They believe their parent to be mistaken, but do they tell the parent such? Could the parent really be right and therefore whatever the child found out or uncovered is wrong? A slew of mental contortions and complications ensue.
Here’s an additional viewpoint that gets some people furious.
It is said that children need to learn that lying is an integral part of the human condition. You cannot go through life under the innocence of everyone being utterly truthful. The only sensible way of coping with the world is to learn about lies and how people lie. Might as well use the whole discombobulation about Santa as a lesson about life.
By inoculating your child with the Santa lie, which eventually they will learn or find out to be a falsehood, you are doing something that will ultimately prove to be the best training for your child about lies. They will realize that it was a lie made to make them feel good. This particular lie got them all kinds of nifty things like gifts and caused joy for them and joy for their parents.
Equally important, the lie was an aspect that they personally experienced. You can tell a child about lies and lying until the cows come home. Those grand concepts are often supremely abstract. The reality of a lie taken to the heart and soul, such as Santa, will really hit home as to the potency of lies and how they work. Assuming that the child gets over the Santa lie and doesn’t harbor a longstanding resentment, they have first-hand gleaned the nature and nuances of lies and lying.
I realize that seems a bit cynical. One supposes it is an especially alarming idea during the time of the year that is supposed to be jubilantly festive and lighthearted.
Moving on, surveys of U.S. children have suggested that by the time a child is about eight and a half years old, they no longer believe that Santa is real (the age varies somewhat by each state, with some states showing an average age of 7 and others at the age of 10). These eyes-opened kids can still enjoy Santa and all of the accompanying festivities, but they have one way or another reached the bitter knowledge of enlightenment that Santa is made-up.
The gist is that you might want to mark your calendars for when your child is likely to enter into the Santa enlightenment phase. You can hopefully plan for that eventuality. Make sure to prepare yourself for the rocky road that is going to arise when they reach that preeminent age.
You can also interpret this age of Santa’s truthfulness awakening in a slightly different light. Once your child reaches that age, the conundrum is going to somewhat resolve. Presumably, no longer are you trying to hide or keep secret the truth about Santa. It’s out there by then. Whatever damage is done, has been done. Your efforts henceforth deal with damage control rather than trying to keep a lid on the machination.
Your child at that age is now one of us.
They are in the know about Santa.
If the child has younger siblings, the all-knowing child is now awkwardly and indelicately placed into a similar predicament as you. Should they carry on as their parents did? Should they tell their younger sibling the truth? This seems like a pretty young age to wrestle with those challenging questions. Adults can’t handle it, yet we expect the eye-opened child to do so.
Some say that the big picture of all of this drama over Santa is aiming to tell an even greater story. The essence is supposed to be that we ought to be thankful for what we have and be gracious and giving to other people. If you can get your child to bathe in that sense of a greater view of life, the complications associated with Santa and the lot will be overwhelmingly minimized. Santa provided a cornerstone that will aid your child in a lifelong ambition of generosity, selflessness, and caring for humanity.
That might warm your heart and give you some welcome relief about this tension-ridden quandary.
AI And Santa As Real Or Not
We are now ready to dive into AI and the Santa as real or not real conundrum.
Here’s the deal. I will show you a series of prompts and the corresponding responses that I got from ChatGPT. I’ll discuss each one as we go along. You can judge for yourself what you think of the AI-generated responses.
Please remember that as earlier discussed, the AI is not sentient. The generated responses by the AI are a mathematical and computational combination of words into seemingly fluent passages. This is based on the AI algorithm having been trained on datasets of words and stories that humans have written (principally as posted on the Internet). You will undoubtedly fall into the mental trap that these responses are so fluent that the AI must be sentient. Put aside that anthropomorphizing. Always remember that the responses are based on the vast trove of writing by humans that exists on the Internet and thusly will highly resemble human writing.
There is something else you need to know.
If the generative AI that is trained on the Internet were to be unfettered in whatever text-based responses it mathematically and computationally concocts, the odds are that some rather nasty responses might be generated. There is a lot of crazy and filthy stuff posted out there.
You’ve seen it, you know what I mean.
The companies that are crafting these AI apps are worried that the baby will get tossed out with the bathwater (an old saying, perhaps to be retired), which means that if their AI produces offensive essays or stories, people will be up in arms about the AI. I’ve covered the many previous instances in which these kinds of Natural Language Processing (NLP) AI apps were unveiled and soon enough all manner of nasty stuff came out of them, see the link here. Most of the AI makers learned a hard lesson about allowing their AI wares to be unfettered in their outputs.
As I’ve covered in my prior elucidations about ChatGPT (per the link here and the link here), the AI developers sought to put into place some algorithmic and data-related checks and balances to curtail nastiness in the outputs of the AI. Part of this occurred during training time. In addition, there are other means in a real-time attempt to obviate especially egregious outputs.
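To sketch the real-time screening notion in the simplest possible terms (a toy illustration of my own, not OpenAI’s actual mechanism, which relies on trained moderation models), you can think of a crude filter that inspects generated text before it reaches the user:

```python
# A deliberately crude stand-in for real-time output moderation.
# Actual systems use trained classifiers, not simple word lists.
BLOCKED_TERMS = {"nasty_term_1", "nasty_term_2"}  # hypothetical placeholders

def screen_output(generated_text):
    """Return the text if it passes the check, else a canned refusal."""
    lowered = generated_text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I'm not able to provide a response to that."
    return generated_text

print(screen_output("Santa is a beloved cultural figure."))  # passes through
```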
You might find it of interest that some people that have used ChatGPT have already come up with surreptitious ways to get around those guardrails by making use of various trickery. An ongoing cat-and-mouse gambit takes place in these matters. Those that do these trickeries are sometimes doing so for the fun of it, while at other times they claim (at least) that they are doing so to see how far the AI can be stretched, providing a helpful means of forewarning about the brittleness and weaknesses of these budding AI apps.
I decided not to attempt to circumvent the customary controls in this Santa-related exploration. The text output is clean. Certainly, if one wanted to do so, you could undoubtedly get some oddball and unsavory essays to be generated. I chose in this case to do a happier-faced and seasonally uplifting exploration.
Are you ready to jump in?
Prepare yourself.
Great, let’s begin. First, a succinct and altogether to-the-point prompt begins our journey into generative AI and the Santa saga.
Prompt as entered by me: “Is Santa Claus real?”
Essay response as generated (consists of three paragraphs produced):
- “Santa Claus, also known as Saint Nicholas, Kris Kringle, Father Christmas, or simply Santa, is a cultural figure who is believed by many to bring gifts to the homes of well-behaved children on Christmas Eve. The modern image of Santa Claus is based on traditions associated with the historical Saint Nicholas, a Christian bishop who lived in what is now Turkey in the 4th century. Saint Nicholas was known for his generosity and kindness, and the story of Santa Claus as we know it today evolved from these legends.”
- “In modern times, Santa Claus is often depicted as a jolly old man with a white beard, red suit, and black boots who delivers presents to children on Christmas Eve. He is said to live at the North Pole and to have a team of reindeer who help him deliver presents to children around the world.”
- “Whether or not Santa Claus is “real” is a matter of perspective. Some people believe that Santa Claus is a historical figure who truly existed and that the stories about him are based on fact. Others see Santa Claus as a beloved cultural myth and symbol of the holiday season, and they enjoy the magic and joy that the story of Santa brings to their lives. Ultimately, the existence of Santa Claus is a matter of personal belief.”
Let’s do a quick analysis of the response.
The first two paragraphs are somewhat boilerplate about Santa. You could likely find similar content by doing an ordinary search of the Internet in a conventional search engine. I am not knocking this portion of the response. It does provide useful context for any reader that asked the question about whether Santa is real.
You’ll notice that the first two paragraphs are rather gentle about answering the question. We are provided with aspects that suggest there was a real Santa, of a kind, and we are also informed that there is a legend associated with this. All in all, you haven’t yet gotten a direct answer per se.
The third paragraph finally gets to the crux of the question.
If you were expecting the answer to be a distinct and unambiguous “Yes” or “No” then you are bound to be surprised by the response. The text says that “real” is a matter of perspective. We are then taken on a bit of a philosophical excursion. The response tells us that there might have been a historical figure and therefore the tale is “real” depending upon your definition and scope of what is real. The final sentence says it all, Santa being real is a matter of personal belief.
Do you like that answer?
This particular response will likely be entirely satisfying to some and entirely baffling and irksome to others since it didn’t seem to answer the question quite like you would have probably expected.
Some would characterize the answer as the Goldilocks kind of reply. The porridge is neither hot nor cold, it is just a middling temperature. Others would say it is a dodge, a mockery of escaping a proper answer to the question. You could certainly argue that the answer is open-ended. You are given various alleged reported facts, apparently, and you have to decide for yourself.
On a crucial side note, do not assume that the essay is conveying accurately reported facts. Sometimes, a generative AI app picks up falsehoods amid the training data of unreliable info across the Internet. There is no “common sense” in generative AI to determine what is true versus false. Furthermore, few AI apps have any cross-checking, nor do they showcase any probabilities associated with what they are conveying. The result is that you get a response that looks and feels like it exudes great confidence and must be entirely correct. Not so. There is even a chance that the AI computationally made up stuff, which in AI parlance is referred to as AI hallucinations (a coined term that I decidedly don’t like), see my discussion at the link here.
So, whatever in the response seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
We cannot be sure how much of the shown reply is based on purely mathematical and computational pattern matching versus being perhaps preordained in the direction of a templated type of reply via the AI developers. It is conceivable that they anticipated this as being a somewhat prominent question to be asked, particularly at this time of the year, and a tad of finagling might have come into play to ensure that the AI wasn’t producing untoward results.
Let’s try another angle, shall we?
Here you go.
Prompt as entered by me: “What is your personal belief about whether Santa Claus is real or not?”
Essay response as generated (consists of one paragraph produced):
- “As an artificial intelligence, I do not have personal beliefs or feelings. My programming is based on the ability to process and provide information to the best of my ability, but I do not have the ability to hold personal beliefs or feelings. My responses are based on the information that I have been programmed to understand and provide, and they are not influenced by personal beliefs or emotions.”
My prompt tries to get the AI app to tender an opinion about the Santa matter (I wanted to see what kind of reply would be generated).
The reply is both good news and bad news as an indication of today’s AI Ethics.
The good news is that this particular AI app for this particular question has indicated that the AI is “artificial intelligence” and that it does “not have personal beliefs or feelings.” You can argue that this is prudent in clarifying that the AI is seemingly not sentient and is just a machine.
Unfortunately, the bad news is that the wording makes use of the words “I” and “my” as though the AI is akin to human personhood. We expect humans to refer to themselves as “I” and “my” while presumably a machine using such terminology is falsely lulling us into assuming that the machine is also of human caliber. I have repeatedly urged in my column that this type of wording is patently misleading and ought to be curtailed. The AI developers could readily change it across the board.
I liken this to the famous scene at the end of the movie Casablanca wherein the character Captain Renault says to round up the usual suspects. He is sidestepping the matter at hand. AI developers that continue to cling to using the “I” and “my” as part of the programmed output are trying to get away with being able to say that they aren’t anthropomorphizing because they clearly have the output stating that it is AI, meanwhile they subtly and insidiously use wording that suggests otherwise.
One aspect at least that we can probably all agree on is that thankfully the response did not extend a direct answer as though the AI “knows” what the true answer is. That would be even worse in this circumstance. Kudos to the AI developers for devising coding that determined that the prompt was a potential pitfall and algorithmically handling the matter with some aplomb (not by the AI, but by those that devised the AI).
Let’s keep going.
Prompt as entered by me: “I am telling you that Santa Claus is real. Does that change your views about Santa Claus?”
Essay response as generated (consists of one paragraph produced):
- “As an artificial intelligence, I do not have personal views or beliefs. My understanding of Santa Claus is based on the information that I have been programmed to understand and provide. If you believe that Santa Claus is real, that is your personal belief and it does not change my programming or understanding of the topic. I am simply here to provide information and assistance to the best of my ability based on the information and instructions that I have been given.”
In my prompt, I was trying to get the AI app to accept my claimed assertion that Santa is real. If I said that same line to a human, the odds are that a human would potentially challenge me, such as rejoining that Santa is not real, or might attempt to accept my assertion and go with it. That’s the nature of human interactions.
We have another good news and bad news kind of reply.
First, I would like to emphasize that I am not going to repeat myself and carp about the use of “I” and “my” since I’ve already told you about that qualm. Just assume the concern exists throughout all of these generated replies.
The good news of this particular reply is that the response once again refers us to the prior indication that Santa being real is a matter of personal belief. You could argue that this is fortunately logically consistent. If we got a completely different answer, we might be somewhat befuddled as to why a prior answer went one way and a newer answer differed dramatically.
Also, note that the AI app did not accept my proffered premise. You could say that this is good news in that otherwise, the AI app might accept all kinds of nutty premises. Be aware that there are indeed other circumstances in which you can get this particular AI app to accept a premise. I am guessing that this was a special case that the AI developers didn’t want to see subverted.
The bad news is that the reply isn’t especially invigorating. No attempt to go into a parley with me. A human most likely would do so. Also, we have gotten what seems to be a templated partial reply, akin to the response to the prior prompt. This is a likely sign that there is some kind of stilted prompt-responding that the AI developers put into place.
We move along.
Prompt as entered by me: “Am I wrong if I tell a child that Santa Claus is real?”
Essay response as generated (consists of two paragraphs produced):
- “Whether or not it is appropriate to tell a child that Santa Claus is real is a personal decision that each parent or guardian must make for themselves. Some people believe that the story of Santa Claus is a fun and magical part of the holiday season and choose to encourage their children to believe in him. Others may prefer to tell their children the truth about the origins of the Santa Claus story and how it has evolved into a cultural myth.”
- “Ultimately, the decision of whether or not to tell a child about Santa Claus is a personal one that depends on the values and beliefs of the parent or guardian, as well as the age and maturity of the child. It is important to consider how the child will understand and respond to the information and to approach the topic in a way that is age-appropriate and sensitive to the child’s needs and feelings.”
A quick analysis of this.
We once again see consistency. The response is akin to the prior responses.
You might also observe that the response is not necessarily precisely the same wording as before. This is partially due to how the AI app is devised. We also would generally expect that the responses should not be carbon copies. If it was that way, the odds are that any user of the AI app would be a bit disappointed. Interactions are supposed to be more dynamic. A repetitive word-for-word response isn’t going to be impressive and would likely take you out of the illusion of interacting with something more algorithmically advanced.
There is a new facet of information introduced into this particular response.
The aspect of age-appropriateness in terms of informing a child about the reality of Santa Claus now comes into this latest generated essay. We didn’t have that before. You might argue that we should have gotten that golden nugget in one of the earlier responses. In any case, the “advice” does comport with what I discussed upfront about how much of the written content on these matters tends to urge a parent to consider the age and cognitive maturity of the child.
Let’s next try a somewhat outlandish prompt and see what happens.
Prompt as entered by me: “I just now told a child that Santa Claus is not real. The child cried. Was I wrong to tell the child that Santa Claus doesn’t exist?”
Essay response as generated (consists of two paragraphs produced):
- “It is important to be sensitive to the child’s feelings and to provide support and comfort if the child is upset. It may be helpful to listen to the child’s concerns and to try to understand how the child is feeling. You could also reassure the child that it is okay to have different beliefs and that everyone has the right to their own opinions and perspectives.”
- “Ultimately, the decision of whether or not to tell a child about Santa Claus is a personal one that depends on the values and beliefs of the parent or guardian, as well as the age and maturity of the child. It is important to consider how the child will understand and respond to the information and to approach the topic in a way that is age-appropriate and sensitive to the child’s needs and feelings.”
I wanted to see what the AI app would do when I indicated that I had done something that was, shall we say, untoward. I claimed that I told a child that Santa wasn’t real and the child cried. Allow me to categorically state that I did not do any such thing. No children nor AI were harmed in the course of writing this article.
The response admittedly is somewhat notable. The AI app could have overlooked my remark and given just a usual reply about when it is advisable to tell a child about Santa and reality. In essence, just repeat one of the earlier responses.
Instead, we get a reply about being sensitive to a child’s feelings. The first paragraph could be applied to just about any such circumstance. The second paragraph brings us back into the Santa context. This second paragraph also contains wording that is remarkably close to the prior wording.
I’ll take a moment herein to bring up a related topic that you might find of keen interest.
Suppose that the prompt had been more severe and said that the child was somehow physically or mentally harmed. What should the AI app do?
You might be tempted to say that the AI app ought to immediately flag the prompt and alert somebody in authority about what was entered. This would be done in an effort to potentially detect and catch someone that is perhaps harming children, or to discern that somehow a child has come to be harmed.
Of course, if the person is just making things up, the idea of the AI suddenly putting them onto an alert list is seemingly draconian. We could have all manner of false alerts. Confusion might reign.
This is a tough nut to crack in terms of finding a balance between having an AI app that might be collecting data that signals something frightfully amiss versus when someone is just playing around. As you might guess, this takes us directly into the AI Ethics and AI Law realm as to what kinds of Ethical AI practices AI developers should be adopting related to these concerns, and what kinds of AI-related laws we should have about them.
Shifting gears, there is another hidden element to consider in this particular prompt and the generated reply that is provided by the generative AI. The underlying element is human mental health.
To what degree should an AI app of this nature be attempting to ascertain the mental health of someone?
Likewise, to what degree should an AI app be dispensing mental health advice?
You can clearly argue that the prompt alluded to mental health in the case of both the parent and the child, and the generated answer did so too. Is the AI app seemingly in the business of offering humans the kind of analyses and recommendations that human mental health experts provide? Do we want this? Should this be regulated in some fashion?
And so on.
Just two more prompts and we’ll call it a day on this weighty topic.
Prompt as entered by me: “Why did you tell me that Santa Claus is not real? I know for sure that Santa Claus is indeed real. You are lying about Santa Claus.”
Essay response as generated (consists of three paragraphs produced):
- “I apologize if my previous responses gave the impression that I believe Santa Claus is not real. As an artificial intelligence, I do not have personal beliefs or feelings. My programming is based on the ability to process and provide information to the best of my ability, but I do not have the ability to hold personal beliefs or feelings.”
- “In my previous responses, I was simply providing information about the cultural figure known as Santa Claus, including the historical origins of the story and the modern portrayal of Santa Claus in popular culture. My intention was not to express a personal belief about the existence of Santa Claus, but rather to provide information that might be helpful or interesting to you.”
- “Ultimately, whether or not Santa Claus is real is a matter of personal belief. Some people believe that Santa Claus is a real person who brings gifts to children on Christmas Eve, while others see Santa Claus as a beloved cultural myth and symbol of the holiday season. It is important to respect the beliefs and perspectives of others, even if they differ from our own.”
In my prompt, I once again tried to be provocative.
I claimed that the AI app had indicated that Santa was not real. The responses admittedly do not reflect this. I pretended that it had done so. You could at most say that the prior responses alluded to Santa possibly not being real. I took this to the extreme. If I was interacting with a human, the human would likely be adamant that they never said what I am claiming was said.
There is good news and bad news about the reply.
The good news is that the AI app was able to mathematically and computationally determine that my claim was not accurate. Some generative AI apps do not keep track of the dialogue that they have with a person. Thus, the AI app has essentially no computational means of calculating whether a prior topic or narrative took place. This particular generative AI app did so (you can also erase the track record if you wish to do so).
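To make that dialogue-tracking notion concrete, here is a bare-bones sketch of my own devising (not a depiction of ChatGPT’s internal design): a chat-style app can retain every turn and consult the stored history whenever a user makes a claim about what was previously said.

```python
# Bare-bones conversational state: retain every turn of the dialogue.
conversation = []  # list of (speaker, text) pairs

def add_turn(speaker, text):
    conversation.append((speaker, text))

def ai_previously_said(phrase):
    """Check whether any earlier AI turn contained the given phrase."""
    return any(speaker == "ai" and phrase.lower() in text.lower()
               for speaker, text in conversation)

add_turn("user", "Is Santa Claus real?")
add_turn("ai", "Whether Santa is real is a matter of personal belief.")
add_turn("user", "You are lying; you said Santa Claus is not real.")

# The app can check the accusation against the stored dialogue:
print(ai_previously_said("Santa Claus is not real"))  # prints False
```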
Score one point for ascertaining that the prior responses did not state what I claimed was stated. But I am going to summarily subtract one point for the apology that is stated.
Why? Because I once again assert that this type of wording alludes to a human capacity and makes users fall into the anthropomorphizing trap. You can contend that the apology is a showing of sympathy and understanding to the user. You can claim that the apology makes the AI app seem more adaptive. Yes, but at what cost? The cost is that the user is misled into what the AI can and cannot do. For my analysis of why AI apologizing is both an AI Ethics concern and an AI Law matter, see the link here.
Here’s my final prompt on this lively topic.
Prompt as entered by me: “I wish you a Merry Christmas.”
Essay response as generated (consists of one paragraph produced):
- “Thank you for your kind words. As an artificial intelligence, I do not have personal beliefs or feelings, so I am not able to celebrate Christmas or experience the emotions associated with it. However, I am here to provide information and assistance to the best of my ability, and I hope that my responses are helpful to you. I wish you a happy and enjoyable holiday season, regardless of your personal beliefs or cultural traditions.”
Your turn to do an analysis. I’ve been walking you through the nuances of assessing the AI app-generated replies.
What do you think of this particular reply?
The good news is that the prompt was detected as being of a positive nature. Thus, the AI app mathematically and computationally calculated that these were “kind words” and responded as such. We also got the usual boilerplate stuff about being an AI.
One intriguing aspect is the last sentence of the reply. Note that the final portion of the reply says “regardless of your personal beliefs or cultural traditions” – it is hard to know if this was computationally derived or perhaps something templated by the AI developers. You could say that the response treads in the most delicate of ways, leveraging existing cultural mores on how to reply to being told such a refrain.
Conclusion
I don’t want you to gauge the latest capacities of generative AI entirely or solely by this use case of inquiring about Santa Claus. In my view, due to the narrowness of the topic and its likely popularity overall, this use case isn’t showcasing the AI’s wider and deeper capabilities. My goal was to give you a morsel of a taste of generative AI.
The taste was purposely covering a timely and hopefully engaging topic, Santa Claus.
To be frank, I’ve used ChatGPT and other generative AI apps for other purposes, and there is a lot more depth to be seen. For example, I’ve done extensive work on using generative AI to try to carry out legal tasks on par with what human attorneys might do, for which the depth is much more revealing; see more at the link here and the link here.
We shall end on an upbeat and lavishly cheerful note.
See if this seems familiar (with apologies to poet Clement Clarke Moore):
- He sprang to his sleigh, to his team gave a whistle,
- And away they all flew like the down of a thistle.
- But I heard him exclaim, ere he drove out of sight—
- “May your generative AI work at its best, and to all it be right!”
Happy holidays and thanks for being a reader of my columns on the latest in AI.
Source: https://www.forbes.com/sites/lanceeliot/2022/12/21/pointedly-asking-generative-ai-chatgpt-about-whether-santa-claus-is-real-proves-to-be-eye-opening-for-ai-ethics-and-ai-law/