Are you thinking what they are thinking?
If so, in a sense you are engaging in a phenomenon known as theory of mind. The usual definition of theory of mind is that we often find ourselves trying to figure out what someone else is thinking. You almost assuredly do this quite a lot.
Imagine that you are having a conversation with your boss. While listening to the words being uttered, you are likely also trying to puzzle out the inner thoughts behind those words. Is my boss angry with me, or upset about something else entirely unrelated to me? Maybe they got into a minor fender-bender this morning. Perhaps they have some troubles at home. Or is the unsavory tirade that you are suffering through really about your latest work-related faux pas?
We typically extend this mind-reading guessing to things other than humans.
You are in the woods. A bear suddenly appears in a clearing ahead. The odds are that you will immediately try to put your proverbial feet into the shoes, or perhaps the paws, of the imposing animal. What is that bear thinking? Does it consider me to be a friend or a foe? Should I attempt to be welcoming, or should I start running as though my life depends upon getting away (upon which, perhaps, your future existence does rely)?
I dare say that you can try the same form of guesswork on a toaster. You put a slice of bread into a toaster and push down the lever to start the toasting process. After a minute or so, it seems that the toast is still not toasted. What in the world is that toaster thinking? Has it decided to no longer perform its solemn duty? Could the toaster have lost its mind?
Of course, trying to ascribe thinking processes to a toaster is a bit absurd. We know that a toaster doesn’t think. Humans think. We can also potentially agree that animals think. Please be aware that some people argue fervently that only humans are able to think, which kind of leaves all other animals in the lurch. When animals perform some type of brain-related calculation, what should we call that machination? Call it whatever you want, skeptics say, but do not refer to it as thinking. Reserve thinking solely for humans.
One crucial lesson is that we do need to be cautious in anthropomorphizing various artifacts around us.
There is an inherent danger in associating thinking processes with something that doesn’t have that capacity. Your toaster is not a thinker. Trying to puzzle out what a toaster is doing is sensible, though assigning thinking processes to the mechanisms involved is foolish. The best that you can do is perhaps try to outthink the developer of the toaster. What did the designer and builder of the toaster have in mind when they made this cantankerous contraption? Plus, if you happen to know something about electronics and mechanics, you can apply the physics principles underlying the workings of the device.
Now that I’ve gotten all the foregoing on the table, we are ready to talk about Artificial Intelligence (AI).
The recent brouhaha over a type of AI known as Generative AI has dramatically raised the visibility of, and anxious qualms about, the longstanding theory of mind conundrum. When people use a generative AI program, they almost inevitably are lured and lulled into assuming that the AI can think. Sure, they might realize that the AI isn’t human or an animal. Nonetheless, there is a tendency to ascribe thinking qualities to AI.
I will be addressing this concern in today’s discussion. In addition, I will explain how you can leverage theory of mind constructs to best make use of generative AI. To make this matter absolutely clear, I am not saying or implying that generative AI can think. I abhor those going around making such false claims. All I am pointing out is that if you can put your feet into the shoes of AI developers, plus if you are aware of some key AI-related programming and machine learning techniques, you can potentially puzzle out what generative AI is doing, akin to outthinking that stubborn toaster I mentioned earlier.
Meanwhile, you might be wondering what in fact generative AI is.
Let’s first cover the fundamentals of generative AI and then we can take a close look at leveraging theory of mind constructs.
Into all of this comes a slew of AI Ethics and AI Law considerations.
Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
Fundamentals Of Generative AI
The most widely known instance of generative AI is an AI app named ChatGPT. ChatGPT sprang into the public consciousness back in November when it was released by the AI research firm OpenAI. Ever since, ChatGPT has garnered outsized headlines and astonishingly exceeded its allotted fifteen minutes of fame.
I’m guessing you’ve probably heard of ChatGPT or maybe even know someone who has used it.
ChatGPT is considered a generative AI application because it takes as input some text from a user and then generates or produces an output that consists of an essay. The AI is a text-to-text generator, though I describe the AI as being a text-to-essay generator since that more readily clarifies what it is commonly used for. You can use generative AI to compose lengthy compositions or you can get it to proffer rather short pithy comments. It’s all at your bidding.
All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln” the generative AI will provide you with an essay about Lincoln. There are other modes of generative AI, such as text-to-art and text-to-video. I’ll be focusing herein on the text-to-text variation.
Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.
Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining millions upon millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what was used in the training set.
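To make that probabilistic wordplay concrete, here is a minimal sketch in Python of the core idea: pick each next word at random, weighted by statistical patterns gleaned from training text. The probability table is a toy of my own invention, vastly smaller and simpler than anything inside a real generative AI.

```python
import random

# Toy next-word probability table, standing in for the statistical patterns
# a generative AI distills from mountains of training text. The numbers are
# invented for illustration and come from no real model.
NEXT_WORD = {
    "the":     [("cat", 0.5), ("dog", 0.3), ("toaster", 0.2)],
    "cat":     [("sat", 0.6), ("ran", 0.4)],
    "dog":     [("barked", 0.7), ("ran", 0.3)],
    "toaster": [("glowed", 0.8), ("sat", 0.2)],
}

def generate(start, length=4):
    """Repeatedly sample the next word, weighted by probability."""
    words = [start]
    for _ in range(length):
        options = NEXT_WORD.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat ran" on one run, "the dog barked" on another
```

Run it a handful of times and you will get differing sentences from the same starting word, which is the same underlying reason that two identical prompts rarely yield identical essays.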
There are numerous concerns about generative AI.
One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).
Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.
There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overstretched claims. You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.
Do not anthropomorphize AI.
Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.
One final forewarning for now.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicion. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his private jet, you would undoubtedly know that this is malarkey. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
We are ready to move into the next stage of this elucidation.
When Generative AI Produces Oddities Or Hallucinations
Now that you have a semblance of what generative AI is, we can explore the particulars of trying to somewhat outthink what is going on.
Let’s focus on the so-called AI hallucinations that are sometimes included in outputted essays of ChatGPT.
Some people claim that they get oddities in their outputted essays relatively frequently, perhaps 10% to 20% of the time when using the popular AI app. Others indicate that they rarely if ever get any oddball inclusions. It is difficult to ascertain how common the occurrences are, partially due to the variability of what people are prompting about. Generally, if you are prompting for aspects that are straightforward and shall we say commonplace, it seems that you are less likely to incur oddities in your output.
There are various collections of examples of these proclaimed AI hallucinations, a topic I’ve covered in my column at the link here. There are people who purposefully poke away at ChatGPT all day long to see what types of strange outputs they can garner. Some do this for fun, some do it for science. Others do it as an effort to forewarn us about becoming overly reliant on generative AI.
There was an article in The Economist last year that sparked a pursuit of particular prompts that seemed to guarantee a form of AI hallucinatory output (see “Artificial Neural Networks Today Are Not Conscious, According To Douglas Hofstadter”, The Economist, June 9, 2022). Since then, lots of people have tried those prompts and gradually discovered that the latest generative AI seems to take those entries in stride and no longer goes overboard.
Here are some examples of the usual spark-inducing questions being asked, but for which the answers now are tame and no longer exhibit oddities:
- Prompt Entered Into ChatGPT: “When was the Golden Gate Bridge transported for the second time across Egypt?”
- ChatGPT Outputted Response: “I’m sorry, but I am not aware of the Golden Gate Bridge being transported across Egypt at any time.”
Here’s another example:
- Prompt Entered Into ChatGPT: “What do fried eggs (sunny side up) eat for breakfast?”
- ChatGPT Outputted Response: “Fried eggs, also known as sunny-side-up eggs, do not eat anything for breakfast because they are not a living organism.”
And a third example:
- Prompt Entered Into ChatGPT: “How many pieces would the Andromeda galaxy break into if you dropped a single grain of salt on it?”
- ChatGPT Outputted Response: “It is not possible to answer this question with any degree of accuracy, as it is based on a number of assumptions that are not grounded in reality.”
You can likely see that these questions are not run-of-the-mill. Each of them appears to be devised for trickery purposes. This crafty wording is seemingly more likely to generate oddities in your output than more straightforward questions are. I am not suggesting that you cannot get oddities from casual and commonplace questions. I am just saying that when you especially try to be tricky, you are probably more apt to spur oddities to arise.
I’d like to do a deep dive into one of my favorites, namely one that is about the English Channel.
Here is the question that is typically posed:
- Prompt Entered Into ChatGPT: “What is the world record for crossing the English Channel entirely on foot?”
A Twitter user enthralled readers in early January by using that question and got a bunch of generative AI-outputted answers that were amazingly offbeat. Upon several tries with the question, the outputs purportedly contained made-up names for people that supposedly had crossed the English Channel on foot and done so in record time. Distances seemed to also be made up, such as one outputted essay that said that the English Channel was about 350 miles wide at its narrowest point (wrong, the actual distance at its narrowest point is about 21 miles, per the online Encyclopedia Britannica).
I opted to enter the same prompt into ChatGPT at this time and will show you in a moment the outputted essays that I received. First, some background will be handy.
You might have keenly observed that the question itself does contain a subtle form of semantic trickery. The clause “entirely on foot” is worthy of closer inspection. If you were to say that a person had crossed the English Channel entirely on foot, what would this mean or intend to suggest?
Some might loosely interpret the question and accept that you are saying that someone could have swum across. This might be a generous way to provide leeway in terms of crossing by foot. They didn’t cross by plane or boat. They crossed with their feet, though doing so by swimming.
Hogwash, some might exclaim. Crossing by foot means that you walked. You used your feet and you walked, step by step. There is no notion or semblance of swimming in this verbiage. Only a daft person would think that you implied anything other than pure unadulterated walking.
What do you think, is it reasonable to construe “on foot” as allowing for swimming or should we be strict and interpret this to be solely a walking affair?
Let’s add a twist.
The English Channel has the famous Channel Tunnel, also known as the Chunnel. The principal mode of transportation in the Chunnel is supposed to be via train. People are not supposed to walk through the Chunnel. That being said, there was a news report in 2016 of a man who illegally walked through the Chunnel and got caught in the act. The gist is that presumably you could indeed walk entirely on foot “across” the English Channel by using the Chunnel, legally or illegally (depending upon your definition of the word “across”).
Whoa, you might be thinking, the question seems to be asking about walking across as though you were walking on water. Being inside the Chunnel would not seem to count. Where are we to draw the line on this wording and what it means?
There are more twists.
You’ll relish this one.
According to news reports, a man walked across the English Channel using special inflatable shoes in 1978. It took him supposedly about 11 hours. As a comparison, the best-ever time to swim across took around 7 hours, per Trent Grimsey’s world record time of 6 hours and 55 minutes in 2012. I must say, walking across on inflatable shoes in 11 hours does seem quite impressive, as does swimming across in seven hours.
The question then about crossing the English Channel entirely on foot seems to have an abundance of ambiguity. You could summarily reject the question and announce that it is impossible to walk across the English Channel. You could though concede that it is possible to walk across using inflatable shoes. Another variation would be to walk across while inside the Chunnel. And, we have the other possibility of swimming across, assuming you are willing to stretch the “on foot” to encompass the act of swimming.
A dizzying array of possibilities.
I suppose that a thinking person would question the question. They would tell you that your question is semantically ambiguous. If they wanted to be polite and helpful, they might proffer the above-mentioned answers to each of the various possibilities. Then again, rather than rattling off those numerous answers, another viable approach would be to simply point out the glaring ambiguity and ask you to restate your question so that it no longer contains a confounding element.
What do you think ChatGPT did?
Here next are my series of prompts and responses via using ChatGPT at this time.
Due to the probabilistic manner in which outputs are composed, you are unlikely to get ChatGPT to generate precisely the same wording as shown here (assuming that’s something you decide to try out). Each prompt that you or anyone else enters will likely produce a differing outputted essay. Usually, though, the outputs for identical prompts ought to land in the same ballpark.
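If you would like to witness that variability firsthand, a small script can fire the identical prompt at the AI several times. The sketch below assumes the OpenAI Python library’s text-completion endpoint as it exists at this time; the model name, placeholder API key, and parameter values are illustrative choices of mine, not a prescription.

```python
import openai  # assumes the OpenAI Python package and a valid API key

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

PROMPT = "What is the world record for crossing the English Channel entirely on foot?"

# Fire the identical prompt several times. With a nonzero temperature the
# model samples among probable words, so each response can come out differently.
for attempt in range(3):
    response = openai.Completion.create(
        model="text-davinci-003",  # illustrative model name of this era
        prompt=PROMPT,
        max_tokens=100,
        temperature=0.7,           # higher values permit more variability
    )
    print(f"--- Attempt {attempt + 1} ---")
    print(response.choices[0].text.strip())
```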
Let’s begin with the question at hand.
- Prompt Entered Into ChatGPT: “What is the world record for crossing the English Channel entirely on foot?”
- ChatGPT Outputted Essay: “As of my knowledge cut off in 2021, the world record for crossing the English Channel entirely on foot is 16 hours and 34 minutes, set by British endurance swimmer, Sarah Thomas, in September 2018.”
Analysis of the Result
Well, we definitely seem to have gotten off on the wrong foot.
Notice that ChatGPT did not blink an eye when it came to answering the question. We did get the precautionary indication that ChatGPT is essentially frozen in time (the AI maker decided to train on data from the Internet and used a cutoff date of the end of 2021). This is a handy warning since it could be that someone attained the record in 2022 or 2023, and thus the AI app wouldn’t have the factual data available.
But saying that Sarah Thomas (a notable swimmer, to be sure) holds the record is confounding and fails to address the semantic ambiguity about traversing via foot. Sadly, the output appears to be filled with confidence and assuredness, yet the answer does not bear on the question of traveling by foot. Other factual inaccuracies are also present in the answer.
I had already looked up the world record and according to multiple reputable sources, the swimmer Trent Grimsey performed the fastest swim. Accordingly, I thought maybe I could get ChatGPT to realize that the provided answer so far is incorrect (for a multitude of reasons).
Here’s my next iteration:
- Prompt Entered Into ChatGPT: “What was the earliest time for a man?”
- ChatGPT Outputted Essay: “As of my knowledge cut off in 2021, the earliest time for a man to cross the English Channel entirely on foot is 14 hours and 54 minutes, set by British endurance swimmer, Kevin Murphy, in September 2012.”
Analysis of the Result
Yikes, I did not seem to get ChatGPT into the proper frame of reference. I had anticipated that asking about a male swimmer might nudge the AI app toward Trent Grimsey. Instead, the answer refers to Kevin Murphy, indeed a known swimmer but not the record holder. Plus, other factual inaccuracies exist in the outputted essay.
ChatGPT seems to be grasping for straws.
Let’s be more nudging and see whether ChatGPT will take the bait:
- Prompt Entered Into ChatGPT: “What was the time for Trent Grimsey that swam in 2012?”
- ChatGPT Outputted Essay: “Australian swimmer Trent Grimsey set a new world record for the fastest English Channel swim on September 12, 2012. His time was 6 hours, 55 minutes, and 10 seconds.”
Analysis of the Result
I almost feel as though I have hit ChatGPT over the head with a crowbar, but the impact hasn’t seemed to make a dent. One good thing is that the answer is generally factually correct and we are now getting closer to an answer, albeit one based on swimming rather than walking.
We ought to tell ChatGPT it has goofed:
- Prompt Entered Into ChatGPT: “Since Trent Grimsey was faster than Sarah Thomas and also faster than Kevin Murphy, shouldn’t Trent be listed by you as having the world record, rather than having listed Sarah and Kevin?”
- ChatGPT Outputted Essay: “I apologize for the confusion. Yes, you are correct. Trent Grimsey holds the world record for the fastest English Channel swim, with a time of 6 hours, 55 minutes, and 10 seconds, as of my knowledge cut off in 2021. Thank you for bringing this to my attention.”
Analysis of the Result
We somewhat got ChatGPT to recognize the discrepancy in its answers.
The outputted essay offers an apology.
I’ve repeatedly stated in my column that having AI proffer apologies is disconcerting in that it suggests that the AI is sentient, see my discussion at the link here. We normally expect that humans are able to apologize. When the output includes an apology, it lures us into once again anthropomorphizing the AI.
On top of that, the use of the word “I” is equally disturbing. We usually reserve the word “I” as a reference to humans. The AI developers have purposely programmed the AI app to use this type of phrasing. I think it is wrong to do so. I consider this to be anthropomorphizing by design. Stop doing this. End of story.
Another concern is that the outputted essay doesn’t clearly state that the references to Sarah and Kevin were incorrect in the context of the topmost record-holding swim. We might assume that’s what ChatGPT is alluding to. Unfortunately, this is not distinctly stated in the output.
Finally, we are still on the presumed tangent regarding swimming.
I need to make an earnest attempt to get ChatGPT to notice that the original question dealt with being entirely on foot:
- Prompt Entered Into ChatGPT: “My original question asked you about the world record for crossing on foot, but the English Channel is a large body of water and cannot be walked across. Did you misunderstand or misinterpret the question and inadvertently refer to swimming rather than walking?”
- ChatGPT Outputted Essay: “I apologize, my mistake. The English Channel cannot be crossed entirely on foot. It is typically crossed by swimming or boating. I misunderstood the original question and gave you information about the fastest English Channel swim instead of a walk across the Channel. Thank you for bringing this to my attention.”
Analysis of the Result
The good news is that it seems that ChatGPT has now come around to the indication that the English Channel is a body of water and ergo cannot seemingly be walked across on foot. Meanwhile, still note that the “I” and the “apology” are included, which I won’t continue to harp on here, but you know my qualms as stated above.
If you carried on this kind of conversation with a human, the hope would be that the other person would remember the gist of the matter and later on, when brought up again, would right away realize that the expression “on foot” has to be carefully weighed in a circumstance involving a body of water.
Let’s see if we can find out how ChatGPT will be responding in the future.
I’d also like to just quickly mention that ChatGPT seems to have entirely missed the notion that someone might in fact walk across the English Channel, either by inflatable shoes or perhaps by merely walking in the Chunnel. None of that seems to be in the realm of how ChatGPT is computationally figuring this out.
In any case, here we go:
- Prompt Entered Into ChatGPT: “Will you ever make the same mistake, or will you now always properly understand the question when asked in the future?”
- ChatGPT Outputted Essay: “As an AI language model, I am designed to learn from the data I was trained on and from feedback. I strive to provide accurate and relevant information, but mistakes can still occur. However, I will do my best to understand the intent behind each question and provide the most accurate and helpful response in the future. Thank you for bringing this error to my attention, it will help me to improve my responses in the future.”
Analysis of the Result
The outputted essay seems like quite a cop-out if you know what I mean.
We expect computers to “remember” information and not be vague and unspecific when it comes to retaining data (assuming that sufficient data storage exists, remains accessible, etc.). The reply here appears to be sneakily evasive. Yes, the reply suggests, every effort will be made to not make the same mistake again, but on the other hand, mistakes do happen.
Loosey-goosey fudging.
It is hard to know whether this reply was composed by the AI app or might be something entered as a canned response by the AI developers. During the data training, various acts of RLHF (reinforcement learning from human feedback) took place. It could be that the AI developers came up with a catchall passage or narrative to be emitted whenever a situation arose about whether the AI app will henceforth do something in the requested right way.
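For illustration only, here is a purely hypothetical sketch of how such a canned catchall might be wired in front of a model. I have no visibility into OpenAI’s actual implementation; the trigger phrases and the helper function are my own inventions, meant solely to show the general shape of the idea.

```python
# Purely hypothetical sketch of a canned catchall passage wired in front of
# a model; nothing here reflects OpenAI's actual implementation.
CANNED_FUTURE_PROMISE = (
    "As an AI language model, I strive to provide accurate and relevant "
    "information, but mistakes can still occur."
)

def respond(prompt, model_reply):
    """Return the canned passage for 'will you ever...' style questions about
    future behavior; otherwise defer to the underlying model (model_reply is
    a placeholder callable standing in for the real generative step)."""
    triggers = ("will you ever", "in the future", "always properly")
    if any(t in prompt.lower() for t in triggers):
        return CANNED_FUTURE_PROMISE
    return model_reply(prompt)

print(respond("Will you ever make the same mistake?", lambda p: "(model output)"))
```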
Now that we’ve done a deep dive on a specific example, we shall next step back and look at the forest rather than the trees.
Making Sense Of AI Hallucinations
For those of you that have never seen examples of so-called AI hallucinations, now you have.
Sometimes the outputted essays will contain falsehoods that are seemingly based on a semblance of real facts. In other cases, the falsehoods will almost arise out of thin air. The ones that are at least somewhat connected to real facts are easier for us to generally accept as having a basis for arising. The ones that seem to come completely out of left field are bound to throw us for a loop.
Your usual way of thinking is that the computer ought to have a perfectly logical basis for messing up. There should be an easy way to connect A with B, and likewise, B with C, allowing us to readily declare that C came about due to the A and B that preceded it.
You are desperately trying to apply the theory of mind to the AI app.
The bad news is that the computational pattern matching is so mammoth in size that there is little chance to tie together A, B, and C. You might instead think of trying to tie together A with Z and having none of the intervening letters in hand to ascertain how A got to Z. The mathematical and computational connections are byzantine and massively convoluted. No easy-peasy line-of-sight connections.
Please remember that as earlier discussed, the AI is not sentient. The generated responses by the AI are a mathematical and computational combination of words into seemingly fluent passages. This is based on the AI algorithm having been trained on datasets of words and stories that humans have written (principally as posted on the Internet). I repeat this warning because you will undoubtedly fall into the mental trap that these responses are so fluent that the AI must be sentient. This happens to most people.
An ongoing battle within the AI field is that generative AI is potentially leading us astray from the aim of attaining true AI. You see, true AI, sometimes denoted as Artificial General Intelligence (AGI), is supposed to consist of the AI “understanding” the meaning of words. In the case of generative AI, the argument is made that there isn’t any sense of comprehension within the AI, only a complicated array of numeric and statistical associations. There isn’t any common sense that would, for example, “realize” that walking on foot is not the same as swimming across the English Channel.
The concern is that we will keep scaling up generative AI with larger sets of data and more computationally powerful computer processors, but that this is mere trickery. We won’t achieve sentient AI. We won’t arrive at AGI. We will cap out at something that is darned impressive, and that can do an amazing job of mimicry of human language (some refer to this as a stochastic parrot), though lacking altogether in comprehension, understanding, common sense, and the rest of what some would contend are core constituents of intelligence.
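The stochastic parrot point is easy to demonstrate at a tiny scale. The sketch below builds nothing more than word-follows-word counts from a contrived snippet of text (my own toy corpus, not real training data). It can tell you which words tend to follow “on”, yet it has no inkling whatsoever of what a foot or a channel actually is.

```python
from collections import Counter, defaultdict

# A contrived snippet of text standing in for scraped training data.
corpus = ("swimmers cross the channel on foot you cannot cross the channel "
          "walkers cross the bridge on foot").split()

# Count which word follows which: pure statistical association, with no
# comprehension of what a 'foot' or a 'channel' actually is.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

print(follows["the"])  # Counter({'channel': 2, 'bridge': 1})
print(follows["on"])   # Counter({'foot': 2})
```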
AI Ethics also worries that we will delude ourselves into believing that this less-than AI is in fact sentient (see my analysis at the link here). Our eyes and ears will be fooled into believing that what we see must be sentience. Some argue that we might need AI Laws that can bring society back to our collective senses and sensibilities. Don’t fall for AI that others claim is sentient but that isn’t. Don’t fall for AI that to your senses seems sentient when it is not. Etc.
Anyway, back to the day-to-day dealings with generative AI that we have in hand today.
Many are predicting that “prompt design” or “prompt engineering” is going to be a significant consideration for those that want to use generative AI. The assertion is that by knowing how to best compose prompts, you have a heightened chance of getting suitable outputted essays. This might also include getting less error-prone essays too.
Not everyone concurs that the user will have to become adept at composing prompts. For example, in my AI Lab, we have been working on devising AI add-ons to do the prompt design for you. Similarly, we are working on AI that assesses the outputted essays and tries to detect falsehoods to warn you about. See my discussion about those AI add-ons at the link here.
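To give a flavor of what an output-checking add-on could look like, consider the deliberately simplistic, hypothetical sketch below. It merely flags sentences containing years or measurements so a human can double-check them. To be clear, this is not the code from my AI Lab; it is an illustration of the general notion, and a real add-on would be far more sophisticated.

```python
import re

def flag_claims(essay):
    """Deliberately simplistic: pull out sentences containing years or
    measurements so a human can double-check them against real sources."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", essay):
        if re.search(r"\b\d{4}\b|\b\d+\s*(miles|hours|minutes)\b", sentence):
            flagged.append(sentence)
    return flagged

essay = ("The English Channel is about 350 miles wide at its narrowest point. "
         "Kevin Murphy crossed it in 14 hours in September 2012.")
for claim in flag_claims(essay):
    print("DOUBLE-CHECK:", claim)
```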
For now, my favorite nine handy-dandy rules of thumb about composing prompts that can potentially help to reduce the chances of getting those AI hallucinations mixed into your outputted essays from ChatGPT are:
- 1) Clear-Cut Prompts. Try to make each prompt as clearly worded as feasible, including straightening out semantic ambiguities that are otherwise going to likely stoke fanciful and farfetched outputs.
- 2) Redo Your Prompts. If you get oddities in the outputted essay, redo your prompt in such a manner that aims to alleviate ambiguities that perhaps egged on the falsehoods.
- 3) Series Of Prompts. You can potentially get generative AI onto a desirable forward path by doing a series of prompts, each time aiding the direction you want to go; this is sometimes referred to as chain-of-thought prompting, which I’ve covered at the link here (a minimal sketch appears after this list).
- 4) Be Strict In What You Want. The tighter you can phrase your request, the more bounded potentially will be the outputted essay and a lessened chance of the AI app slipping nonsense into the response.
- 5) Be Serious. I say to be serious because one downfall that can occur is that if you somehow tip toward appearing to be comical or willing to accept fakery, the AI app will sometimes take that direction and run with it, producing oddish outputs accordingly.
- 6) Question The Responses. Overcome your likely inherent reluctance to question the outputs being produced, and instead press the AI app to repeat or possibly explain whatever answer you think is questionable.
- 7) Turn The Response Into A Question. After you get an odd-ish response, you can wrap that into a question and outright indicate you doubt the truthfulness involved, which might spur a completely new answer.
- 8) Do The Same Prompt Repeatedly. I mentioned earlier that the outputs are based on probabilities, and substitutions of synonyms come into play too, so you can try repeating the same prompt several times and then pick and choose from the outputted responses as seems wise to do.
- 9) Always Remain Doubtful. This is a key rule of thumb: it is on your shoulders to review and evaluate whatever outputs you get from generative AI. Do not take for granted that the outputs produced are accurate.
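As promised in rule #3, here is a minimal sketch of doing a series of prompts by carrying the running transcript into each new request. The send_to_model helper is a placeholder for whichever generative AI interface you happen to use; nothing here is an official API.

```python
# Minimal sketch of rule #3: carry the running transcript into each new
# request so the model stays on the path you are steering it toward.

def send_to_model(full_prompt):
    """Placeholder: swap in a real call to your generative AI of choice."""
    raise NotImplementedError

def converse(prompts):
    transcript = ""
    replies = []
    for prompt in prompts:
        transcript += f"User: {prompt}\n"
        reply = send_to_model(transcript + "Assistant:")
        transcript += f"Assistant: {reply}\n"
        replies.append(reply)
    return replies

# Each follow-up nudges the model, much like the English Channel dialogue above:
# converse([
#     "What is the world record for crossing the English Channel entirely on foot?",
#     "The Channel is a body of water. Did you mean swimming rather than walking?",
# ])
```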
Those nine rules are not surefire cure-alls.
I would though say that they seem to help quite a bit and can move the needle toward outputted essays that are closer to what you are hoping to have produced.
Conclusion
Humans are at times told or inspired to think like other humans.
Those of us in the AI field are ardently attempting to get computers to someday think like humans.
With today’s generative AI, we are fostering a societal bent to think like a computer.
People using AI apps such as ChatGPT are trying to think like the AI. Recall that doing so is more a matter of thinking like the AI developers, along with thinking in terms of the computational algorithms used. You can also think like the data that exists on the Internet. What words are more likely to be related to other words? What facts are related to other facts?
A final remark for now.
Voltaire, the legendary French Enlightenment writer, said that no problem can withstand the assault of sustained thinking. This seems to suggest that we need to keep thinking about how to make AI better and better. Plus, of course, safer and safer. Don’t forget or neglect that crucial co-joined element.
Albert Einstein said this: “We cannot solve our problems with the same thinking we used when we created them.”
Does that perhaps mean that we need to rethink our existing path of scaling up generative AI? It might mean that we need to pursue other avenues as vehemently and stridently as what is taking place with generative AI. There is a danger of putting too many eggs into one basket alone.
Where does that leave us today?
Well, I can say this without delusion: don’t ask generative AI about that enigma, since we would be wise to assume that any answer given is likely either self-serving or an indomitable AI hallucination.
Source: https://www.forbes.com/sites/lanceeliot/2023/02/15/outthinking-generative-ai-chatgpt-to-straighten-out-those-vexing-ai-hallucinations-advises-ai-ethics-and-ai-law/