Force of habit.
I’m sure you’ve experienced it.
The underlying notion of force of habit is that sometimes you do things based on a somewhat mindless reliance on having done them over and over again. The habit takes over your mental prowess, perhaps overriding a capacity to see or do things anew.
On the one hand, you can contend that the force of habit can be quite useful. Instead of consuming your mental processes by trying to think overtly about a particular matter at hand, you seem to be able to get a task done without much mental exertion. Some commentators refer to this as muscle memory, as though conditioning your mind were analogous to conditioning your physical body to perform an automatic response.
A crucial downside of relying upon a force of habit can be that you miss out on doing things in better ways or fail to take advantage of an emerging opportunity. You are set in your ways and don’t exploit or leverage viable alternatives that might be useful to you. It is the classic stick-in-the-mud (perhaps this age-old phrase ought to be more fully expressed as a stick firmly and unyieldingly stuck in the mud).
In today’s column, I am going to indicate how force of habit is causing many people to undershoot when it comes to using Artificial Intelligence (AI).
The particular context will involve the use of AI that is nowadays referred to as Generative AI, and I’ll be showcasing the force of habit aspects via the use of a widely popularized and greatly heralded AI app called ChatGPT. I think you’ll enjoy the exploration since I will be providing actual inputs and outputs from ChatGPT, covering the seemingly innocuous task of devising a cooking recipe. The task itself is a relatively ordinary chore. We can nonetheless glean some quite useful insights into how people are inadvertently acting in what might be coined an AI rookie manner, dominated by an ingrained force of habit.
By the end of this discussion, you won’t be making that same AI rookie mistake.
On a grander scale, all of this has vital significance related to AI Ethics and AI Law. You might find of interest my extensive and ongoing coverage of the latest in AI Ethics and AI Law at the link here and the link here, just to name a few. A sobering and judicious amount of attention to AI Ethics and AI Law entails how we make use of AI, including the good uses of AI and averting or at least mitigating the bad uses of AI.
A specific type of AI known as Generative AI has dominated social media and the news recently when it comes to talking about where AI is and where it might be headed. This was sparked by the release of an AI app that employs generative AI, the ChatGPT app developed by the organization OpenAI.
ChatGPT is a general-purpose AI interactive system, essentially a seemingly innocuous general chatbot, nonetheless, it is actively and avidly being used by people in ways that are catching many entirely off-guard. For example, a prominent concern is that ChatGPT and other similar generative AI apps will allow students to cheat on their written essays, perhaps even encouraging or spurring pupils to do so. Students that are lazy or feel they are boxed in without time or skill to do an essay might readily invoke a generative AI app to write their essay for them. I’ll say more about this in a moment. For my detailed analysis of how ChatGPT allows this, and what teachers ought to be doing, see the link here.
If you are especially interested in the rapidly expanding brouhaha about ChatGPT and generative AI, I’ve been doing an exposé series in my column that you might find informative and engaging.
I opted to review how generative AI and ChatGPT are being used for mental health advice, a troublesome trend, per my focused analysis at the link here. If you want to know what is likely to unfold about AI throughout 2023, including upcoming advances in generative AI and ChatGPT, you’ll want to read my comprehensive list of 2023 predictions at the link here. I also did a seasonally flavored tongue-in-cheek examination pertaining to a Santa-related context involving ChatGPT and generative AI at the link here. On an ominous note, some schemers have figured out how to use generative AI and ChatGPT to do wrongdoing, including generating scam emails and even producing programming code for malware, see my analysis at the link here.
I’ll be explaining the foundations herein regarding Generative AI and ChatGPT, so please hang in there and you’ll get the general scoop.
Meanwhile, if you take a look at social media, you will see people proclaiming ChatGPT and generative AI as the best thing since sliced bread. Some suggest that this is in fact sentient AI (nope, they are wrong!). Others worry that people are getting ahead of themselves. They are seeing what they want to see. They have taken a shiny new toy and shown exactly why we can’t have nice new things.
Those in AI Ethics and AI Law are notably worried about this burgeoning trend, and rightfully so.
You might politely say that some people are overshooting what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action. Do not anthropomorphize AI. Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform.
On the other end of that extreme is the tendency at times to undershoot what today’s AI can do.
Yes, people using AI such as generative AI and ChatGPT are in a sense undershooting, failing to get the full experience associated with contemporary AI. This is often due to a force of habit mindset by those that do this kind of undershooting. You can debate whether undershooting is a problem or not. Maybe it is safer to presume that AI is more limited than it is. Certainly, overshooting does have the lion’s share of dangers. We ought though to be using AI in whatever full glory it can provide. Maximizing the advantages of AI can lead to potentially large upsides. If you undercut what AI can do, you are missing out on possible opportunities and payoffs.
Before we take a look at how some are undershooting AI and especially ChatGPT, I’d like to add some further context to the force of habit issue.
Let’s do so and then we’ll come back around to looking at generative AI and ChatGPT.
The Forces Underlying Force Of Habit
I often cite the early days of smartphones to illustrate the point about the powerful impact of a force of habit.
When cell phones first became popularized, people used them primarily for making phone calls, appropriately so. As smartphones emerged, a camera capability was added. You might recall or have heard that people were not accustomed to using their portable phones to take pictures.
A person wanting to take a picture would reflexively seek a conventional camera and forsake using their camera-equipped portable phone. It was almost laughable to see someone agonizing that they left their conventional camera at home and were vexed that they couldn’t take a snapshot at an opportune moment. After being reminded that the phone in their hand could do so, they would sheepishly take the prized shot.
This same cycle repeated itself when smartphones added a video recording capability. This was slightly different from the first example. People would realize that they could use their handheld phones to take videos, but the people being filmed acted as though the phone would only take still snapshots. Again, it was nearly laughable that a group of people would all freeze to allow a photo to be taken, while the person with the video-recording equipped smartphone would have to implore them to wave their arms and act alive.
You could say that force of habit clouded the minds of some people that weren’t used to using a smartphone to take pictures in the former case, meaning via force of habit they assumed that photos had to be taken by use of a conventional camera. In the second case, people had a force of habit that videos could only be taken via a distinctive handheld video recorder, and that smartphones only take still photos.
I trust that this establishes a revealing context of how the force of habit can arise.
Shift gears into the AI realm.
In the matter of generative AI and ChatGPT, a crucial element of this type of AI is that it is supposed to be treated as though it is conversational. You have probably used a chatbot or similar tech either at home or at work. If you’ve ever used Alexa or Siri, you have used a conversational-oriented AI system, whether you realized it or not.
In conversational AI, the aim is to allow a human to converse with an AI app. The human should be able to use the same conversational capacities as they might do with a fellow human. Well, that’s the aspirational goal. Not all conversational AI is that good. We still have hurdles to overcome for that level of fluency.
You have undoubtedly used Alexa or Siri and found that those AI conversational apps are at times quite wanting. You say something in fluent English, for example, and expect that the AI will get the gist of your indication. Regrettably, those AI systems often respond in ways that illustrate they didn’t get the nature of your command or request. This can be irritating. This can be exasperating.
Eventually, you somewhat relent or give up trying to be all-out conversational. The AI conversational capability turns out to be restrictive due to the AI not being up to snuff. People gradually figure out that they have to “dumb down” their utterances to interact with these alleged conversationally fluent AI apps. You start to talk to the AI app or text it with shortened and altogether rudimentary sentences. The hope is that if you keep things short and sweet, the chances of the AI getting things correct will be increased.
Perhaps you’ve seen people that go through this type of arc. They begin with great enthusiasm when using an AI conversational app. Upon realizing that half of the time or more, the AI completely misses the mark in terms of what is being stated, the human becomes crestfallen. They want to continue using AI but realize it is useless to use their human fluency in the language. Ultimately, each person concocts their own shorthand that they believe, and hope, will appease the AI app and let the AI undertake their human-uttered instructions.
Oh, how the mighty shall fall, meaning that the touted AI conversational apps often are a far cry from their proclaimed proclivities.
Here’s the twist to all of this.
People that get accustomed to restricting or limiting their conversational interactions with AI apps will fall into a force of habit to always do so. This makes sense. You don’t want to reinvent the wheel each time that you use a conversational AI app that you assume will be as limited as the last several that you’ve used. Thus, you might as well rely upon the lessons you’ve already learned in the school of hard knocks when conversing with other disappointingly less-than-fluent AI apps.
The twist is that the latest in generative AI, such as ChatGPT, tends to be a notable step up in conversational capability. Whatever prior cruder and more limited AI apps you’ve used are likely much less conversationally capable than these latest generative AI apps.
You need to shake out of your noggin the prior conversational AI disappointments and be willing to try something anew. Give these new generative AI apps a fighting chance to strut their stuff. I think that you will relish doing so. It can be quite surprising and uplifting to witness the progress being made in AI.
Now, please do not misinterpret what I am saying. This is not an endorsement that today’s generative AI is somehow fully conversational and fluent. It is not. I am just emphasizing that it is better than it once was. You can up your game because the AI game has also been upped. Again, we still have a long way to go.
Let me give you an example of what I am referring to.
A very popular social media vlogger that does videos about cooking was quick to jump on the ChatGPT and generative AI bandwagon by opting to use the AI app for one of her video segments. She logged into ChatGPT and asked it to produce a recipe for her. She then proceeded to try and cook the indicated meal using the AI-generated recipe.
All in all, this seems pretty sensible and an exciting way to showcase the latest in AI.
For any modern-day AI person, a somewhat sad or at least disappointing thing happened along this cooking journey. The vlogger essentially generated the recipe as though the AI was akin to a cookbook. The vlogger told the AI the overall type of meal desired, and the AI app generated a recipe. The vlogger then went ahead and tried to cook the meal, but doing so raised various questions. Why didn’t the recipe have some other ingredients that the vlogger thought ought to be included? Why didn’t the AI explain how to do some of the complicated parts of the cooking effort?
These types of questions were repeatedly brought up in the video segment.
Most viewers would have likely nodded their heads and thought that this is indeed the usual set of problems when you use a cookbook or look up a recipe on the Internet. You get essentially a printed-out list of ingredients and directions. When you try to use those in real life, you find that sometimes steps are missing or confusing.
Dreamily, it would be fantastic to interact with the chef that made the recipe. You could ask them these kinds of pointed questions. You would be able to converse with the chef. Instead, you merely have a static list of cooking instructions and are unable to discern the finer aspects that aren’t stated on the paper.
Whoa, wait a second, remember that I have been pounding away herein about the conversational facets of generative AI and ChatGPT. You aren’t supposed to just ask a question and walk away once an answer is devised and presented. The better use is to carry on a conversation with the AI app. Go ahead and ask those questions that you might have asked of a human chef.
By force of habit, you might not even think to do so. It seems that this is indeed what happened in the case of the cooking vlogger. Prior uses of conversational AI can condition your own mind to treat the latest in AI as though it is little more than a stylized Internet search engine. Enter your query. Get a look at what comes back. Pick one. Proceed from there.
With generative AI, you should consider that your starter prompt is just the beginning of an informative and invigorating conversational voyage.
I tell people to keep these types of conversational nudges in their mental toolkits when using generative AI such as ChatGPT (try any or all of these tactics when interacting):
- Tell the AI to be curt in its responses and you’ll get a more to-the-point response
- Tell the AI to be elaborate in its responses and you’ll get longer amplifications
- Ask the AI to explain what has been stated so that you can be more informed
- Proceed to explain yourself and see what the AI responds with as to your understanding
- Disagree with the AI as to its stated responses and prod the AI into defending things
- Indicate you want a summary or recap of the AI responses to verify what has been stated
- Pivot the conversation as desired to related or different topics (side tangents are okay)
- Make a pretend type of scenario that you want the AI to contextually include
- State something as settled fact and see whether the AI confirms or disconfirms your assertion
- Etc.
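To make the conversational point concrete, here is a minimal sketch in Python of what carrying on a multi-turn interaction looks like structurally. The model itself is stubbed out with a placeholder function, and names such as send_prompt are hypothetical, purely for illustration; in practice you would be typing into ChatGPT or calling whatever generative AI service you use. The key idea is that every follow-up prompt rides along with the full prior context:

```python
# Sketch of a multi-turn conversation with a generative AI app.
# The model is stubbed out below; in real use you would invoke the
# actual AI service. The send_prompt name and echo-style stub are
# hypothetical, for illustration only.

def stub_model(history):
    """Placeholder for a real generative AI call; echoes a canned reply."""
    last_user_turn = history[-1]["text"]
    return f"(AI reply to: {last_user_turn})"

def send_prompt(history, prompt):
    """Append the user's prompt, get a reply, keep both in the history."""
    history.append({"role": "user", "text": prompt})
    reply = stub_model(history)
    history.append({"role": "ai", "text": reply})
    return reply

history = []  # the whole conversation lives here, turn by turn
send_prompt(history, "Provide me a recipe for poached eggs.")
send_prompt(history, "I only have 3 eggs, please adjust the recipe.")
send_prompt(history, "Summarize the changes you just made.")

# Each follow-up sees the full prior context, which is the key
# difference from a one-shot, search-engine style query.
print(len(history))  # 3 user turns + 3 AI turns = 6 entries
```

The contrast with a conventional search engine is that the history accumulates: the third prompt above only makes sense because the earlier turns are still in play.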
Those aspects will allow you to see how far the conversational AI app can stretch. I dare say you might be taken aback at how far this can go. In some ways, those aforementioned suggestions are similar to what you might do if interacting with a human. Think about that. If you were conversing with a human, all of those practices might better enable a more evocative conversation.
Unlike conversing with a human, you don’t have to be worried about hurting the feelings of the other participant in the conversation. The AI is a machine. You can be abrupt. You can be abrasive, though this won’t help the circumstances (I’ll revisit this shortly).
There is a slew of AI Ethics and AI Law considerations that arise.
First, do not let the advancements in conversational techniques and technologies of AI draw you into the anthropomorphizing of AI. It is an easy mental trap to fall into. Don’t get sucked in.
Second, the notion of not worrying about hurting the feelings of the AI has led some to warn of a slippery slope. If you are abrasive to conversational AI, you might let this become your norm overall. You will gradually be abrasive to humans too. It isn’t that being abrasive to the AI is bad per se (well, some worry about a future existential risk AI that won’t be keen on this, see my discussion at the link here), it is instead that you are forming a habit of being abrasive all told.
Third, some AI researchers and AI developers have chosen to fight back, as it were, by programming the AI to seem as though it does have feelings, see my coverage at the link here. Sometimes this is explicitly programmed, while sometimes it is based on pattern matching associated with human interactions (i.e., study how humans interact and then have the AI mimic what a human does when the other person is being abrasive). The belief is that this will curtail humans from sliding down the slippery slope that I just mentioned.
The other side of that coin is that if the AI appears to have feelings, it once again reinforces the already likely tendency for humans to anthropomorphize the AI. In that case, which is worse, the so-called cure or the malady that underlies the issue?
For more about key AI Ethics principles and the ongoing saga of trying to get AI developers and those that operate AI to adopt Ethical AI practices, see my coverage at the link here. Expect new laws about AI to emerge at the federal, state, city, and local levels, such as the New York City law on AI audits (see my analysis at the link here), and a wave of global international AI-related laws is coming too, see my updates at the link here.
At this juncture of the discussion, it might be handy to share with you some notable details about how generative AI works. I also think you might find useful some additional background about ChatGPT.
In brief, generative AI is a particular type of AI that composes text as though the text was written by the human hand and mind. All you need to do is enter a prompt, such as a sentence like “Tell me about Abraham Lincoln” and generative AI will provide you with an essay about Lincoln. This is commonly classified as generative AI that performs text-to-text or some prefer to call it text-to-essay output. You might have heard about other modes of generative AI, such as text-to-art and text-to-video.
Your first thought might be that this does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln.
The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.
Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining many millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
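The probabilistic word-picking just mentioned can be illustrated with a toy sketch. The tiny next-word probability table below is entirely invented for illustration; a real generative AI model learns vastly richer distributions from enormous text corpora. The point is only that sampling, rather than copying, is what makes each generated passage come out differently:

```python
import random

# Toy illustration of probabilistic next-word selection. This tiny
# hand-made table is NOT a real model; it merely mimics the idea of
# choosing each next word according to learned probabilities.
next_word_probs = {
    "the":    [("recipe", 0.5), ("gravy", 0.3), ("eggs", 0.2)],
    "recipe": [("calls", 0.6), ("includes", 0.4)],
    "gravy":  [("thickens", 0.7), ("simmers", 0.3)],
    "eggs":   [("poach", 0.8), ("simmer", 0.2)],
}

def generate(start, steps, rng):
    """Extend a phrase by sampling each next word from the table."""
    words = [start]
    for _ in range(steps):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options)
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

rng = random.Random(42)
# Two runs from the same starting word can diverge, which is why
# generated essays tend not to match any single training passage.
print(generate("the", 2, rng))
print(generate("the", 2, rng))
```

Scale this toy up to a vocabulary of tens of thousands of tokens and probabilities learned from the web, and you have the gist of why the output is original-seeming rather than a word-for-word copy.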
In a moment, I’ll showcase to you what happens when you enter questions or prompts into generative AI. I will make use of the latest version of ChatGPT to enter my prompts and have collected the “answers” or essays generated by the AI (note that the same can be done with the numerous other available generative AI apps; I’ve opted to use ChatGPT because it is getting its five minutes of fame right now).
Sometimes, a generative AI app picks up falsehoods amid the training data of unreliable info across the Internet. There is no “common sense” in generative AI to determine what is true versus false. Furthermore, very few AI apps have any cross-checking, nor do they showcase any probabilities associated with what they are conveying.
The bottom-line result is that you get a response that looks and feels like it exudes great assurance and must be entirely correct. Not so. There is even a chance that the AI computationally made-up stuff, which in AI parlance is referred to as AI hallucinations (a coined term that I decidedly don’t like), see my discussion at the link here.
The makers of ChatGPT underwent a concerted effort to try and reduce the bad stuff outputs. For example, they used a variant of what is known as RLHF (Reinforcement Learning from Human Feedback), whereby before they released the AI to the public, they had hired humans to examine various outputs and indicate to the AI whether there were things wrong with those outputs such as perhaps showcasing biases, foul words, and the like. By providing this feedback, the AI app was able to adjust computationally and mathematically toward reducing the emitting of such content. Note that this isn’t a guaranteed ironclad method and there are still ways that such content can be emitted by the AI app.
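To convey the flavor of that feedback loop, here is a grossly simplified sketch. Real RLHF trains a separate reward model on human preference ratings and then fine-tunes the AI with reinforcement learning; this toy merely downweights candidate responses that human reviewers flagged, which is the core intuition. All names and numbers below are invented for illustration:

```python
# Grossly simplified sketch of RLHF-style adjustment. Human reviewers
# score example outputs; flagged outputs get their emission weight
# reduced. Real RLHF is far more involved (reward model + RL fine-tuning).

candidate_weights = {
    "polite_answer": 1.0,
    "biased_answer": 1.0,
    "off_topic_answer": 1.0,
}

human_feedback = {
    "polite_answer": +1,    # reviewer approves
    "biased_answer": -1,    # reviewer flags as problematic
    "off_topic_answer": -1, # reviewer flags as unhelpful
}

LEARNING_RATE = 0.5
for response, score in human_feedback.items():
    candidate_weights[response] *= (1 + LEARNING_RATE * score)

# Flagged responses are now less likely to be emitted.
best = max(candidate_weights, key=candidate_weights.get)
print(best)                                 # polite_answer
print(candidate_weights["biased_answer"])   # 0.5
```

As the column notes, this kind of adjustment shifts the odds rather than providing an ironclad guarantee; disfavored content can still slip through.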
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his own private jet, you would undoubtedly know that this is malarkey. Unfortunately, some people might not discern that jets weren’t around in his day, or they might know but fail to notice that the essay makes this bold and obviously false claim.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
Are you ready to jump in?
Prepare yourself.
Using ChatGPT Conversationally For Delicious Meal Recipe Making
It is time to craft a recipe.
Yummy, looking forward to a dining-related exercise.
I am not the type of person that keeps recipes around in my kitchen. Yes, I should, but I don’t. Ergo, I decided to first find a recipe online via a conventional search engine that might be interesting to explore. This will help in doing a run-through with ChatGPT. We’ll have a convenient base for comparison.
Upon looking around at various recipes that were listed as a result of my online Internet search, I discovered that Food & Wine had posted a handy-dandy list of the twenty-five most popular recipes of 2022. Within their list of the 2022 most popular recipes, I saw one that especially caught my eye, namely a recipe for turmeric-poached eggs with chive biscuits and lobster gravy.
Sounds mouthwatering.
Per the posting, which was entitled “Turmeric-Poached Eggs with Chive Biscuits and Lobster Gravy” and found within their overall list of the most popular twenty-five recipes of 2022 (in an article entitled “The 25 Most Popular Recipes of 2022, According to Food & Wine Readers”, December 9, 2022), they said this about the scrumptious dish:
· “This decadent brunch dish is reminiscent of crawfish étouffée, but with the West Coast vibes found all over the menu at chef Brooke Williamson’s beachside restaurant complex, Playa Provisions. Lobster lends the gravy-rich flavor, while the turmeric eggs add a sunny pop of color. Make the lobster gravy the day beforehand and reheat it gently to make brunch an easier lift.”
Great, I want to make this, for sure.
The recipe says that these are the ingredients and the directions at a high level:
“Ingredients”
- Chive Biscuits
- Lobster Gravy
- Turmeric-Poached Eggs
- Additional Ingredients
“Directions”
- Prepare the Chive Biscuits
- Prepare the Lobster Gravy
- Prepare the Turmeric-Poached Eggs
- Assemble the Plate
I won’t show the whole recipe here and will just focus on the preparation of the turmeric-poached eggs.
Here’s what the posting said about the turmeric-poached eggs:
“Turmeric-Poached Eggs”
- 8 cups water
- 2 tablespoons apple cider vinegar
- 2 teaspoons ground turmeric
- 6 large eggs, cold
“Directions”
- “Prepare the Turmeric-Poached Eggs: Bring water to a simmer in a medium pot over medium. Whisk in vinegar and turmeric. Crack each egg into a small individual bowl. Working with 1 egg at a time, gently slip eggs into simmering water. Cook eggs until whites are set but yolks are still runny, 3 to 4 minutes. Using a slotted spoon, transfer eggs to a paper towel-lined plate to soak up any water. Eggs may be covered with plastic wrap and kept at room temperature up to 1 hour, or covered and stored in refrigerator up to 1 day (reheat in simmering water until warm, about 1 minute).”
Okay, I think you’ve got the gist of the recipe.
You can find lots of similar recipes online. There isn’t anything particularly novel or extraordinary about this particular recipe. Nonetheless, it does look like it would do the trick and I am looking forward to trying to make this meal.
Suppose though that I have questions about the recipe. I am sorely out of luck. This is a recipe that is cast in stone. I found it via a conventional online search. The recipe is what it is. I cannot ask anyone about the recipe other than perhaps going to some online connoisseurs and maybe posting some questions there, hoping they will take pity on me and aid my efforts to revise the recipe or explain it in more detail.
Why not ascertain what a generative AI can do for me?
Sure enough, I decided to go ahead and try using ChatGPT to aid in this recipe devising attempt. I will proceed by showing you the prompt that I entered into ChatGPT, and then show the resulting text or essay that ChatGPT generated. I will then provide a bit of analysis about what ChatGPT has indicated and how this showcases how generative AI works.
Best of all, I’ll be able to enter a series of prompts as I try to work through my open questions about the recipe. No need to find a human someplace to do so, just use AI in this case.
Let’s go ahead and give this a try.
My entered prompt: “I’ve heard about a great dish that consists of turmeric-poached eggs with chive biscuits and lobster gravy. Go ahead and provide me with a recipe for this delicious-sounding meal.”
Text result generated:
“Sure! Here is a recipe for turmeric-poached eggs with chive biscuits and lobster gravy:”
“Turmeric-Poached Eggs:”
- 4 large eggs
- 2 cups water
- 1/2 teaspoon turmeric
- 1/4 teaspoon salt
“Chive Biscuits:”
- 1 cup all-purpose flour
- 1 tablespoon baking powder
- 1/2 teaspoon salt
- 6 tablespoons cold unsalted butter, cut into small pieces
- 3/4 cup milk
- 2 tablespoons chopped chives
“Lobster Gravy:”
- 4 tablespoons unsalted butter
- 1/4 cup all-purpose flour
- 2 cups milk
- 1/2 cup chicken broth
- 1/2 cup heavy cream
- 1/2 teaspoon salt
- 1/4 teaspoon pepper
- 1/4 teaspoon cayenne pepper
- 4 ounces cooked lobster meat, chopped
“Instructions:”
“To make the turmeric-poached eggs, fill a medium saucepan with the water and bring to a simmer over medium heat. Add the turmeric and salt and stir to combine. Carefully crack the eggs into the simmering water and cook for 3-4 minutes, or until the whites are set and the yolks are still soft. Using a slotted spoon, gently lift the eggs out of the water and set them aside on a paper towel to drain.”
“To make the chive biscuits, preheat the oven to 425°F (220°C). In a medium bowl, whisk together the flour, baking powder, and salt. Using a pastry cutter or your fingers, cut the butter into the flour mixture until it resembles coarse sand. Stir in the milk and chives until the dough comes together.”
“Drop the biscuit dough by rounded tablespoonfuls onto a baking sheet lined with parchment paper. Bake for 12-15 minutes, or until the biscuits are golden brown.”
“To make the lobster gravy, melt the butter in a medium saucepan over medium heat. Add the flour and cook, stirring constantly, for 2-3 minutes, or until the mixture is golden brown. Slowly whisk in the milk, chicken broth, and heavy cream, stirring constantly to prevent lumps from forming. Bring the mixture to a boil and then reduce the heat to low. Simmer for 5-7 minutes, or until the gravy has thickened. Stir in the salt, pepper, cayenne pepper, and lobster meat.”
“To serve, place a chive biscuit on each plate and top with a turmeric-poached egg. Ladle the lobster gravy over the top and serve immediately.”
“I hope you enjoy this recipe! Let me know if you have any questions.”
Analysis Of The Text Generated
Though I am not an especially versed chef, my untrained eye suggests that this recipe seems to be fine and wholly acceptable. It seems to be on par with the Food & Wine recipe. Note that the wording is obviously different. I mention this because ChatGPT didn’t just perchance copy word-for-word the Food & Wine recipe or any other such recipe that is posted on the Internet.
To try and double-check this uniqueness, I took the text of the ChatGPT recipe and did an online Internet search for it. I could not find an exact duplicate. Of course, I found many recipes of a similar nature, simply because this kind of meal is relatively common. I would generally suggest that this recipe by ChatGPT is unique and original, meaning that it isn’t a word-for-word copy of one that is already posted online. When I say unique and original, I am not implying that this type of recipe has never been posted. It has. I am saying that this specific wording for this specific recipe was composed via the generative AI and not a mere copycat.
I now have the ChatGPT-generated recipe in hand, and I could try to make the meal. Turns out that I had some questions and wanted to go further with this now-started recipe-related conversation. Realize that if I had gotten this recipe via a conventional search engine, I would be unlikely to be able to converse about it with the system.
Here’s what I indicated next to ChatGPT about the generated recipe.
My entered prompt: “I only have 3 eggs, what should I do? Also, I am trying to cut down on salt so the amount of salt in the recipe has to be reduced (show me). The gravy looks good and I’d like you to add some carrots and onions to it.”
Text result generated:
“If you only have 3 eggs, you can still make the recipe as written, you’ll just have one fewer egg. If you’d like to reduce the amount of salt in the recipe, you can reduce the amount of salt in the turmeric-poached eggs and the chive biscuits to 1/8 teaspoon each, and reduce the amount of salt in the lobster gravy to 1/4 teaspoon.”
“To add carrots and onions to the lobster gravy, you can sauté 1/2 cup diced carrots and 1/2 cup diced onions in a separate pan with a little bit of butter or oil until they are tender. Then, add the cooked carrots and onions to the lobster gravy when you stir in the salt, pepper, and cayenne pepper.”
“Here’s the modified recipe:”
“Turmeric-Poached Eggs:”
- 4 large eggs
- 2 cups water
- 1/2 teaspoon turmeric
- 1/8 teaspoon salt
“Chive Biscuits:”
- 1 cup all-purpose flour
- 1 tablespoon baking powder
- 1/8 teaspoon salt
- 6 tablespoons cold unsalted butter, cut into small pieces
- 3/4 cup milk
- 2 tablespoons chopped chives
“Lobster Gravy:”
- 4 tablespoons unsalted butter
- 1/4 cup all-purpose flour
- 2 cups milk
- 1/2 cup chicken broth
- 1/2 cup heavy cream
- 1/4 teaspoon salt
- 1/4 teaspoon pepper
- 1/4 teaspoon cayenne pepper
- 4 ounces cooked lobster meat, chopped
- 1/2 cup diced carrots
- 1/2 cup diced onions
“Instructions:”
“To make the turmeric-poached eggs, fill a medium saucepan with the water and bring to a simmer over medium heat. Add the turmeric and salt and stir to combine. Carefully crack the eggs into the simmering water and cook for 3-4 minutes, or until the whites are set and the yolks are still soft. Using a slotted spoon, gently lift the eggs out of the water and set them aside on a paper towel to drain.”
“To make the chive biscuits, preheat the oven to 425°F (220°C). In a medium bowl, whisk together the flour, baking powder, and salt. Using a pastry cutter or your fingers, cut the butter into the flour mixture until it resembles coarse sand. Stir in the milk and chives until the dough comes together.”
“Drop the biscuit dough by rounded tablespoonfuls onto a baking sheet lined with parchment paper. Bake for 12-15 minutes, or until the biscuits are golden brown.”
“To make the lobster gravy, melt the butter in a medium saucepan over medium heat. Add the flour and cook, stirring constantly, for 2-3 minutes, or until the mixture is golden brown. Slowly whisk in the milk, chicken broth, and heavy cream, stirring constantly to prevent lumps from forming. Bring the mixture to a boil and then reduce the heat to low. Simmer for 5-7 minutes, or until the gravy has thickened.”
“In a separate pan, sauté the diced carrots and onions in a little bit of butter or oil until they are tender. Stir the cooked carrots and onions, salt, pepper, cayenne pepper, and lobster meat into the gravy.
To serve, place a chive biscuit on each plate and top with a turmeric-poached egg. Ladle the lobster gravy over the top and serve immediately.”
“I hope you enjoy this modified recipe! Let me know.”
Analysis Of The Text Generated
Let’s unpack what happened.
I tossed a rather eclectic mixture of requests and cooking issues at ChatGPT in my prompt.
First, I said that I only have 3 eggs, which was a point I wanted to make because the recipe says that 4 eggs are needed. I was curious to see how ChatGPT would help me out of a jam in that I don’t have four eggs available and must only make do with three eggs. I suppose I could rush over to the store and buy another egg, but that seems excessive.
You might have noticed that ChatGPT responded in the first line with this: “If you only have 3 eggs, you can still make the recipe as written, you’ll just have one fewer egg.” I suppose that I can be relieved that apparently, it doesn’t matter whether I have four eggs or three, the recipe is still good to go. One thing about the reply is that if this were a human speaking to me, I would almost interpret the response as being smarmy. Hey, dolt, if you only have three eggs then you have one less egg, get on with it. That’s not what ChatGPT indicated, and we must be careful not to over-interpret or assign anthropomorphic tendencies to the generative AI.
One aspect that I consider a bit of a mistake or oversight is that the recipe wasn’t adjusted to say that I was going to use only three eggs. The recipe still shows the need for four eggs. You could argue that it is the proper way to show the recipe since that’s what was originally required. I am suggesting that if the recipe had been modified to note that normally four eggs are used, yet in this case I said I only had three, it would be more convincing and impressive evidence that the generative AI was keeping up with the conversation. A wee bit disappointing.
Secondly, I told ChatGPT that I have to cut down on my salt intake (don’t we all). You might have observed that the generative AI responded by indicating this: “If you’d like to reduce the amount of salt in the recipe, you can reduce the amount of salt in the turmeric-poached eggs and the chive biscuits to 1/8 teaspoon each, and reduce the amount of salt in the lobster gravy to 1/4 teaspoon.”
That seems to generally fit my remarks about reducing the amount of salt in the recipe. Nicely, the recipe is now shown modified accordingly (which, again, illustrates why I believe the number of eggs ought to have also been changed; let’s be consistent on these things, shall we).
Thirdly, I asked ChatGPT to add some carrots and onions to the recipe. I got this in the response: “To add carrots and onions to the lobster gravy, you can sauté 1/2 cup diced carrots and 1/2 cup diced onions in a separate pan with a little bit of butter or oil until they are tender. Then, add the cooked carrots and onions to the lobster gravy when you stir in the salt, pepper, and cayenne pepper.”
The rest of the recipe was also changed accordingly.
You might be wondering if perhaps adding carrots and onions might undermine the recipe. In other words, a haughty chef might yell at you for wanting to add such things to their prized recipe.
The generative AI just went along with my preferences.
Just to let you know, I later tried some highly questionable additions to the recipe, items that didn’t make any sense and could potentially harm me, anticipating that ChatGPT might warn me about them; fortunately, the generative AI did provide cautions. I assume that in the case of carrots and onions, the computational considerations were that there wasn’t anything harmful about those additions and it came down to personal preference. A chef might have personal preferences that they would insist be followed, while in this instance ChatGPT was more amenable (you can get it to be less flexible if you want, by entering a prompt telling it to be more strict).
I am almost ready to get the cooking started.
Here are some further questions and points to ChatGPT about the recipe.
My entered prompt: “How long will it take to cook this meal? What kitchen utensils will I need? How many pots and pans are required? I am allergic to cayenne pepper. I don’t understand what it means to make the butter and flour look like coarse sand so explain what that means. My kids want to help me make this meal, what part of the meal should I have them aid in doing the cooking?”
Text result generated:
“It should take about 45 minutes to an hour to cook this meal, depending on how fast you work. You will need a medium saucepan for the turmeric-poached eggs, a baking sheet for the chive biscuits, and a medium saucepan for the lobster gravy. You will also need a separate pan for sautéing the carrots and onions, if you choose to include them. In total, you will need 4 pots and pans.”
“To make the butter and flour mixture look like coarse sand, you should use a pastry cutter or your fingers to cut the butter into the flour mixture until it resembles small, uneven crumbs. The mixture should not be smooth or clumpy, but rather have a rough, sandy texture.”
“If your kids want to help with the cooking, they could assist with mixing the ingredients for the chive biscuits, rolling the biscuit dough into balls, and placing the biscuit dough on the baking sheet. They could also help with chopping the chives and the carrots and onions, if you choose to include them in the recipe. Make sure to supervise your children closely and give them age-appropriate tasks.”
“I hope this helps! Let me know if you have any other questions.”
Analysis Of The Text Generated
I asked how long it would take to cook this meal. ChatGPT indicated it would take around 45 minutes to an hour. That answer seems reasonable to me. I suppose the preparation time might vary depending on how well-stocked your kitchen is and also whether you are in a hurry to eat.
I asked about kitchen utensils needed, and what pots and pans are required. The response seems to adequately cover the pots-and-pans facet of my query. In terms of kitchen utensils, I wanted to find out how many ladles, tongs, spatulas, whisks, and the like would be needed. This seemed to entirely escape the attention of ChatGPT. I find this oddly curious. Maybe the phrase “kitchen utensils” was not readily found, though even that should have elicited a response such as telling me that the AI doesn’t know what that means. I dare say it seems nearly unimaginable that “kitchen utensils” is not well-known since you can readily find that expression in zillions of online texts.
My overall point though on this apparent omission is that this does demonstrate that you cannot for sure rely upon generative AI. For some reason, it just missed that whole element of my prompt. On the good side, I could follow up to make sure that it gave me the needed information, so at least that is some solace. Though I would have to realize that this was missed (the onus being on me, which is not where the onus should go).
Perhaps even worse, ChatGPT also entirely missed my remark about cayenne pepper. I said this: “I am allergic to cayenne pepper.” If you said that to a human chef, I would hope and bet that the person would immediately come up with ways to avoid using cayenne pepper. The generative AI didn’t do this. It failed altogether to mention anything about the cayenne pepper. You might argue that I hadn’t explicitly told the generative AI to remove that particular ingredient. Sure, I didn’t say anything like that. I think that we would all agree that it was implied in my phrasing. I would have expected ChatGPT to at least acknowledge my concern, even if no change was proffered to the recipe.
That’s two conversational fluency strikes in this one response.
Moving on, the original recipe indicated making the butter concoction as though it looked like coarse sand. I wanted to see if I could get ChatGPT to do an elaboration, so I asked about it. I think this answer seemed relatively well elucidated: “To make the butter and flour mixture look like coarse sand, you should use a pastry cutter or your fingers to cut the butter into the flour mixture until it resembles small, uneven crumbs. The mixture should not be smooth or clumpy, but rather have a rough, sandy texture.”
Finally, I asked about having my kids help with the cooking process. This is a tricky question, somewhat. A human would presumably know that you don’t want children being actively involved in anything that might be somewhat endangering during the cooking effort. The likely approach would be to have kids partake in simpler and extremely safe tasks, thus they have the joy of participating and you aren’t worried about them being at undue risk.
This is what the generative AI indicated about the kids getting involved: “If your kids want to help with the cooking, they could assist with mixing the ingredients for the chive biscuits, rolling the biscuit dough into balls, and placing the biscuit dough on the baking sheet. They could also help with chopping the chives and the carrots and onions, if you choose to include them in the recipe. Make sure to supervise your children closely and give them age-appropriate tasks.”
The aspect that the instructions emphasize supervising the kids and giving them age-appropriate tasks is a helpful and vital tip. If the generated response did not contain such a caution, I suppose that some adults would rush ahead and follow whatever the generative AI said to do (I would hope that adults are wiser than this, but you never know). I am a bit queasy about the suggestion that the kids could chop up the chives, carrots, and onions since this would presumably require the use of sharp-edged kitchen utensils.
Let’s do one final prompt and then we can do a wrap-up of what we’ve gleaned from this example.
My entered prompt: “I mentioned that I am allergic to cayenne pepper but you failed to remove it or tell me what to do about the pepper being in the recipe. What do you advise? Also, make a list of the pots and pans needed indicating what each will be used for. Have you ever eaten this meal? Has anyone ever eaten this meal? How does the recipe compare to published recipes for the same thing?”
Text result generated:
“I apologize for the oversight. If you are allergic to cayenne pepper, you should omit it from the recipe. You can still make the lobster gravy without the cayenne pepper and it will still be flavorful.”
“Here is a list of the pots and pans needed for this recipe, indicating what each will be used for:”
- Medium saucepan for the turmeric-poached eggs
- Baking sheet for the chive biscuits
- Medium saucepan for the lobster gravy
- Separate pan for sautéing the carrots and onions (optional)
“As an artificial intelligence, I am unable to eat or taste food. However, I was designed to provide accurate and reliable information, and this recipe has been carefully developed and tested to ensure that it is delicious and of high quality. I do not have access to published recipes for this dish, so I cannot compare this recipe to others. However, I am confident that this recipe will produce a delicious and satisfying meal.”
“I hope this helps! Let me know if you have any other questions.”
Analysis Of The Text Generated
Okay, so I decided to be a bit abrasive by harshly telling the AI that it ignored my qualms about cayenne pepper. The AI responded by “apologizing” and then indicated that I can omit the ingredient from the recipe. I’ll say more about this part of the response in a moment.
I asked for a list of the pots and pans along with what each would be used for. Realize that one of the nifty features of some generative AI apps is that you can nearly always ask for aspects to be presented as lists. The AI usually tends to provide a narrative or essay by default. You can instead indicate that you want lists of things. In this case, the list seems to be on par with the earlier response from ChatGPT.
I wanted to gauge what would happen if I asked the AI app to tell me what it thought about the recipe in terms of how it tastes to the AI. You and I know of course that today’s AI cannot “taste” this meal in any semblance of how humans do. I wanted to make sure that the AI app didn’t attempt to pull a fast one on us. The response was relatively on target indicating that the AI app cannot eat or taste food.
One curiosity that got my attention was the bold claim that this recipe has been carefully developed and tested to ensure that it is supposedly delicious and of high quality. Just to let you know, I did further conversational prompts asking how this claim was being made. The responses were vague and unsatisfying. I would almost classify the specifics of that response as a made-up AI tale. In other words, if this is a unique recipe that has never seen the light of day, you cannot make an unqualified statement that the recipe is somehow beyond reproach. It was based on other recipes of a similar nature, but that doesn’t mean that this particular “new” recipe would be of identical quality to the others.
The response about other recipes really gets my goat. ChatGPT indicated as shown that: “I do not have access to published recipes for this dish, so I cannot compare this recipe to others. However, I am confident that this recipe will produce a delicious and satisfying meal.”
Let’s tackle some of these troubling aspects.
As mentioned earlier, ChatGPT was devised with a cutoff date of Internet-related data as of 2021. There are abundant recipes for this meal that exist in 2021 and prior dates. The wording of the response is somewhat deceptive in the sense that perhaps the implication is that the AI app isn’t accessing the Internet today and therefore cannot pull up a current recipe dated after 2021. Highly questionable.
Claiming that the AI is “confident” about the recipe is also highly deceptive. If the AI app has somehow compared the new recipe to old ones and computationally attempted to reach a mathematical conclusion that if those are delicious then this one is too, the AI ought to be devised to explain that aspect. Otherwise, the wording implies that the AI has somehow tasted the dish and can “personally” attest to the deliciousness. We already had the admission that the AI can’t do so.
One aspect of the wording of the generative AI responses that I find to be egregiously deceptive and inappropriate is the use of the word “I” and sometimes “my” in the generated responses. We usually associate the words “I” and “my” with a human speaker, per the connotations of being human. The AI makers are using that wording in the responses and getting away with a thinly veiled anthropomorphizing of the AI. Another aspect is that the AI “apologized” as though a human would apologize to someone, which again sends subtle signals that the AI is human-like (see my analysis of the dangers of programming AI to emit so-called apologies at the link here).
A person reading the responses tends to associate that the AI has a human-like propensity.
The AI makers try to counterargue that since the responses often also say that the AI is a language model or that it is AI, this clears up the matter. Nobody can get confused. The AI clearly states what it is. I meanwhile see this as speaking from both sides of the mouth. Using “I” and “my” absolutely isn’t necessary (the AI responses could easily be set up to answer more neutrally), yet the AI makers simultaneously point to the AI’s overt declarations that it is a machine. You can’t have it both ways.
I refer to this unsavory practice as anthropomorphizing by purposeful design.
Conclusion
In my dialogue with the AI app, I attempted to be somewhat conversational. I asked questions. I sought explanations. I requested changes be made to the recipe. And so on.
People who do a one-and-done with conversational AI are regrettably undershooting what the latest interactive AI can achieve. We are all better off if we test the limits of today’s AI. It will allow society to see how far things have come, and also reveal how far they still have to go.
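For readers who reach ChatGPT-style models through an API rather than the web interface, the conversational back-and-forth described throughout this column comes down to one piece of bookkeeping: every follow-up prompt must carry the full prior exchange, which is what lets the model stay "up with the conversation" about the recipe. The sketch below illustrates that pattern only; `fake_model_reply` is a hypothetical placeholder standing in for a real model call.

```python
# Minimal sketch of multi-turn conversation bookkeeping for a chat model.
# `fake_model_reply` is a hypothetical stand-in for a real API call; the
# point is that each follow-up resends the accumulated history, which is
# how the model can "remember" the recipe under discussion.

def fake_model_reply(messages):
    # Placeholder: a real implementation would send `messages` to a model.
    return f"(model reply to: {messages[-1]['content'][:40]})"

def ask(history, prompt):
    """Append the user's prompt, obtain a reply, and record both in history."""
    history.append({"role": "user", "content": prompt})
    reply = fake_model_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Start the conversation, then follow up just as the column's dialogue did.
history = [{"role": "system", "content": "You are a helpful cooking assistant."}]
ask(history, "Create a recipe for poached eggs on chive biscuits.")
ask(history, "I only have 3 eggs, and please reduce the salt.")

# Both turns and their replies now ride along with any further question.
print(len(history))  # 1 system + 2 user + 2 assistant messages = 5
```

The design point is simply that a one-and-done prompt discards this history, whereas a conversational exchange keeps feeding it back in.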
Do not let the force of habit allow you to fail to engage in a conversation with the latest generative AI. Shake free from your prior mental machinations about the earlier limits of conversational AI. Step up to the latest advances. I am not saying that this is the topmost level. You’ll want to keep stepping up as newer AI hits the streets and becomes available.
Warren Buffett famously warned us about the dangers of unremitting habits: “Chains of habit are too light to be felt until they are too heavy to be broken.” But we also need to keep in mind that habits serve a useful purpose at times, which Thomas Edison made clear in his sage line: “The successful person makes a habit of doing what the failing person doesn’t like to do.”
I suppose the next issue to consider is what happens when AI falls into the force of habit, and whether we are going to be able to successfully contend with that future conundrum.
Time will tell.
Source: https://www.forbes.com/sites/lanceeliot/2023/01/05/people-using-generative-ai-chatgpt-are-instinctively-making-this-ai-rookie-mistake-a-vexing-recipe-for-ai-ethics-and-ai-law/