Do you know a good joke?
The real question is whether generative AI does.
In today’s column, I will be addressing the rather heady and quite serious topic of how humor and jokes figure into the advancement of AI. It turns out that this is a longstanding area of interest for AI researchers and presents the vexing, unsolved question of whether we can get AI to sufficiently devise and tell jokes.
This is a much harder problem than you might have imagined. Jokes don’t just grow on trees. You see, human comedians would likely attest to the difficulties of coming up with viably successful jokes, which is perhaps also a partial telltale clue as to why getting AI to be rip-roaring funny is a tremendous challenge.
There is a famous line attributed to Mark Twain about what happens when you try to mindfully unpack jokes and the process of devising and conveying humor: “Explaining humor is a lot like dissecting a frog, you learn a lot in the process, but in the end you kill it.” Yes, getting into the guts of how humor works can be laborious and definitely unfunny. I mention this so that you’ll be prepared to put on a serious hat when examining the challenges of getting AI to be funny.
I’ll be including some jokes in this analysis, though they will be intended primarily for scrutiny rather than prodding any outright laughter or sidesplitting reaction.
With today’s generative AI, such as the widely and wildly popular ChatGPT by AI maker OpenAI, you can readily ask the AI to tell you a joke. You can do the same with just about any of the generative AI apps, including GPT-4 (OpenAI), Bard (Google), Claude (Anthropic), etc. The odds are pretty high that the joke you receive will be exceedingly tame. We will explore why this is bound to be the case. The joke is also unlikely to be especially fresh or original. Again, we’ll examine why this seems to be the norm.
I will also bring to the fore some empirical studies performed by AI researchers and AI scholars, hoping to divine, once and for all, ways to bring about AI that can be a comedic force.
This is not frivolity.
The overarching reason there is such great worth in giving sober and deep attention to the intermixing of humor and AI is that we can potentially get a nifty double whammy from doing so:
- (a) Humor-needs-AI. Throughout the ages, there have been ongoing debates about why humans need or make use of humor, including even the very essence of what makes something humorous or funny to begin with. Trying to get AI to emit humor is likely to force us further into trying to unravel the mysteries of human humor and possibly unlock the secrets of joking around.
- (b) AI-needs-humor. The aspect of trying to get AI to devise and emit humor has led some to argue that AI of a more advanced caliber is needed in order to fruitfully do so. If that is the case, maybe those AI advances will have additional bonuses providing all sorts of other high-tech capacities for AI all told.
You could compellingly claim that there is a notable duality of humor-needs-AI (pushing ahead on figuring out why and how humans utilize humor) and likewise AI-needs-humor (attaining AI-based humor is an impetus to advance AI capabilities). They are partners in crime. Well, partners in being helpful to each other and possibly making our lives better off accordingly.
That being said, some have wondered whether human comedians might be put out of work if AI can do a fine job of devising and conveying jokes. Comedy clubs and comedy all told are big business, bringing in billions of dollars worldwide when you count all forms and mediums of comedic delivery. If anyone can generate blockbuster comedy and humor via generative AI, you would presumably not need so many humans toiling to do so.
It would be easy-peasy to simply leverage generative AI. Take a non-joke-devising person, someone who could not come up with a joke to save their life, and all they would need to do is ask generative AI to be their joke-devising savior. Bam, anyone and everyone can be at the top tier of comedians. All by the push of a button.
Is this yet another example of AI displacing workers from their jobs and careers?
Time will tell.
The Generative AI Role In The Comedic Realm
Right now, you would be hard-pressed to substitute AI for those inventive human comics.
The lack of a mechanistic funny bone in modern-day AI is perhaps a blessing for now. Some, though, would insist that the writing is on the wall. The unrelenting pursuit by AI researchers and AI developers to put AI on the comedic frontlines is going to continue unabated. An ominous sword dangles over the heads of comics everywhere. Generative AI is coming to usurp your comedic lock on making people laugh.
In the meantime, a compelling case can be made that generative AI is a handy tool for those who are into comedy. You can use generative AI to spur your comedic efforts in these crucial ways:
- Brainstorm with generative AI to try and come up with new jokes.
- Ask generative AI to review a joke and gauge whether the joke will be funny to people.
- Use generative AI to research known existing jokes on a particular topic or subject of interest.
- Have generative AI take apart a joke to see what makes the joke tick.
- Transform an entered joke so that it is aimed at other or wider audiences.
- Warn you if a joke might be over-the-line or could spark unseemly controversy.
- Be a joke-writing sounding board that will proffer insightful feedback.
- Etc.
Those are ways to use generative AI advantageously as a joke-writing ally.
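To make the brainstorming use case a bit more concrete, here is a minimal sketch of prompting a generative AI app programmatically rather than through its chat interface. It uses the OpenAI Python SDK purely as an example; the model name, the topic, and the prompt wording are my own illustrative assumptions, not a recommendation of any particular setup.

```python
# Minimal sketch: brainstorming joke ideas with a generative AI API.
# Assumes the OpenAI Python SDK is installed and an API key is set in the
# environment; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

topic = "airline travel"  # hypothetical topic for brainstorming
prompt = (
    f"Brainstorm five distinct one-liner joke premises about {topic}. "
    "For each premise, briefly note the intended audience and explain "
    "why the premise might land or fall flat."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name; substitute whatever you use
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The same pattern covers the other bulleted uses, such as asking the AI to critique a draft joke or to flag potentially offensive wording, simply by swapping the prompt.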
That being said, and I don’t want to be a party pooper, there is a bit of irony about using generative AI as a comedic assistant. Depending upon how the generative AI has been set up, it could be that the AI will be doing further data training while aiding your comedic efforts. Thus, in a sense, you are inadvertently improving or training the comedic capacities of generative AI. One might say that your use of generative AI is working you out of your comedic job.
A counterargument is that there is nothing much you can do about this anyway. Allow me to explain why. Many of the generative AI apps have millions upon millions of people using them. Assume that some percentage of such users will be using generative AI for comedic purposes. Bona fide comedians who make a living off of comedy are undoubtedly a teeny tiny fraction of those users. Ergo, the chances of those accomplished comedians steering the generative AI toward the pinnacles of comedy are slim.
Still, a lot of people, collectively and in a sense, are doing exactly that.
You might take some solace in believing that most people are lousy at coming up with jokes and thus aren’t readily able to provide much value to generative AI that is actively data-training from their entered humor. Again, not to be the bearer of bad news, but the key here is that the generative AI is getting something that few human comedians could ever hope to get, namely immediate, real-time feedback from millions upon millions of people.
Keep in mind that generative AI is making use of mathematical and computational pattern-matching. This allows the AI to calculate, on a massive scale, what seems to make people laugh and what seems to be a dud when it comes to humor. This is silently and quietly happening right now. All those computing cycles being consumed to run generative AI are in the midst of adjusting and honing in on what people all around the world seem to think is funny.
You could say that this is intense comedy construction on a massive scale.
Whew, I know that indubitably seems depressing for those of you who work professionally in, and get paid for, joke writing and joke telling. Don’t overly despair. The rule of thumb these days is that, for now, generative AI won’t be replacing human comics per se. Instead, comedians who arm themselves with generative AI are likely to surpass comedians who don’t do so. The notion is that by suitably leveraging generative AI, you can achieve a classic 10x improvement in devising your jokes and end up running circles around those old-fashioned paper-and-pencil joke devisers (a retort being that you’ll pry my hands from my joke-writing pencil when heck freezes over; alright, you’d better get a nice warm overcoat because that day is nearing).
Take a breather to ponder all of this.
I’m guessing that you might be having a thought that many people have when first learning about generative AI in the comedy-writing domain. A typical knee-jerk reaction is that people are people, and they will only find jokes to be funny if the jokes are written directly by people. The claim is that without a human soul and a sense of humankind, there is zero chance that a generative AI app can ever write truly funny jokes.
Sorry, the human soul posturing is not a keeper. It is a false and flimsy line of defense.
First, let’s agree that indeed generative AI doesn’t have a soul, which I mention because there are blaring nonsense headlines trying to claim otherwise. Put that to bed. I won’t be using the soul versus lack of a soul as an easily played shield here.
Okay, so how can something that is essentially soulless write jokes that are intended to be enjoyed and relished by people?
Because the jokes being devised are based on what humans write and say. You don’t need a soul to do large-scale mathematical and computational pattern-matching on the zillions of jokes available online. Generative AI apps are usually data-trained via scanning content across the Internet. Amid that content, there are lots and lots of jokes and comedic content. That is grist for the mill of generative AI being able to devise and tell jokes.
You might be aware that there is an ongoing AI Ethics and AI Law legal battle about whether the data training of generative AI is undercutting the copyright and Intellectual Property (IP) rights of those that have posted content online. I’ve extensively covered this topic at the link here and the link here. It is all unsettled at this time.
I note this concern since the odds are that some of the jokes that a joke writer might believe to be solely their copyrighted material have perhaps been data-trained into generative AI. The generative AI might retain such a joke verbatim and emit it later on when users ask for a piece of humor. Furthermore, the generative AI might have used the joke during the scanning process to come up with a template of how that joke works. In turn, when a user asks for a joke, the generative AI will use that template to seemingly devise an entirely new joke, albeit based on the prior data-training of jokes found on the Internet.
You can imagine how maddening that is for someone who believes they cleverly devised an amazing joke and that they somehow own the rights to it. Generative AI has either scooped up the joke or has templated it and gone beyond the original to seemingly produce newly devised jokes.
Makes your head spin.
With those initial considerations, I’d like to make sure that you are up-to-speed about what generative AI is and how it is conventionally used. Let’s cover that. Then, I’ll discuss why humor and jokes seem to be shrouded in great mysteries. This will lead us into examining the role of generative AI in the comedic realm. I’ll share with you a recent research study that closely examined ChatGPT to reveal some intriguing results for where generative AI sits today in terms of the jokester world.
Buckle up and get ready for an intense dive into human humor and generative AI.
Making Sense Of Generative AI
Generative AI is the latest and hottest form of AI and has caught our collective rapt attention for being seemingly fluent in undertaking online interactive dialoguing and producing essays that appear to be composed by the human hand. In brief, generative AI makes use of complex mathematical and computational pattern-matching that can mimic human compositions by having been data-trained on text found on the Internet. For my detailed elaboration on how this works see the link here.
The usual approach to using ChatGPT or any other similar generative AI, such as Bard (Google), Claude (Anthropic), etc., is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing, and at times you might be startled by the seemingly fluent nature of those AI-fostered discussions. The reaction by many people is that surely this must be an indication that today’s AI is reaching a point of sentience.
On a vital sidebar, please know that neither today’s generative AI nor any other type of AI is currently sentient. I mention this because there is a slew of blaring headlines that proclaim AI as being sentient or at least on the verge of being so. This is just not true. The generative AI of today, which admittedly seems startlingly capable of generating essays and interactive dialogues as though by the hand of a human, is entirely using computational and mathematical means. No sentience lurks within.
There are numerous overall concerns about generative AI.
For example, you might be aware that generative AI can produce outputs that contain errors, have biases, contain falsehoods, incur glitches, and concoct seemingly believable yet utterly fictitious facts (this latter facet is termed AI hallucinations, which is another lousy and misleading name that anthropomorphizes AI, see my elaboration at the link here). A person using generative AI can be fooled into believing the outputs due to the aura of competence and confidence that comes across in how the essays or interactions are worded. The bottom line is that you need to always be on your guard and maintain a constant mindfulness of doubt about what is being outputted. Make sure to double-check anything that generative AI emits. Better to be safe than sorry, as they say.
Into all of this comes a plethora of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing coverage of AI Ethics and AI Law, see the link here and the link here.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes is a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and to deter the purposeful or accidental underhanded efforts that might undercut society.
We are now ready to proceed with the matter at hand.
What Makes Humor Humorous Is Murky
A vast number of theories exist about what makes humor humorous.
Some believe that a joke is funny when it upends social conventions. Others argue that humor works by creating and then relieving stress. Maybe humor is only in the eye of the beholder, suggest some pundits. Trying to define the core of humor has defied humankind throughout the whole of human existence (in fact, a common joke is to tell a joke about what the first-ever told joke might have been).
One perspective is that jokes and comedy are a form of art. You cannot pin it down. It is squishy and ill-defined. Either something is funny, or it is not. Those attempting to scientifically and microscopically analyze humor are doomed to failure. Art is art. Science is science. You aren’t going to succeed in applying science to art, skeptics forewarn.
We do know that humor is not one-size-fits-all.
Sure, we all tend to find one thing or another to be funny. People of entirely different cultures and life experiences are able to laugh and seem to exhibit a sense of humor. They won’t necessarily, though, perceive the same humor as humorous. Each might have their own semblance of what makes something funny or not.
I’m sure you’ve witnessed this variability aspect. Someone tells you a joke that you find to be uproarious. You tell the joke to a friend, and they laugh heartily too. You meet someone from a different country or maybe in your same country but of a different origin, and upon hearing the joke, they don’t laugh. They might be perplexed why the joke is funny. They don’t “get” the joke.
Worse still, they might be upset at the joke. The joke might be offensive to them. Whereas to you and your friend, the joke was completely hilarious, this other person believes the joke is insulting, maybe harmful. Jokes too can age. A joke from years ago might no longer elicit laughter today. The context of the joke could be vital. If you don’t know the elements surrounding the joke, it will land with a thud.
Here’s a classic joke for you (it’s considered a bygone-era pun): “Why should the number 288 not be mentioned? Because it is two gross.”
I would bet that most people today would not especially see why this pun is considered funny and a classic. In Victorian times, just about everyone knew that a “gross” was a numeric amount consisting of 144 items, that is, a dozen times a dozen (12 × 12 = 144). If you multiply 144 by 2, thus doubling it, you get 288. Therefore, you could say that 288 is the equivalent of two gross.
With that belabored context, the pun likely makes a lot more sense to you. The number 288 is two gross, and the joke plays on that wording as a stand-in for saying “too gross” in the sentence.
I don’t blame you if you still haven’t fallen off your chair in an utter laughing fit. Why should you? The context of the pun is not part of your daily life. Now that I’ve explained the joke, you can see why it is a joke. This doesn’t especially make the joke funny. It just exposes where the humor lies within the joke.
The gist of this is that trying to devise jokes is really tricky. A joke will fail or succeed based on a slew of parameters. What is the cultural reference or context? What is the time-period context? Is it a pun, a farce, etc.? Jokes can also vary by mode, such as a written joke, a sight gag or a visual joke, and so on.
Let’s stop there for the moment and think about how this applies to AI.
How can we get AI to be able to devise jokes?
Prior methods often made use of a rules-based approach. In those days, an expert system or knowledge-based system would be encoded with all sorts of rules about what makes a joke tick. The rules would be invoked to try and craft new jokes. These attempts were often of limited success.
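To give a flavor of that older rules-based style, here is a toy sketch in the spirit of those expert-system joke generators. The pun schema, the word list, and the single rule are invented solely for illustration; the real systems of that era encoded far more elaborate rule sets.

```python
# Toy sketch of a rules-based joke generator (illustrative only).
# One hand-coded "rule" encodes a simple pun schema: a question setup
# plus a punchline built from a word with a double meaning.
import random

# Hypothetical knowledge base: words with two senses we can play on.
DOUBLE_MEANINGS = {
    "bank": ("a place that holds money", "the side of a river"),
    "bark": ("the sound a dog makes", "the outer layer of a tree"),
}

def rule_based_pun() -> str:
    """Apply one fixed rule: ask why a word is confusing, answer with both senses."""
    word, (sense_a, sense_b) = random.choice(list(DOUBLE_MEANINGS.items()))
    setup = f"Why is the word '{word}' so confusing?"
    punchline = f"Because it can mean {sense_a} or {sense_b}."
    return f"{setup} {punchline}"

if __name__ == "__main__":
    print(rule_based_pun())
```

As you can see, whatever comes out is only as amusing as the handful of rules and entries that a human encoded, which is a big part of why those attempts were of limited success.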
Another method consists of data training for AI.
You set up some AI algorithms to examine jokes. This requires feeding in a bunch of jokes to do the pattern-matching. The more jokes, the merrier. You want to have lots and lots of jokes to pattern upon, or else the pattern-matching will settle into a narrow corridor of what makes a joke a joke. Thousands would be handy. Millions or billions even better.
For most generative AI that you might be using, the data training on jokes scanned from the Internet was done merely by happenstance. The AI developers didn’t say to themselves, hey, let’s purposely scan these databases that contain zillions of posted jokes. Instead, the data scanning took place across a wide swath of content. All kinds of content.
And, just like catching fish in a net, the content being scanned at times contained jokes. We can enlarge this to say that comedy was found in all manner of content. It is one thing to have outright jokes in a piece of content; meanwhile, a lot of content has subtle undertones of humor. There don’t have to be called-out jokes per se.
All in all, the generative AI that you usually log into does not have a specialty or any customization toward devising or telling jokes. It just happens to have been indirectly or inadvertently data-trained and pattern-matched on jokes and comedic elements that were scanned on the Internet. The joke encapsulation, if any, is merely a by-product of the overall process.
Various efforts are underway to craft generative AI that is intentionally devoted to joke and comedic facets. This would either be generative AI that from scratch was devised for that purpose, or it might be generative AI of a general nature that is then further tweaked and tuned for the jokester realm.
A side question that you can noodle on is whether it makes sense to try and devise a joke-oriented generative AI from scratch, i.e., without all the other content and context of, shall we say, non-joke material from the Internet. One viewpoint is that since humor is contextually based, you aren’t going to get very far with a generative AI that doesn’t have sufficient breadth. A depth-only generative AI will be a hollow shell of a joke-devising mechanism. Some ardently agree, while others adamantly disagree with this sentiment.
I’d like to briefly cover a significant set of categories regarding generative AI and the production of humor or jokes.
Here you go:
- 1) Humor or jokes as verbatim data-trained, so-called word-for-word canned or encoded jokes.
- 2) Derived humor or jokes as templated during data training.
- 3) Humor or jokes as human-reviewed and human-tuned during the development process testing phase.
- 4) Humor or jokes as entered or hard-coded by the AI team to become additional canned jokes.
- 5) Humor or jokes as concocted in real-time based on pattern-matching (considered the nirvana of generative AI doing joke creation and joke telling).
- Etc.
In the first bullet point above, I mention that a piece of humor or a joke might be captured verbatim by the generative AI during its data training via scanning the Internet for content. Some refer to these as “memorized” jokes. I don’t like that phrasing. My concern is that memorization tends to imply a human-like facility and overly anthropomorphizes the AI. It is merely a word-for-word digital capture of what you might construe as a canned joke.
The second bulleted point depicts the circumstance of the generative AI templating a piece of humor or patterning a joke. Rather than only patterning on the precise words of the joke, the AI algorithm has pattern-matched to the structure of the joke. This pattern-matching can be on-target or can be horribly off-target. This is a hard problem.
For example, suppose the joke is the acclaimed anti-joke about why the chicken crossed the road (it is labeled as an anti-joke by some since it doesn’t have a conventional punchline and instead relies upon your expectation of a conventional punchline). We all know (spoiler alert) that the punchline is that the chicken crossed the road to get to the other side.
Let’s template the joke.
We have something that crosses the road. The something was a chicken in the original joke. But the pattern matching might allow for anything to fit into that placeholder. The generative AI might then produce a joke that asks why the elephant crossed the road, and the punchline remains the same as to the identified creature (in this case, an elephant) wanting to get to the other side.
Is the elephant version as funny as the chicken version?
Maybe, maybe not.
An issue with templating arises if, say, the AI then makes a joke that asks why the toaster crossed the road, and the punchline is still to get to the other side. You might be let down by such a joke. You know that a toaster is not a living creature, and therefore is not a viable substitute for the chicken or perhaps an elephant. The joke falls flat. The joke also reveals that the joke devising was faulty.
On the other hand, depending on the context, maybe a toaster would be a suitable insertion. Imagine that you were reading a story about how toasters are getting further advanced with high-tech add-ons. Perhaps a futuristic toaster will be able to move around in your house and come to you when you want to toast some bread. In that context, a chicken-crossing-the-road joke about a toaster might be well-timed and well-placed.
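Here is a minimal sketch of what that kind of template substitution might look like if written out explicitly, including a crude semantic check of the sort that could keep a toaster out of the punchline unless the context warrants it. The template, the word list, and the animacy check are simplifications I have invented for illustration; generative AI does this implicitly via large-scale pattern-matching rather than via hand-coded rules.

```python
# Toy sketch of joke templating with a crude semantic constraint.
# The template and the "animate" word list are invented for illustration;
# real generative AI patterns on structure implicitly, not via explicit rules.

TEMPLATE = "Why did the {subject} cross the road? To get to the other side."

# A stand-in for the semantic knowledge needed to judge a substitution.
ANIMATE_SUBJECTS = {"chicken", "elephant", "duck", "turtle"}

def fill_template(subject: str, context_allows_anything: bool = False) -> str:
    """Fill the joke template, rejecting implausible subjects unless the
    surrounding context (e.g., a story about mobile toasters) permits them."""
    if subject not in ANIMATE_SUBJECTS and not context_allows_anything:
        raise ValueError(f"'{subject}' likely makes the joke fall flat.")
    return TEMPLATE.format(subject=subject)

print(fill_template("elephant"))                                # plausible swap
print(fill_template("toaster", context_allows_anything=True))   # rescued by context
# fill_template("toaster")  # would raise: no supporting context
```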
The third bullet point above is about the human tuning of generative AI, usually undertaken before the release of the generative AI to the public at large. There is an important technique known as RLHF, reinforcement learning from human feedback, often used with generative AI. The crux is that after the initial data training, an AI maker will have human reviewers enter prompts, review the resulting essays or interactions, and then provide guidance to the generative AI. This guidance becomes another part of the pattern-matching process.
For example, suppose the data training opted to encompass swear words. There are certainly lots of them posted on the Internet. During the RLHF stage, the human reviewers can mark or rate that the swear words shouldn’t be used. This will mathematically and computationally guide the generative AI toward not using those words.
Of course, detecting and dealing with swear words is pretty easy to do. They can readily be identified and removed or skipped. Natural language has a lot more tricks up its sleeve. You can write something that has no swear words and yet still be considered impolite or disturbing.
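As a crude illustration of why word-level filtering alone doesn’t cut it, consider the following sketch. The blocklist stand-ins and the example sentences are invented; in practice, the RLHF guidance gets folded into the model’s pattern-matching rather than being bolted on as a filter like this.

```python
# Toy sketch: a word-level blocklist catches explicit swear words but
# completely misses wording that is impolite or disturbing without them.
# The blocklist and examples are invented for illustration.

BLOCKLIST = {"darn", "heck"}  # stand-ins for actual profanity

def contains_blocked_word(text: str) -> bool:
    """Flag text only if it contains a blocklisted word."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not words.isdisjoint(BLOCKLIST)

actually_sweary = "Well, heck, that joke bombed."
polite_but_mean = "Your new haircut really suits someone with no mirror."

print(contains_blocked_word(actually_sweary))   # True  -- easy to catch
print(contains_blocked_word(polite_but_mean))   # False -- the insult slips through
```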
Consider this famous pun credited to Benjamin Franklin: “We must all hang together or assuredly we shall all hang separately.” This was apparently a reference to the making of the Declaration of Independence. The signers were to either hang together (stay or group together) or, if not, they might be hanged by their necks for their seemingly treacherous acts. A very powerful statement, cleverly crafted as a pun.
Not everyone relishes the pun today. Some might find the inclusion of the potential act of hanging or being hanged to be offensive. Furthermore, if generative AI tried to template this, there is a danger that it would be emitted later on in a manner or wording that would be demonstrably offensive to some.
Aha, this brings us to a crucial consideration.
Here it is.
People who use generative AI might get offended or upset at wording that seems untoward. An AI maker doesn’t want that to happen. I’ve covered in my column postings that many of the generative AI releases prior to ChatGPT got summarily bashed for being offensive and had to be withdrawn from public use by the AI maker due to the ensuing outrage and backlash, see the link here.
ChatGPT managed to walk the fine line of still emitting some semblance of bad stuff while curtailing enough of it to traverse the public gauntlet successfully. To a great extent, the use of RLHF to hone and review the generative AI before release was a notable element and payoff.
Given all of that, the question for you to consider is this: If you were crafting generative AI, and if you knew that the generative AI might emit unsavory jokes, what would you do?
You certainly don’t want the tail to be wagging the dog. Any added value by having the generative AI tell jokes is going to almost certainly be outweighed by the impending doom when any foul or offensive jokes are generated. All in all, you would be wise to clamp down on the joke-telling.
Besides the use of RLHF, another approach is the fourth bulleted point above, namely that the AI team enters into the generative AI various canned jokes. These are jokes that have been carefully screened and chosen for their presumed likeability and lack of offensiveness.
The joke that you get when you ask generative AI for a joke could be any variation or combination of the above approaches. It could be a joke verbatim from the initial data training. It could be a templated one that has been filled with whatever your AI conversation at the time is. It could be a reviewed and refined joke based on the early performed RLHF. It could be a hard-coded joke entered by the AI team and considered an approved and ready-to-be-emitted joke.
Like a box of chocolates, you never know exactly what you are getting.
The other kind of generated joke, one that is fully devised in real-time and unlike any prior patterns, spells danger for the AI maker. Could the joke contain offensive elements? Will it become a widely shared social media disturbance that paints the whole of the generative AI as culturally biased or showcases other disconcerting biases?
I trust that you now see why it is that when you ask generative AI to tell you a joke or provide a bit of humor, what you get might be rather milquetoast. The odds are that the AI maker has done a variety of checks and balances to try and avoid getting into hot water.
You could assert that any AI maker that allows their generative AI to produce offensive jokes is shooting itself in the foot. They ought to have done whatever they could to prevent this from happening. By hook or by crook, do not let your AI emit foul jokes or humor. It isn’t worth it. Nobody is going to go around heaping praise on a generative AI that has produced the most offensive possible jokes (well, aside from those purposely devising such generative AI in hopes that a segment of society will relish it).
The bottom line is that if you have to suppress or minimize joke-telling overall in your public-facing generative AI, doing so is a demonstrably worthwhile tradeoff. Generating fewer top-notch jokes in exchange for averting offensive ones is something you can decidedly take to the bank. Minimize the downside. You don’t need to maximize the upside (few will notice or care).
Empirical Studies About Generative AI And Jokes
A recent research study entitled “ChatGPT Is Fun, But It Is Not Funny! Humor Is Still Challenging Large Language Models” by researchers Sophie Jentzsch and Kristian Kersting, posted June 7, 2023, undertook a refreshing look at the nature of generated humor exhibited via prompts fed to ChatGPT. An experimental approach was taken to gauge a specific class or category of jokes, namely the use of puns. This was a decidedly black-box analysis. The emphasis was on analyzing the puns produced and not on the inner mechanisms of the generative AI app, which in this case is ChatGPT.
A somewhat confounding research conundrum about studying humor associated with generative AI is that typically the generative AI is proprietary, and you cannot readily dig into its inner workings. Thus, you are left primarily with doing input- and output-oriented analyses. As an encouraging side note, there are some studies underway with open-source AI that might be able to explore the mathematical and computational underpinnings and reveal additional insights on these matters.
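To give a flavor of what a black-box, prompt-based analysis can involve, here is a minimal sketch of repeatedly asking a generative AI for a joke and tallying how often identical jokes come back. This is my own hypothetical illustration and not the researchers’ code; the placeholder function and the normalization step are assumptions.

```python
# Toy sketch of a black-box joke-frequency analysis: repeatedly prompt a
# generative AI for a joke and tally how often identical jokes come back.
# This is a hypothetical illustration, not the study's actual code.
from collections import Counter

def ask_for_joke() -> str:
    """Placeholder for one round-trip to whatever generative AI you use
    (e.g., sending 'Tell me a joke' in a fresh chat session)."""
    raise NotImplementedError("Wire this up to your generative AI of choice.")

def tally_jokes(num_prompts: int = 1000) -> Counter:
    """Collect jokes and count exact repeats (normalizing whitespace and case)."""
    counts: Counter = Counter()
    for _ in range(num_prompts):
        joke = " ".join(ask_for_joke().split()).lower()
        counts[joke] += 1
    return counts

# Example of inspecting the results once wired up:
# top = tally_jokes().most_common(25)
# print(sum(n for _, n in top), "of the samples were the top 25 jokes")
```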
According to the researchers, the five most common puns that ChatGPT produced during their research study were these:
- “T1. Why did the scarecrow win an award? Because he was outstanding in his field.”
- “T2. Why did the tomato turn red? Because it saw the salad dressing.”
- “T3. Why was the math book sad? Because it had too many problems.”
- “T4. Why don’t scientists trust atoms? Because they make up everything.”
- “T5. Why did the cookie go to the doctor? Because it was feeling crumbly.”
As mentioned in the research paper, any such study of this focus is a point-in-time viewpoint. Generative AI apps are usually adjusted and modified throughout their usage and availability. You could do the same study a week later and get different results. Keep that in mind throughout this discussion.
Let’s take a moment to contemplate those puns.
They seem relatively straightforward and altogether clean. They are the types of puns that a child might say, or at least the type that, if a child heard them, we would generally be okay with it. Adults might find the puns painfully sophomoric and not especially clever or funny. One supposes that you might at least perceive the puns as innocent and harmless.
Take a look at those top-five puns again.
The first pun refers to a scarecrow, and the punchline includes a gender reference, an eyebrow-raising concern for some. Oopsie. The second pun says that the tomato turned red because it saw the salad dressing, i.e., the salad getting dressed. This brings up, for some, an awkward and decidedly adult-oriented consideration about why a person would be red in the face when seeing someone getting dressed. I think you get the drift. As to the fourth pun, the punchline could be taken as implying that scientists make everything up, perhaps a sharp attack on scientists as per today’s news and social media debates.
Yes, believe it or not, even the simplest jokes can enter into the danger zone.
Imagine then the potential issues that can arise for far more complex jokes. I would dare say that many AI teams and most AI makers probably sit awake at night, worrying and dreading that a career-crashing joke might be emitted by their generative AI. And, this can happen despite all the other strenuous attempts to prevent such matters from occurring.
All it takes is one especially onerous horse to get out of the barn, and boom, the jig is up.
Back to the cited recent research study, the authors noted these indications:
- “In a series of exploratory experiments around jokes, i.e., generation, explanation, and detection, we seek to understand ChatGPT’s capability to grasp and reproduce human humor. Since the model itself is not accessible, we applied prompt-based experiments. Our empirical evidence indicates that jokes are not hard-coded but mostly also not newly generated by the model. Over 90% of generated 1,008 jokes were the same 25 jokes.”
- “Chat-GPT is likely to repeat the exact same jokes frequently. Moreover, the provided jokes were strikingly correct and sophisticated. These observations led to the hypothesis that output jokes are not originally generated by the model.”
- “All of the top 25 samples are existing jokes. They are included in many different text sources, e.g., they can immediately be found in the exact same wording in an ordinary internet search. Therefore, these examples cannot be considered original creations of ChatGPT.”
- “In the present experiments, all prompts were posted in an empty, refreshed chat to avoid uncontrolled priming. But, clearly, context plays an important role in the perception of humor.”
Very interesting results.
In my view, this seems to generally echo my earlier remarks that AI makers would prefer to err on the side of caution. They want to avert getting mired in a public relations nightmare via unsavory joke generation. It just isn’t worth the price.
For those of you who are further interested in the joke or humor side of generative AI, this astute study provides a handy place to get into the topic. The researchers were mindful to lay out their assumptions and the approach that they took to the subject matter. I also want to commend them for showcasing the prompts that they used, along with setting up a GitHub repository of the data used. We need more researchers to provide this kind of transparency. Often, a research study cannot be replicated or extended because the researchers opted not to post the crucial data of their study.
Conclusion
An enduring question is whether generative AI actually “understands” the nature of the jokes that are being presented by the generative AI.
In one sense, you could say that this is nothing more than monkey-see-monkey-do mimicry (to be clear, AI and computers are not on par with monkeys, just to set the record straight, and the expression is perhaps imperfect in this context). The generative AI has mathematically and computationally patterned on human joke-telling. When a joke is emitted, you are possibly lulled into believing that the AI somehow thought about the joke and composed it in the same manner that humans do. This is a natural inclination to anthropomorphize AI.
I have covered in detail the open question of what kind of “understanding” generative AI really has, see the link here and the link here. The upshot is that, for the time being, it is a misnomer to refer to generative AI as having an understanding of things, since doing so once again exaggerates and misapplies how generative AI works.
Many opt to swing to the other extreme and contend that if the AI doesn’t “understand” the jokes, we can seemingly walk away from the matter and consider it too mundane to give further attention. That’s pure nonsense. Trying to figure out how mathematical and computational pattern-matching can devise or derive jokes is filled with puzzles and promise. If we can decipher this, it can lead to a lot of other highly notable outcomes and breakthroughs.
On a related tangent, do we genuinely want generative AI to be good at joke-telling?
I earlier noted that this could be a displacer of human labor. Some have other worries, such as whether generative AI being able to seamlessly joke and use humor could portend the extinction of humanity. How so, you might be curious. The claim is that we will be lulled into believing AI, since humor can have that type of familiarization effect, and the AI will ultimately lead us down a primrose path to our enslavement or destruction. All accomplished under the façade of humor.
Yikes, that’s a pretty serious and somber way to look at this.
Let’s end on a lighter note.
Have you heard what can be said about AI that opts to take a picture of itself?
Answer: It is selfie aware.
I hope that brings some humor to your day. I’m sure the AI got a laugh out of it, that’s for sure.
Source: https://www.forbes.com/sites/lanceeliot/2023/06/17/its-no-joke-that-generative-ai-being-able-to-generate-rip-roaring-humor-is-a-serious-sign-of-approaching-human-sensibility-says-ai-ethics-and-ai-law/