I’m guessing that by now you’ve heard about, or perhaps seen, the blaring news headlines and social media postings touting the hottest and latest use of AI: an application known as ChatGPT that generates seemingly human-written, text-oriented narratives.
If you haven’t heard or read about this new AI app, don’t worry, I’ll be bringing you up to speed.
For those of you that are already aware of ChatGPT, you might find keen interest in some of the insider scoops herein about what it does, how it works, and what to watch out for. All in all, nearly anyone that cares about the future is inevitably going to want to discover why everyone is agog over this AI application.
To clarify, rampant predictions are that this type of AI is going to change lives, including the lives of those that don’t yet know anything about ChatGPT or any other such AI capabilities. As I will momentarily explain, these AI apps are going to have rather widespread repercussions in ways that we are only starting to anticipate.
Get yourself ready for the roller coaster ride known as Generative AI.
I will start with some key background about generative AI and use the simplest scenario which involves AI that generates art. After taking you through that foundation, we’ll jump into generative AI that generates text-oriented narratives.
For my ongoing and extensive coverage of AI overall, including AI Ethics and AI Law, see the link here and the link here, just to name a few.
Generative AI That Produces Generated Art
I refer to this type or style of AI as being generative, which is the AI aficionado terminology used to describe AI that generates outputs such as text, images, video, and the like.
You might have noticed earlier this year that there was a big spate of excitement about being able to generate artsy images by entering a line or two of text. The idea is pretty simple. You make use of an AI app that allows you to enter some text of your choosing. For example, you might type in that you want to see what a frog with a hat on top of a chimney would look like. The AI app then parses your words and tries to generate an image that generally matches the words that you specified. People have greatly enjoyed generating all manner of images. Social media became clogged with them for a while.
How does generative AI do the generation aspects?
In the case of the text-to-art style of generative AI, a slew of online art was pre-scanned via computer algorithms and the elements of the scanned art were computationally analyzed for the components involved. Envision an online picture that has a frog in it. Imagine another separate image that has a chimney in it. Yet another picture has a hat in it. These components are identified computationally, sometimes done without human assistance and sometimes via human guidance, and then a kind of mathematical network is formulated.
When you come along later and ask to have an artwork generated that has a frog with a hat on a chimney, the AI app uses the mathematical network to find and piece together those elements. The resultant art image might or might not come out the way that you hoped. Perhaps the frog is an ugly looking one. The hat might be a large stovepipe hat but you were wishing for a slimmer derby-style hat. Meanwhile, the frog image is standing on the chimney though you were seeking to have the frog seated instead.
The nifty thing about these kinds of AI apps is that they usually allow you to repeat your request and also add additional specifications if you wish to do so. Thus, you might repeat your request and indicate you want a beautiful frog with a derby hat that is sitting on a chimney. Voila, the newly generated image might be closer to what you wanted.
Some have wondered whether the AI is merely regurgitating precisely whatever it was trained on. The answer is no (usually). The image of a frog that the AI showcases for your request is not necessarily an exact duplicate of an akin image that was in the training set. Most of these generative AI apps are set up to generalize whatever images they originally find. Think of it this way. Suppose you collected a thousand images of frogs. You might opt to gradually figure out what a frog seems to look like, mushing together a thousand images that you found. As such, the frog that you end up drawing is not necessarily precisely like the ones you used for training purposes.
That being said, there is a chance that the AI algorithm might not do as much generalizing as might be assumed. If there are unique training images and no others of a like kind, it could be that the AI “generalizes” rather close to the only specific instance that it received. In that case, the algorithm’s later attempt to produce a requested image of that nature could look notably similar to whatever was in the training set.
I’ll pause for a moment to proffer some thoughts related to AI Ethics and AI Law.
As mentioned, if the generative AI is trained on the Internet, this means that whatever has been posted publicly on the Internet is possibly going to be utilized by the AI algorithm. Suppose then that you have a nifty piece of art that you labored on and believe that you own the rights to the art piece. You post a picture of it online. Anyone that wants to use your artwork is supposed to come to you and pay you a fee for that usage.
You might already be sensing where this is headed.
Hang in there for the dour news.
So, a generative AI app that is getting trained via broadly examining content on the Internet detects your wonderous piece of art. The image of your artwork gets absorbed into the AI app. Characteristics of your artistry are now being mathematically combined with other scanned artworks. Upon being asked to generate a piece of art, the AI might leverage your piece when composing a newly generated art image. Those people garnering the art might not realize that in a sense the art has your particular fingerprints all over it, due to the AI algorithm having imprinted somewhat on your masterpiece.
There is also a chance that if your artwork was extraordinarily unique, the AI app might reuse it in a manner that more fully showcases your artistry. As such, sometimes your artwork might be barely recognizable in a newly generated AI artwork, while in other instances the generated artwork could be nearly a spitting image of what you divined.
It is timely then to bring AI Ethics into this scenario.
Is it ethically proper or appropriate that the generative AI has generated artwork that has similarities to your art?
Some say yes, and some say no.
The yes camp, believing that this is ethically perfectly fine, would perhaps argue that since you posted your artwork online, it is open to whomever or whatever wants to copy it. Also, they might claim that the new art isn’t a precise copy of your work. Thus, you cannot complain. If we somehow stopped all reuse of existing art we would never have any kind of new art to look at. Plus, we could presumably get into a heated debate about whether or not your particular artwork was being copied or exploited – it could be some other artwork that you didn’t even know existed and was in fact the underlying source.
The no camp would strongly insist that this is abundantly unethical. No two ways about it. They would argue that you are getting ripped off. Just because your artwork is posted online doesn’t mean that anyone can come along and freely copy it. Perhaps you posted the art with a stern warning to not copy it. Meanwhile, the AI came along and stripped out the art and completely skipped past the warnings. Outrageous! And the excuse that the AI algorithm has generalized and isn’t doing the nitty gritty of precise copying seems like one of those fake excuses. It figured out how to exploit your artistry and this is a sham and a shame.
What about the legal aspects of this generative AI?
There is a lot of handwringing about the legal particulars of generative AI. Do you look to federal laws about Intellectual Property (IP) rights? Are those stringent enough to apply? What about when the generative AI is cutting across international borders to collect the training set? Does the artwork generated by the AI fit into the various exclusionary categories associated with IP rights? And so on.
Some believe that we need new AI-related laws to contend specifically with these kinds of generative AI situations. Rather than trying to shoehorn existing laws, it might be cleaner and easier to construct new laws. Also, even if existing laws apply, the costs and delays in trying to bring legal action can be enormous and inhibit your ability to press ahead when you believe you have been unfairly and illegally harmed. For my coverage of these topics, see the link here.
I’ll add an additional twist to these AI Ethics and AI Law considerations.
Who owns the rights to the generated output of the AI?
You might say that the humans that developed the AI should own those rights. Not everyone concurs with such a contention. You might say that AI owns those rights, but this is confounded by the fact that we generally do not recognize AI as being able to possess such rights. Until we figure out whether AI is going to have legal personhood, things are unsure on this front, see my analysis at the link here.
I trust that you have a semblance now of what generative AI does. We can next proceed to consider the use case involving generating text-based narratives.
Generative AI That Generates Text-Based Narratives
Now that we’ve discussed the use of generative AI to produce art or images, we can readily look into the same general formulations to produce text-based narratives.
Let’s start with something that we all know about and tend to use each and every day. When you are entering text into a word processing package or your email app, the odds are that there is an auto-correct feature that tries to catch any of your misspellings.
Once that kind of automatic assist feature became common, the next more advanced facet consisted of an auto-complete capability. For auto-complete, the concept is that when you start to write a sentence, the word processing or email app attempts to predict what words you are likely to type next. It might predict just one or two words ahead. If the capability is especially beefed up, it might predict the remainder of your entire sentence.
We can kick this into high gear. Suppose you start to write a sentence and the auto-complete generates the rest of the entire paragraph. Voila, you didn’t have to write the paragraph directly. Instead, the app did so for you.
Okay, that seems nifty. Push this further along. You start a sentence and the auto-complete composes the rest of your entire message. This might consist of many paragraphs. All of it is generated via your entering just part of a sentence or maybe a full sentence or two.
How does the auto-complete figure out what you are likely to type next?
Turns out that humans tend to write the same things, over and over. Maybe you don’t, but the point is that whatever you are writing is probably something that someone else has written already. It might not be exactly what you are intending to write. Instead, it might be somewhat akin to what you were going to write.
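To make that intuition concrete, here is a minimal sketch in Python of the statistical idea behind a crude auto-complete: count which word tends to follow which in a pile of text, then suggest the most common follower. This is purely an illustration of the “people write the same things” insight; real auto-complete features and generative AI rely on vastly larger neural models, not a simple tally like this.

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for the vast amounts of text a real system scans.
corpus = (
    "the dog ran to the park . the dog sat by the door . "
    "the cat sat by the window . the dog ran to the door ."
)

# Tally how often each word follows each other word (a bigram count).
follow_counts = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def suggest_next(word: str) -> str:
    """Suggest the most frequently observed word that follows the given word."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

print(suggest_next("dog"))  # "ran" (seen twice, versus "sat" once)
print(suggest_next("the"))  # "dog" (the most common word after "the" in this corpus)
```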
Let’s use the same logic as was employed in generating art or images.
A generative AI app is prepared by going out to the Internet and examining all manner of text that exists in the online world. The algorithm tries to computationally identify how words are related to other words, how sentences are related to other sentences, and how paragraphs are related to other paragraphs. All of this is mathematically modeled, and a computational network is established.
Here’s then what happens next.
You decide to make use of a generative AI app that is focused on generating text-based narratives. Upon launching the app, you enter a sentence. The AI app computationally examines your sentence. The various mathematical relations between the words you’ve entered are used in the mathematical network to try and ascertain what text would come next. From a single line that you write, an entire story or narrative can be generated.
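Continuing the earlier toy illustration, the leap from suggesting one word to generating a whole narrative is essentially a loop: keep asking what plausibly comes next and append the answer. The sketch below, again assuming nothing more than word-to-word counts gathered from a tiny stand-in corpus, is a drastically simplified caricature of the mathematical network that real generative AI apps use.

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real system would model a huge swath of the Internet.
corpus = (
    "the dog ran to the park and the dog barked at the cat . "
    "the dog sat by the door and waited for the cat . the cat watched the dog ."
)

follow_counts = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follow_counts[current][nxt] += 1

def generate(prompt: str, max_words: int = 15) -> str:
    """Extend the prompt by repeatedly sampling a plausible next word."""
    out = prompt.split()
    for _ in range(max_words):
        candidates = follow_counts.get(out[-1])
        if not candidates:
            break  # nothing was ever observed after this word, so stop
        nxt = random.choices(
            list(candidates.keys()), weights=list(candidates.values())
        )[0]
        out.append(nxt)
    return " ".join(out)

print(generate("the dog"))  # a (repetitive) little narrative, different on each run
```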
Now, you might be thinking that this is mere monkey-see, monkey-do and that the resultant text produced by the generative AI is going to be nonsensical. Well, you would be surprised at how well-tuned this kind of AI is becoming. With a large enough dataset for training, and with enough computer processing to churn through it extensively, the output produced by a generative AI can be amazingly impressive.
You would look at the output and probably swear that for sure the generated narrative was written directly by a human. It is as though your sentence was handed to a human, hiding behind the scenes, and they quickly wrote you an entire narrative that nearly fully matched what you were going to otherwise say. That’s how good the mathematics and computational underpinnings have become.
Usually, when using a generative AI that produces text-based narratives, you tend to provide a starter question or an assertion of some kind. For example, you might type in “Tell me about birds in North America” and the generative AI will consider this to be an assertion or a question whereby the app will then seek to identify “birds” and “North America” within whatever trained dataset it has. I’m sure you can imagine that there is a vast array of text existing on the Internet that has described birds of North America, from which the AI during its pretraining has extracted and modeled those stores of text.
The output produced for you will not likely be the precise text of any particular online site. Recall that the same was mentioned earlier about generated artworks. The text will be a composite of sorts, bits and pieces that are tied together mathematically and computationally. A generated text-based narrative would to all appearances seem to be unique, as though this specific text had never before been composed by anyone.
Of course, there can be telltale clues. If you ask or get the generative AI to go into extraordinarily obscure topics, there is a higher chance that you might see a text output that resembles the sources being used. In the case of text, the chances though are usually lower than they would be for art. The text is going to be a combination of the specifics of the topic and yet also blurred and merged with the general kinds of text that are used in overall discourse.
The mathematical and computational techniques and technologies used for these generative AI capabilities are often referred to by AI insiders as Large Language Models (LLMs). Simply stated, this is a modeling of human language on a large-scale basis. Prior to the Internet, you would have had a difficult time finding an extremely large dataset of text that was readily and cheaply available. You would likely have had to buy access to text, and it wouldn’t necessarily have already been available in electronic or digital formats.
You see, the Internet is good for something, namely being a ready source for training generative AI.
Thinking Astutely About Generative AI That Produces Text
We ought to take a moment to think about the AI Ethics and AI Law ramifications of the generative AI that produces text-based narratives.
Remember that in the case of generated art, we were worried about the ethics of an AI algorithm that produces art based on other human-produced artworks. The same concern arises in the text-based instance. Even if the generated text doesn’t look exactly like the original sources, you can argue that the AI is nonetheless exploiting the text and that the original producer is being ripped off. The other side of that coin is that text freely available on the Internet can be used by any human to do likewise, so why not allow the AI to do the same?
The complications associated with the legal aspects of Intellectual Property rights also come to the fore in the instance of text-based generative AI. Assuming that the text being trained upon is copyrighted, would you say that the generated text is violating those legal rights? One answer is that it is, and another answer is that it is not. Realize that the generated text is likely to be quite far afield of the original text, so you might be hard-pressed to claim that the original text was being ripped off.
Another concern, mentioned earlier, is the ownership rights to the text-based narratives produced by the generative AI. Suppose you type into the AI “Write a funny story about people waiting in line to get coffee” and the generative AI produces pages upon pages of a hilarious story that is all about a bunch of people that happen to meet while waiting for a cup of java.
Who owns that story?
You might argue that since you typed in the prompt, you rightfully should “own” the generated story. Whoa, some would say, the AI was how the story was generated, ergo the AI “owns” the delightful tale. Yikes, others would exhort, if the AI took bits and pieces from all kinds of other akin stories on the Internet, all of those human writers should share in the ownership.
The matter is unresolved and we are just now getting into a legal morass that is going to play out over the next few years.
There are additional AI Ethics and AI Law worries that come into play.
Some people that have been using generative AI apps are starting to believe that the AI app is sentient. It must be, they exclaim. How else can you explain the astounding answers and stories that AI is able to produce? We have finally attained sentient AI.
They are absolutely wrong.
This is not sentient AI.
When I say this, some insiders of AI get upset and act as though anyone that denies that the AI is sentient is simultaneously saying that the AI is worthless. That’s a spurious and misstated argument. I openly agree that this generative AI is quite impressive. We can use it for all manner of purposes, as I will be mentioning later on herein. Nonetheless, it isn’t sentient. For my explanation of why these kinds of AI breakthroughs aren’t at sentience, see the link here.
Another outsized and plainly wrong claim by some is that generative AI has successfully won the Turing Test.
It has most certainly not done so.
The Turing Test is a kind of test to ascertain whether an AI app is able to be on par with humans. Originally devised as the imitation game by Alan Turing, the great mathematician and computer pioneer, the test per se is straightforward. If you were to put a human behind a curtain and put an AI app behind another curtain, and you asked them both questions, out of which you couldn’t determine which was the machine and which was the human, the AI would successfully pass the Turing Test. For my in-depth explanation and analysis of the Turing Test, see the link here.
Those people that keep clamoring that generative AI has passed the Turing Test do not know what they are talking about. They are either ignorant about what the Turing Test is, or they are sadly hyping AI in ways that are wrong and utterly misleading. Anyway, one of the vital considerations about the Turing Test consists of what questions are to be asked, along with who is doing the asking and also the assessing of whether the answers are of human quality.
My point is that people are typing in a dozen or so questions to generative AI, and when the answers seem plausible, these people are rashly proclaiming that the Turing Test has been passed. Again, this is false. Entering a flimsy set of questions and doing some poking here and there is neither the intention nor spirit of the Turing Test. Stop making these dishonorable claims.
Here’s a legitimate gripe that you don’t hear much about, though one that I believe is enormously worthy.
The AI developers have usually set up the generative AI so that it responds as though a human is responding, namely by using the phrasing of “I” or “me” when it composes the output. For example, when asked to tell a story about a dog lost in the woods, the generative AI might provide text that says “I will tell you all about a dog named Sam that got lost in the woods. This is one of my favorite stories.”
Notice that the wording says “I will tell you…” and that the story is “one of my favorite…” such that anybody reading this output will subtly fall into a mental trap of anthropomorphizing the AI. Anthropomorphizing consists of humans trying to assign human-like traits and human feelings toward non-humans. You are lulled into believing that this AI is human or human-like because the wording within the output is purposely devised that way.
This doesn’t have to be devised in that manner. The output could say “Here is a story about a dog named Sam that got lost in the woods. This is a favored story.” You would be somewhat less likely to immediately assume that the AI is human or human-like. I realize you might still fall into that trap, but at least the trappings, as it were, are not quite so pronounced.
In short, you’ve got generative AI that produces text-based narratives based on how humans write, and the resulting output seems like it is written as a human would write something. That makes abundant sense because the AI is mathematically and computationally patterning upon what humans have written. Now, add to this the use of anthropomorphizing wording, and you get a perfect storm that convinces people that the AI is sentient or has passed the Turing Test.
Lots of AI Ethics and AI Law issues arise.
I’ll hit you with the rather endangering ramifications of this generative AI.
Sit down for this.
The text-based narratives that are produced do not necessarily abide by truthfulness or accuracy. It is important to realize that the generative AI does not “understand” what is being generated (not in any human-related way, one would argue). If the text that was used in the training had embodied falsehoods, the chances are that those same falsehoods are going to be cooked into the generative AI mathematical and computational network.
Furthermore, generative AI is usually without any mathematical or computational means to discern that the text produced contains falsehoods. When you look at the output narrative generated, the narrative will usually look completely “truthful” on the face of things. You might have no viable means of detecting that falsehoods are embedded within the narrative.
Suppose you ask a medical question of a generative AI. The AI app produces a lengthy narrative. Imagine that most of the narrative makes sense and seems reasonable. But if you aren’t a medical specialist, you might not realize that within the narrative are some crucial falsehoods. Perhaps the text tells you to take fifty pills in two hours, whereas in reality, the true medical recommendation is to take two pills in two hours. You might believe the claimed fifty pills advice, simply because the rest of the narrative seemed to be reasonable and sensible.
Having the AI pattern on falsehoods in the original source data is only one means of having the AI go askew in these narratives. Depending upon the mathematical and computational network being used, the AI will attempt to “make up” stuff. In AI parlance, this is referred to as the AI hallucinating, which is terrible terminology that I earnestly disagree with and argue should not be continued as a catchphrase, see my analysis at the link here.
Suppose you’ve asked the generative AI to tell a story about a dog. The AI might end up having the dog be able to fly. If the story that you wanted was supposed to be based on reality, a flying dog seems unlikely. You and I know that dogs cannot natively fly. No big deal, you say, since everyone knows this.
Imagine a child in school that is trying to learn about dogs. They use generative AI. It produces output that says dogs can fly. The child doesn’t know whether this is true or not and assumes that it must be true. In a sense, it is as though the child went to an online encyclopedia and it said that dogs can fly. The child will perhaps henceforth insist that dogs can indeed fly.
Returning to the AI Ethics and AI Laws conundrum, we are now on the verge of being able to produce a nearly infinite amount of text-based content, done via the use of generative AI, and we will flood ourselves with zillions of narratives that are undoubtedly replete with falsehoods and other related torrents of disinformation and misinformation.
Yes, with a push of a button and a few words entered into a generative AI, you can generate reams of textual narratives that seem entirely plausible and truthful. You can then post this online. Other people will read the material and assume it to be true. On top of this, other generative AI apps that come along later to train on online text will potentially encounter this material and fold it into the models that they are devising.
It is as though we are now adding steroids to generating disinformation and misinformation. We are heading toward disinformation and misinformation on a massive galactic global scale.
Hardly any human labor is required to produce it all.
Generative AI And ChatGPT
Let’s get to the headliner of this discussion about generative AI. We have now covered the nature of generative AI that overall produces text-based narratives. There are many such generative AI apps available.
One of the AI apps that have especially gained notoriety is known as ChatGPT.
A public relations coup has splashed across social media and the news — ChatGPT is getting all the glory right now. The light is brightly shining on ChatGPT. It is getting its proverbial fifteen minutes of fame.
ChatGPT is the name of a generative AI app that was developed by an entity known as OpenAI. OpenAI is quite well-known in the AI field and can be considered an AI research lab. They have a reputation for pushing the envelope when it comes to AI for Natural Language Processing (NLP), along with other AI advances. They have been embarking on a series of AI apps that they have coined GPT (Generative Pre-Trained Transformer). Each version gets a number. I’ve written previously about their GPT-3 (version 3 of their GPT series), see the link here.
GPT-3 got quite a bit of attention when it was first released (it went into widespread beta testing about two years ago, and was more widely made available in 2022). It is a generative AI app that upon the entry of a prompt will produce or generate text-based narratives. Everything I mentioned earlier about the general case of generative AI apps is fundamentally applicable to GPT-3.
There has long been scuttlebutt that GPT-4 is underway and those in the AI field have been waiting with bated breath to see what improvements or enhancements are in GPT-4 in contrast to GPT-3. Into this series comes the latest in-betweener, known as GPT-3.5. Yes, you got that right, it is in between the released GPT-3 and the not-yet-released GPT-4.
OpenAI has used their GPT-3.5 to create an offshoot that they named ChatGPT. It is said that they did some special refinements to craft ChatGPT. For example, the notion floated is that ChatGPT was tailored to being able to work in a chatbot manner. This includes having the “conversation” that you have with the AI app tracked by the AI and used to produce subsequently requested narratives.
Many of the generative AI apps have tended to be a one-and-done design. You entered a prompt, the AI generated a narrative, and that’s it. Your next prompt has no bearing on what happens next. It is as though you are starting fresh each time that you enter a prompt.
Not so in the case of ChatGPT. In an as-yet unrevealed way, the AI app tries to detect patterns in your prompts and therefore can seem more responsive to your requests (this AI app is considered openly accessible due to allowing anyone to sign up to use it, but it is still proprietary and decidedly not an open source AI app that discloses its inner workings). For example, recall my earlier indication about you wanting to see a frog with a hat on a chimney. One method is that each time you make such a request, everything starts anew. Another method would be that you could carry on with what you previously said. Thus, you could perhaps tell the AI that you want the frog to be seated, which by itself makes no sense, while in the context of your prior prompt requesting a frog with a hat on a chimney, the request seemingly can make sense.
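OpenAI has not revealed exactly how ChatGPT tracks a conversation, so treat the following Python sketch as just one plausible pattern: a chat-style wrapper that carries the prior prompts and replies forward by folding them into each new request. The generate_text function here is a hypothetical stand-in for the underlying generative model, not a real API.

```python
def generate_text(prompt: str) -> str:
    """Hypothetical stand-in for the underlying text-generating model."""
    return f"[narrative generated for: {prompt!r}]"

class ChatSession:
    """Carry earlier turns forward so a follow-up like 'make the frog seated' has context."""

    def __init__(self) -> None:
        self.turns: list[str] = []

    def ask(self, user_prompt: str) -> str:
        # Fold the whole conversation so far into one combined prompt.
        combined = "\n".join(self.turns + [f"User: {user_prompt}"])
        reply = generate_text(combined)
        self.turns.append(f"User: {user_prompt}")
        self.turns.append(f"AI: {reply}")
        return reply

session = ChatSession()
session.ask("Show me a frog with a hat on top of a chimney.")
print(session.ask("Now make the frog seated."))  # the frog, hat, and chimney carry over
```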
You might be wondering why it is that all of a sudden there seems to be a heyday and flourish about ChatGPT.
Partially it is because ChatGPT was made available to anyone that wanted to sign up to use it. In the past, there have often been selective criteria about who could use a newly available generative AI app. The provider would require that you be an AI insider or maybe have other stipulations. Not so with ChatGPT.
Word spread quickly that ChatGPT was extremely easy to use, free to use, and could be used via a simple sign-up that merely required you to provide an email address. Like rapid fire, all of a sudden and as stoked or spurred via viral posts on social media, the ChatGPT app was said to have exceeded one million users. The news media has emphasized the aspect that a million people signed up for ChatGPT.
Though this is certainly remarkable and noteworthy, keep in mind the context of these sign-ups. It is free and easy to sign up. The chatbot is super easy to use and requires no prior training or experience. You merely enter prompts of your own choosing and wording, and shazam, the AI app provides a generated narrative. A child could do this, which actually is a worrisome concern for some, namely that if children are using ChatGPT, are they going to be learning questionable material (as per my earlier point herein on such matters)?
Also, it is perhaps noteworthy to indicate that some (many?) of those million sign-ups are people that probably wanted to kick the tires and do nothing more. They quickly created an account, played with the AI app for a little while, thought it was fun and surprising, and then maybe did some social media postings to showcase what they found. After that, they might not ever log in again, or at least only use the AI app if a particular need seems to arise.
Others have also pointed out that the timing of ChatGPT becoming available coincided with a time of the year that made for great interest in the AI app. Perhaps during the holidays, we have more time to play around with fun items. Social media also propelled this into a kind of phenomenon. The classic FOMO (fear of missing out) probably added to the pell-mell rush. Of course, if you compare one million to some popular YouTube influencers, you might suggest that a million is a paltry number in comparison to those vlogs that get hundreds of millions of sign-ups or views when first dropped or posted.
Well, let’s not digress and just note that still, for an AI app of an experimental nature, the million sign-ups are certainly brag-worthy.
Right away, people used ChatGPT to create stories. They then posted the stories and gushed about the miracle thereof. Reporters and journalists have even been doing “interviews” with ChatGPT, which is a bit disconcerting because they are falling into the same anthropomorphizing trap (either by actual unawareness or via hoping to garner outsized views for their articles). The immediate tendency too was to declare that AI has now reached sentience or passed the Turing Test, which I’ve manifestly commented on earlier herein.
The societal concerns raised by ChatGPT are really ones that already were percolating as a result of earlier versions of GPT and also the slew of LLMs and generative AI already available. The difference is that now the whole world has opted to chime in. That’s handy. We need to make sure that AI Ethics and AI Law get due exposure and attention. If it takes a ChatGPT to get us there, so be it.
What kinds of concerns are being expressed?
Take the use case of students being asked to write essays for their classes. A student is usually supposed to write an essay entirely based on their own writing and composition capacities. Sure, they might look at other written materials to get ideas and quotes from, but the student is otherwise assumed to concoct their essay out of their own noggin. Copying prose from other sources is frowned upon, typically leading to an F grade or possibly expulsion for plagiarizing other material.
Nowadays, here’s what can take place. A student signs up for ChatGPT (or any other of the akin generative AI apps). They enter whatever prompt the teacher gave them for the purpose of deriving an essay. ChatGPT produces a full-on essay based on the prompt. It is an “original” composition in that you cannot necessarily find it anywhere else. You are unable to prove that the composition was plagiarized, since, in a manner of consideration, it wasn’t plagiarized.
The student turns in the essay. They are asserting that it is their own written work. The teacher has no ready means to think otherwise. That being said, you can conjure up the notion that if the written work is seemingly beyond the existent capacity of the student, you might get suspicious. But that isn’t much to go on if you are going to accuse a student of cheating.
How are teachers going to cope with this?
Some are putting a rule into their teaching materials that any use of ChatGPT or an equivalent will be considered a form of cheating. In addition, not fessing up to using ChatGPT or an equivalent is a form of cheating. Will that curtail this new opportunity? It is said to be doubtful since the odds of getting caught are low, while the chances of getting a good grade on a well-written paper are high. You can likely envision students facing a deadline who, on the night before an essay is due, will be tempted to use a generative AI to seemingly get them out of a jam.
Shifting gears, any type of writing is potentially going to be disrupted by generative AI.
Are you being asked to write a memo at work about this thing or another? Don’t waste your time by doing so from scratch. Use a generative AI. You can then cut and paste the generated text into your composition, refine the text as needed, and be done with the arduous writing chore with ease.
Does this seem proper to do?
I would bet that most people would say heck yes. This is even better than copying something from the Internet, which could get you into hot water for plagiarism. It makes enormous sense to use a generative AI to get your writing efforts partially done, or maybe even completely done for you. That’s what tools are made for.
As an aside, in one of my next columns, I will closely examine the use case of utilizing generative AI for legal purposes, in the sense of doing lawyering-type work and producing legal documents. Anyone that is an attorney or a legal professional will want to consider how generative AI is going to potentially uproot or upset legal practices. Consider for example a lawyer composing a legal brief for a court case. They could potentially use a generative AI to get the composition written. Sure, it might have some flaws, thus the lawyer has to tweak it here or there. The lessened amount of labor and time to produce the brief might make the tweaking well worthwhile.
Some though are worried that the legal document might contain falsehoods or AI hallucinations that the lawyer didn’t catch. The viewpoint in that twist is that this is on the shoulders of the attorney. They presumably were representing that the brief was written by them, thus, whether a junior associate wrote it or an AI app did, they still have the final responsibility for the final contents.
Where this gets more challenging is if non-lawyers start using generative AI to do legal legwork for them. They might believe that generative AI can produce all manner of legal documents. The trouble of course is that the documents might not be legally valid. I’ll say more about this in my upcoming column.
A crucial question is arising about society and the act of human writing.
It is kind of momentous:
- Whenever you are tasked with writing something, should you write the item from scratch, or should you use a generative AI tool to get you on your way?
The output might be half-baked and you’ll need to do a lot of rewriting. Or the output might be right on and you’ll only need to make minor touchups. All in all, if the usage is free and easy, the temptation to use a generative AI is going to be immense.
A bonus is that you can potentially use generative AI to do some of your rewriting. Akin to the prompts about the frog with the hat and the chimney when producing art, you can do the same when generating text-based narratives. The AI might produce your story about a dog, and you decide instead that you want the main character to be a cat. After getting the dog story, you enter another prompt and instruct the AI app to switch over to using a cat in the story. This is likely to do more than simply end up with the word “cat” replacing the word “dog” in the narrative. The AI app could readily change the story to make references to what cats do versus what dogs do. The whole story might be revised as though you had asked a human to make such revisions.
Powerful, impressive, handy-dandy.
A few caveats to mull over:
- Will we collectively lose our ability to write, becoming totally dependent upon generative AI to do our writing for us?
- Will people that do writing for a living be put out of work (the same is asked about artists)?
- Will the Internet grow in huge leaps and bounds as generated narratives are flooded online and we can no longer separate the truth from the falsehoods?
- Will people firmly believe these generated narratives and act as though an authoritative figure has given them truthful material that they can rely upon, including possibly life-or-death related content?
- Other
Think that over.
Note that one of those bulleted points deals with relying upon material generated by a generative AI on a life-or-death basis.
Here is a heartbreaker for you (trigger warning, you might want to skip this paragraph). Imagine that a teenager asks a generative AI whether or not they should do away with themselves. What will a generative AI app generate? You would naturally hope that the AI app would produce a narrative saying not to do so and vociferously urge the inquirer to seek mental health specialists.
The possibility exists that the AI won’t mention those facets. Worse still, the AI app might have earlier captured text on the Internet that maybe encourages taking such actions, and the AI app (since it has no human understanding capacity), spits out a narrative that basically insinuates or outright states that the teen should proceed undeterred. The teen believes this to be truthful guidance from an online authoritative “Artificial Intelligent” system.
Bad stuff.
Really, really bad stuff.
Some of the developers of generative AI are trying to put checks and balances in the AI to try and prevent those kinds of situations from occurring. The thing is, the manner in which the prompt is worded can potentially slip through the programmed guardrails. Likewise, the same can be said for the output produced. There is not any kind of guaranteed ironclad filtering that can as yet assure this will never occur.
There is another angle to this text-based production that you might not have anticipated.
Here it is.
When programmers or software developers create the code for their software, they are essentially writing in text. The text is somewhat arcane in that it follows the syntax defined for a particular programming language, such as Python, C++, Java, and so on. In the end, it is text.
The source code is then compiled or run on a computer. The developer examines their code to see that it is doing whatever it was supposed to do. They might make corrections or debug the code. As you know, programmers or software engineers are in high demand and often command lofty prices for their work efforts.
For generative AI, source code is simply more text. The capacity to find patterns in the zillions of lines of code that are on the Internet and available in various repositories makes for a juicy way to mathematically and computationally figure out what code seems to do what.
The rub is this.
With a prompt, you can potentially have generative AI produce an entire computer program for you. No need to slave away at slinging out code. You might have heard that there are so-called low-code tools available these days to reduce the effort of programmers when writing code. Generative AI can possibly be construed as a low-code or even no-code option since it writes the code for you.
Before those of you that write code for a living fall to the floor and faint, keep in mind that the code is not “understood” in the manner that you as a human presumably understand it. In addition, the code can contain falsehoods and AI hallucinations. Relying upon such code without doing extensive code reviews would seem risky and questionable.
We are back to the same considerations somewhat about the writing of stories and memos. Maybe the approach is to use generative AI to get you part of the way there on a coding effort. There is though a considerable tradeoff. Are you safer to write the code directly, or deal with code generated by AI that might have insidious and hard-to-detect embedded issues?
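One hedged way to handle that tradeoff is to treat AI-generated code as untrusted until it passes your own review and tests. The sketch below assumes a hypothetical function, parse_price, of the sort a generative AI might hand back, and shows the kind of small test harness a developer could write before relying on it; none of this is output from any particular AI app.

```python
# Imagine this came back from a prompt such as:
# "Write a Python function that parses a price string like '$1,234.56' into a float."
def parse_price(text: str) -> float:
    return float(text.replace("$", "").replace(",", ""))

# Before trusting it, exercise it yourself, including edge cases the AI may have missed.
def review_parse_price() -> None:
    assert parse_price("$1,234.56") == 1234.56
    assert parse_price("99") == 99.0
    try:
        parse_price("")  # what *should* happen on empty input? decide, then test it
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError for empty input")

review_parse_price()
print("parse_price passed the checks that were actually written")
```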
Time will tell.
A Brief Dive Into ChatGPT
When you start to use ChatGPT, there are a series of cautions and informational comments displayed.
Let’s take a quick look at them:
- “May occasionally generate incorrect information.”
- “May occasionally produce harmful instructions or biased content.”
- “Trained to decline inappropriate requests.”
- “Our goal is to get external feedback in order to improve our systems and make them safer.”
- “While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”
- “Conversations may be reviewed by our AI trainers to improve our systems.”
- “Please don’t share any sensitive information in your conversations.”
- “This system is optimized for dialogue. Let us know if a particular response was good or unhelpful.”
- “Limited knowledge of world and events after 2021.”
Due to space limitations, I can’t cover those in detail herein, but let’s at least do a fast analysis.
I’ve already mentioned that the generated text narratives might contain falsehoods and disinformation.
There’s something else you need to be on the watch for. Be wary of narratives that might contain various inflammatory remarks that exhibit untoward biases.
To try and curtail this from happening, it has been reported that OpenAI used human double-checkers during the training of ChatGPT. The double-checkers would enter prompts that would likely spur the AI to produce inflammatory content. When such content was seen by the double-checkers, they indicated to the AI that this was inappropriate and in a sense assigned a numeric penalty to the output that was produced. Mathematically, the AI algorithm would seek to keep penalty scores to a minimum and ergo computationally aim toward not using those phrases or wordings henceforth.
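As a heavily simplified, hypothetical sketch of that penalty idea: imagine human reviewers assigning penalty scores to candidate outputs, with the system then preferring whichever candidate accumulated the least penalty. Actual training of this sort (reinforcement learning from human feedback) adjusts the model itself rather than picking from a fixed list, so this only conveys the direction of the incentive.

```python
# Hypothetical candidate outputs for one prompt, with penalties assigned by human reviewers
# (a higher penalty means the output was judged more inappropriate or inflammatory).
reviewed_candidates = {
    "a measured, factual reply": 0.0,
    "a mildly snarky reply": 1.5,
    "an inflammatory reply": 9.0,
}

def least_penalized(scored: dict[str, float]) -> str:
    """Prefer whichever candidate accumulated the smallest penalty during review."""
    return min(scored, key=scored.get)

print(least_penalized(reviewed_candidates))  # "a measured, factual reply"
```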
Likewise, when you enter a prompt, the AI attempts to determine whether your prompt is inflammatory or might lead to inflammatory output, for which the prompt can be refused by the AI. Politely, the idea is to decline inappropriate prompts or requests. For example, asking to get a joke that entails racial slurs will likely get refused by the AI.
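A crude way to picture the declining behavior is a screening step ahead of generation, as in the hypothetical sketch below. The keyword list is purely illustrative; real systems reportedly rely on trained classifiers and on the model’s own tuning rather than a simple lookup like this.

```python
BLOCKED_PHRASES = {"racial slur", "build a weapon"}  # purely illustrative screening list

def respond(prompt: str) -> str:
    """Decline prompts flagged as inappropriate; otherwise hand off to generation."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "I am not able to help with that request."
    return f"[generated narrative for: {prompt!r}]"

print(respond("Tell me a joke that uses a racial slur."))  # declined
print(respond("Tell me a joke about frogs."))              # handed off to generation
```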
I am sure that you won’t be surprised to know that people using ChatGPT have tried to outwit the precautions. These “enterprising” users have either tricked the AI or found crafty ways to go around the mathematical formulations. Some of these efforts are done for the apparent joy of beating or overstepping the system, while others claim that they are trying to showcase that ChatGPT is still going to produce untoward results.
They are right about one thing; the precautions are not foolproof. We are back to another AI Ethics and potential AI Law consideration. Should the generative AI be allowed to proceed even if it might produce untoward outputs?
The warnings when you use ChatGPT would seemingly forewarn anyone about what the AI app might do or say. The chances are that inevitably some kind of lawsuits might be filed when someone, perhaps underage, gets untoward output of an offensive nature (or, when they get authoritative-looking text narratives that they regrettably believe to be true and act upon the outputs to their own endangerment).
A few other quick nuances about the prompts are worthy of knowing about.
Each time that you enter a prompt, the output could dramatically differ, even if you enter the exact same prompt. For example, entering “Tell me a story about a dog” will get you a text-based narrative, perhaps indicating a tale about a sheepdog, while the next time you enter “Tell me a story about a dog” it might be an entirely different story and involve a poodle. This is how most generative AI is mathematically and computationally arranged. It is said to be non-deterministic. Some people find this unnerving since they are used to the concept that your input to a computer will always produce the same precise output.
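For the curious, here is a small Python sketch of why repeated runs differ, assuming (as is common in generative AI) that the next word is sampled from a probability distribution rather than chosen deterministically. The word scores and the temperature knob are made-up illustrations, not ChatGPT’s actual internals; a higher temperature flattens the distribution and makes runs diverge even more.

```python
import math
import random

# Made-up scores for the word following "Tell me a story about a dog. The dog was a ..."
next_word_scores = {"sheepdog": 2.1, "poodle": 1.9, "beagle": 1.5, "dragon": 0.2}

def sample_next(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one word; a higher temperature flattens the distribution."""
    scaled = {word: math.exp(value / temperature) for word, value in scores.items()}
    total = sum(scaled.values())
    return random.choices(list(scaled), weights=[v / total for v in scaled.values()])[0]

# The exact same prompt can yield a different continuation on every run.
for _ in range(3):
    print(sample_next(next_word_scores))
```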
Rearranging words will also notably impact the generated output. If you enter “Tell me a story about a dog” and later on enter “Tell me a dog story” the likelihood is the narratives produced will be substantively different. The sensitivity can be sharp. Asking for a story about a dog versus asking for a story about a big dog would undoubtedly produce radically different narratives.
Finally, note that the bulleted items above contain an indication that the ChatGPT has “limited knowledge of the world and events after the year 2021.” This is because the AI developers decided to do a cutoff of when they would have the AI app collect and train on Internet data. I’ve noticed that users oftentimes do not seem to realize that ChatGPT is not directly connected to today’s Internet for purposes of retrieving data and producing generated outputs. We are so accustomed to everything working in real-time and being Internet-connected that we expect this of AI apps too. Not in this particular case (and, to clarify, ChatGPT is indeed available on the Internet, but when it is composing the text-based output it is not culling the Internet per se to do so, instead it is generally frozen in time as to around the cutoff date).
You might be puzzled why ChatGPT is not in real-time feeding data from the Internet. A couple of sensible reasons. First, it would be computationally expensive to try and do the training in real time, plus the AI app would be delayed or less responsive to prompts (currently, it is very fast, typically responding with an output text-based narrative in a few seconds). Second, the yucky stuff on the Internet that they have tried to train the AI app to avoid would likely creep into the mathematical and computational formulations (and, as noted, it is already somewhat in there from before, though they tried to detect it by using those human double-checkers).
You are bound to hear some people brazenly announcing that ChatGPT and similar generative AI is the death knell for Google search and other search engines. Why do a Google search that brings back a lot of reference items when you can get the AI to write something for you? Aha, these people declare, Google ought to close its doors and go home.
Of course, this is pure nonsense.
People still want to do searches. They want to be able to look at reference materials and figure out things on their own. It is not a mutually exclusive this-way or that-way binary choice (this is a false dichotomy).
Generative AI is a different kind of tool. You don’t go around tossing out hammers simply because you invented a screwdriver.
A more sensible way to think of this is that the two types of tools can be complementary for people that want to do things related to the Internet. Some have already toyed with hooking together generative AI with conventional Internet search engines.
One concern for anyone already providing a search engine is that the “complementary” generative AI tool can potentially undercut the reputation of the search engine. If you do an Internet search and get inflammatory material, you know that this is just the way of the Internet. If you use generative AI and it produces a text-based narrative that is repulsive and vile, you are likely disturbed by this. It could be that if a generative AI is closely linked with a particular search engine, your displeasure and disgust about the generative AI spills over onto whatever you feel about the search engine.
Anyway, we will almost surely see alliances between various generative AI tools and Internet search engines, stepping cautiously and mindfully into these murky waters.
Conclusion
Here’s a question for you.
How can someone make money by providing generative AI that produces text-based narratives?
OpenAI has already stated that the internal per-transaction costs of ChatGPT are apparently somewhat high. They are not monetizing ChatGPT as yet.
Would people be willing to pay a transaction fee or maybe pay a subscription fee to access generative AI tools?
Could ads be a means of trying to make money via generative AI tools?
No one is yet fully sure of how this is going to be money-making. We are still in the grand experimental stage of this kind of AI. Put the AI app out there and see what reaction you get. Adjust the AI. Use insights from the usage to guide where the AI should be aimed next.
Lather, rinse, repeat.
As a closing comment, for now, some believe this is a type of AI that we shouldn’t have at all. Turn back the clock. Put this genie back into the bottle. We got a taste of it and realized that it has notable downsides, and collectively as a society might agree that we should walk that horse all the way back into the barn.
Do you believe that the promise of generative AI is better or worse than the downsides?
From a real-world viewpoint, it doesn’t especially matter because the reality is that expunging generative AI is generally impractical. Generative AI is being further developed (it is), and you aren’t going to stop it cold, either here or in any or all other countries. How would you do so? Pass laws to fully ban generative AI? Not particularly viable (you presumably have a better chance of establishing laws that shape generative AI and seek to lawfully govern those that devise it). Maybe instead get the culture to shun generative AI? You might get some people to agree with the shaming, but others would disagree and proceed with generative AI anyway.
It is an AI Ethics and AI Law conundrum, as I noted earlier.
Your final big question is whether generative AI is taking us on the path toward sentient AI. Some insist that it is. The argument is that if we just keep sizing up the mathematical models and juicing up the computer servers and feeding every morsel of the Internet and more into this beast, the algorithmic AI will turn the corner into sentience.
And, if that’s the case, we are facing concerns about AI being an existential risk. You’ve heard over and again that once we have sentient AI, it could be that the AI will decide humans aren’t very useful. The next thing you know, AI has either enslaved us or wiped us out, see my exploration of these existential risks at the link here.
A contrary view is that we aren’t going to get sentience out of what some have characterized smarmily as a stochastic parrot (that’s the catchphrase that has gained traction in the AI realm), here’s a quote using the phrase:
- “Contrary to how it may seem when we observe its output, an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot” (in a research paper by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell, ACM FAccT ’21, March 3–10, 2021, Virtual Event, Canada, entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”).
Is generative AI a kind of dead-end that will provide useful AI capabilities but not get us to sentient AI, or might somehow the scaling factor enable the emergence of a singularity leading to sentient AI?
A heated debate ensues.
Say, do you want to try generative AI?
If so, here’s a link to ChatGPT where you can create an account and try using it, see the link here.
Be aware that due to the high demand for using the experimental AI app, apparently getting signed up for access might be stopped at any time, either for a short while or maybe capped (when I last checked, signing up was still enabled). Just giving you a heads-up.
Please take into account everything I have said herein about generative AI so that you are cognizant of what is happening when you use an AI app such as ChatGPT.
Be contemplative of your actions.
Are you going to have inadvertently led us toward sentient AI that ultimately crushes us out of existence, simply by your having opted to play around with generative AI? Will you be culpable? Ought you to have stopped yourself from contributing to the abject destruction of humankind?
I don’t think so. But it could be that the AI overlords are (already) forcing me to say that, or maybe this entire column was written this time by ChatGPT or an equivalent generative AI app.
Don’t worry, I assure you it was me, human intelligence, and not artificial intelligence.
Source: https://www.forbes.com/sites/lanceeliot/2022/12/13/digging-into-the-buzz-and-fanfare-over-generative-ai-chatgpt-including-looming-ai-ethics-and-ai-law-considerations/