Enraged Worries That Generative AI ChatGPT Spurs Students To Vastly Cheat When Writing Essays, Spawns Spellbound Attention For AI Ethics And AI Law

Is the written essay by modern-day students gone, nevermore to return?

Is the angst-filled student term paper going feverishly out the window?

That’s the brouhaha that has erupted into an all-out uproar recently. You see, the appearance of an AI app known as ChatGPT has gotten a lot of attention and equally garnered a great deal of anger. For my comprehensive coverage of ChatGPT, see the link here. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The gist of the hollering and bellowing is that this kind of AI, typically referred to as generative AI, will be the death knell for asking students to do essay-style assignments.

Why so?

Because the latest in generative AI is able to produce seemingly fluent essays by the mere entry of a simple prompt. If you enter a line such as “Tell me about Abraham Lincoln,” the AI will generate an essay about the life and times of Lincoln that is often good enough to be mistaken for having been written entirely and exclusively by human hands. Furthermore, and here’s the real kicker, the essay will not be a duplicate or noticeable copy of something else already written on the same topic. The produced essay will be essentially an “original” as far as any casual inspection would ascertain.

A student faced with a writing assignment can merely invoke one of these generative AI apps, enter a prompt, and voila, their entire essay has been written for them. They only have to cut and paste the automatically generated text into an empty document, surreptitiously slap their name and class info onto it, and with a bit of a rather gutsy bravado go ahead and turn it in as their own work.

The chances of a teacher being able to ferret out that the essay was written by AI and not by the student are next to zero.

Scandalous!

Headlines have been hastily proclaiming that we have reached the bitter end of having students write essays or do essentially any kind of outside-of-class writing assignments. The only means to cope with the situation would seem to be making use of in-class essay writing. When students are in a controlled environment such as a classroom, and assuming that they don’t have access to laptops or their smartphones, they will find themselves confined to writing essays the old-fashioned way.

To clarify, the old-fashioned way means they will have to write solely via the use of their own noggins.

Any kind of essay done outside the classroom will be immediately suspect. Did the student write the essay or did an AI app do so? As mentioned, the essay will be so well written that you cannot readily detect that it was written by a machine. The spelling will be impeccable. The syntax will be tremendous. The line of discourse and the couched arguments made will be compelling.

Heck, in a manner of speaking, you could suggest that the generative AI will tip its proverbial hand by making an essay that is beyond the capabilities of the student that opts to take this nefarious path. A teacher might get suspicious simply due to the essay being a bit too good. A savvy teacher would be tempted to guess that the student could not have written such elegant and airtight prose. Internal alarm bells start ringing.

Of course, challenging a student about their essay will be ugly and can have adverse consequences.

Suppose the student carefully wrote the essay, all by themselves. They might have double- and triple-checked it. There is a chance too that maybe they had a friend or acquaintance take a look to spot anything needing extra polishing. All in all, it is still their essay as written by them. Imagine a teacher asking this serious and studious student pointed questions about the essay. The embarrassment and chagrin at essentially being accused of cheating are palpable, even if the teacher doesn’t make such a claim aloud. The mere confrontation itself is enough to undercut the esteem of the student and make them feel unjustly slandered.

Some are insisting that any teacher with suspicions about the authorship of an essay ought to ask the student to explain what they wrote. Presumably, if the essay was written by the student, the particular student can adequately explain it. Teachers have done this kind of inquiry for eons. A student might have corralled another student into writing their essay for them. The student might have gotten a parent to write their essay. In today’s world, the student might pay someone across the Internet to secretly write their essay on their behalf.

Thus, asking a student to verify the authorship via an in-classroom inquiry is customary and not a big deal.

I’m glad you brought that up.

Attempting to grill a student, whether mildly or demonstrably, is not quite as straightforward a litmus test as you might think. The student could have closely studied the AI-produced essay and gotten themselves prepared for a potential interrogation.

Think of it this way. The student first generates the essay with merely a push of a button. The student then spends gobs of time that they would have devoted to writing the essay instead meticulously examining and studying the essay. After a while, the words are almost totally committed to memory. The student nearly deludes themselves into believing they did indeed write the essay. This semblance of confidence and awareness could readily get them through teacher-led scrutiny.

Aha, some say with a bit of a counterpoint to the fears of generative AI apps, note that the student did in fact “learn” something by having generated the essay. Sure, the student didn’t do the legwork to research the topic, nor did they compose the essay, but nonetheless, if they carefully studied the essay, it seems to show that they have learned about the assigned topic. The student that commits the essay about Lincoln to memory has presumably learned something of substance about Lincoln.

Learning has happened.

Whoa, the retort goes, the assignment was likely a twofold process. Learning about Lincoln might have been relatively secondary. The real purpose was to have the student learn to write. This essential part of the assignment has been completely undercut. Teachers often assign open-ended topics and are really just aiming to have the student get to experience writing. You have to lay out what you want to write, you have to figure out the words you’ll use, you have to put the words into a sensible set of sentences and paragraphs, and so on. Merely reading an AI-produced essay does not at all comport with that foundational aspect of an essay assignment.

The counterpunch to this is the claim that the student is potentially learning about writing by closely examining the writing produced by the AI. Don’t we all study the grandmasters of writing to see how they write? Our writing is an attempt to reach the likes of Shakespeare and other great writers. Studying the written word is a valid means of garnering how to write.

Like a fierce tennis match, the ball moves to the other side of the net. Though studying good writing is valuable, you ultimately have to write if you want to be able to write. You cannot just endlessly read and then blankly assume that the student now knows how to write. They have to write, and write, and keep writing until they are able to tangibly showcase and improve their writing capabilities.

Do you see how this is all quite a conundrum?

Be aware there are about a zillion or more twists to all of this.

I’ll cover some of the more ingenious and interesting twists and turns.

Tuning The Essay Via AI Prompting

Having just mentioned Shakespeare, here’s an aspect of generative AI that might be surprising to you. In many of the generative AI apps, you can say something like this: “Write an essay about Lincoln as though Shakespeare wrote the essay.” The AI will attempt to generate an essay that seems to be written in the language customarily used by Shakespeare in his writings. It is a fun and engaging feat to see, and many get quite a kick out of it.

How does this relate to the student that is “cheating” by using generative AI to write their essays?

In many generative AI apps, you can tell the AI to write in a less-than-stellar fashion. The AI will seek to produce an essay that is somewhat rough around the edges. There are syntax issues here or there. The logic of the essay might be jumpy or slightly disjointed.

This would be a clever ruse. The student takes the resultant essay and turns it in. The essay is good enough to get a top grade, but meanwhile not so perfect that it raises the ire of the teacher. Once again, the AI has done all the legwork for the student, including making the essay somewhat imperfect.

On top of this, most of the generative AI apps allow you to use the app as much as you wish. Here’s how that comes into play. A student types in that the AI app is to make a somewhat imperfect essay about Lincoln. The essay is produced. The student looks at the essay and realizes it is still overly perfect. The student enters another prompt that instructs the AI to make the imperfections more pronounced.

Lather, rinse, repeat.

The student keeps entering prompts and inspecting the essays produced. Over and over this occurs. Eventually, the student gets the AI to produce just the right level of imperfection in the essay. The Goldilocks version has been attained. It is just perfect enough to get a high grade, and just imperfect enough to keep from arousing suspicions.
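To see why this iteration is so cheap for the student, picture it as a loop. Here is a minimal sketch in Python; the generate and looks_too_polished functions are hypothetical placeholders standing in for whichever generative AI app is being used and for the student’s own eyeball test, respectively.

```python
def generate(prompt: str) -> str:
    # Hypothetical stand-in for a call to whatever generative AI app
    # the student is using; it would return the produced essay text.
    return f"[essay generated for prompt: {prompt!r}]"

def looks_too_polished(essay: str, rounds_so_far: int) -> bool:
    # Placeholder for the student's own judgment call; for
    # illustration we simply stop after three refinement rounds.
    return rounds_so_far < 3

prompt = "Write a somewhat imperfect essay about Lincoln."
essay = generate(prompt)
rounds = 0
while looks_too_polished(essay, rounds):
    prompt += " Make the imperfections more pronounced."
    essay = generate(prompt)
    rounds += 1
print(essay)
```

Each pass through the loop costs the student a prompt and a glance, nothing more.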

I’m sure that some of you are smarmily saying that if the student had just opted to write the darned essay in the first place, they might have spent less time, or at most the same amount of time, as they did on all that prompting. All this energy-sapping use of the AI app could have been directed at simply proceeding to write the essay.

Well, remember, the student doesn’t have that in mind. Entering prompts and iteratively reviewing and selecting the desired essay is bound to be far less work for the student. An hour of doing this is a lot less arduous than writing the essay directly. Smarminess in this case has to be weighed against reality.

What Happens If Other Students Do The Same

I’d bet that you had this clever thought in mind as you were reading the preceding analysis about essays and generative AI apps, namely that the student will undoubtedly get caught if lots of other students are doing the same.

Allow me to explain.

A teacher assigns their entire class to write an essay about Lincoln. Suppose that 90% of the students decide to use a generative AI app for this assignment. If 90% seems overly depressing, go ahead and use 10% instead. Just keep in mind that as students get wind of the utility of generative AI apps, the temptation to use them is going to mushroom.

Okay, so a notable percentage of the class uses a generative AI app. You would assume that, ergo, the students are all going to be turning in roughly the same Lincoln essay. The teacher will notice by the time they grade the third or fourth essay that the essays are all pretty much the same. This will be a huge clue that something is amiss.

Sorry, but you are unlikely to be that lucky.

Most generative AI apps are highly sensitive to how a prompt is composed. If I write “Tell me about Lincoln” versus “Tell me about the life of Lincoln,” the odds are that the essays are going to be substantively different. In the first instance, maybe the essay produced by the AI focuses on President Lincoln during his White House tenure and omits anything about his childhood. The other prompt might produce an essay covering his birth to his death.

Students are probably not going to enter precisely whatever the teacher gave them as the prompt for the essay. It would seem sensible, as a cheater, to try variations. But even if all of the students enter the exact same prompt, the odds are pretty good that each essay will be somewhat different from the others.

These AI apps make use of a vast, internally crafted mathematical and computational network that has broadly pattern-matched on text found across the Internet. The process of generating an essay includes a probabilistic factor. The chosen words are unlikely to come out in the same order or with the same exact wording from one run to the next. Each generated essay will generally be different.
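To make that probabilistic factor concrete, here is a minimal sketch in Python of temperature-based sampling over a next-token distribution, which is the general mechanism at play. The candidate tokens and scores are made up for illustration; real models work over vocabularies of tens of thousands of tokens.

```python
import math
import random

def sample_next_token(scores, temperature=0.8):
    """Sample one token from a distribution over candidate tokens.

    scores: dict mapping candidate token -> raw model score (logit).
    A nonzero temperature injects randomness, so repeated runs on
    the same prompt yield different continuations.
    """
    # Scale the scores by temperature, then softmax into probabilities.
    scaled = {tok: s / temperature for tok, s in scores.items()}
    max_s = max(scaled.values())
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}

    # Draw a token in proportion to its probability.
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding

# Made-up next-token scores after a prompt like "Lincoln was born in"
candidates = {"Kentucky": 4.2, "1809": 3.7, "a": 2.1, "Illinois": 1.5}
print(sample_next_token(candidates))  # output varies run to run
```

Run it a few times and the chosen token changes, which is the same reason two students entering the identical prompt will almost never receive identical essays.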

There is one catch though to this. If the topic chosen is quite obscure, there is a chance that some of the essays produced will resemble each other. That would partially be because the source text underlying the pattern was thin to start with. That being said, the way in which the essay is composed could still be quite different. All I’m saying is that the essence of the content per se could potentially be roughly the same.

Not wanting to seem glum, but you could potentially make the same claim about a common topic like the life of Lincoln. How many different ways can you elaborate on the overall aspects of his life? If you somehow secured students in a locked classroom to write about Lincoln and gave them online access to research his life, I dare say the essays could end up somewhat similar anyway.

The Free And Easy Factor Is Substantial

If a student nowadays wants to cheat by paying someone across the Internet to write their essay, it is very simple to do so (I hope that doesn’t shock you; maybe I should have proffered a trigger warning beforehand).

The problem though is that you do need to pay for the essay. Also, there is some tiny chance that you could get caught later on. Did you use a credit card to pay for the essay? Perhaps better to use some form of underground payment processing to try and cover your tracks.

The beauty, or perhaps the exasperating factor, of generative AI is that right now most of these apps are available free of charge. No payment is required. No particular track record of your usage (well, to be clear, the AI app might be keeping track of your usage, especially since many of the AI apps require that you sign up with an email address, but of course, you can fake that too).

Some people naturally assume that you need to be an AI wizard to use a generative AI app.

Not so.

By and large, generative AI apps are astonishingly simple to use. You invoke the AI app. It presents you with an open textbox for you to enter your prompt. You enter a prompt and hit submit. The AI app generates the text.

That’s about it.

No specialized computer languages are needed. No knowledge of databases or data science. I assure you that just about any child in school can readily use a generative AI app. If a child can type, they can use these apps.

Some argue that the companies that provide the generative AI apps ought to first verify the age of the user, presumably to prevent non-adults from using the AI for cheating purposes when writing essays. If the user indicates they are not an adult, don’t let them use the AI app. Frankly, that is an unlikely prevention scenario, unless somehow AI-related laws have been enacted that try to establish these kinds of restrictions. Even if such laws are passed, you can likely get around this by using a generative AI app that is hosted in another country, etc.

Another prohibitive angle would be if the generative AI apps cost money to use. Suppose there was a per-transaction fee or a subscription fee. This would put the generative AI app on par with those humans across the Internet who charge to write an essay for you. Labor would go head-to-head with AI (as an aside, this all does suggest that humans who write essays for students for a living are going to be replaced by AI that does the same; the question is whether we should be saddened or pleased that those humans will no longer be able to make a living in that manner).

The companies making generative AI apps are certainly desirous of making money from these apps, though how to do so is still up in the air. Charging a transaction fee, subscription fee, or maybe charging per word generated are all on the table. Rather than charging people, monetization might be done via the use of ads. Perhaps each time that you use a particular generative AI app, you first have to see an ad. That might be a money maker.

I hate to rain on this parade, but as a means of overcoming student cheating, it isn’t going to be any kind of silver bullet. Not even close.

There are open-source versions of generative AI. People put those out there, and others are apt to make the apps available for free. One way or another, even if some companies charge a fee, you will be able to find variants that are free to use, though you might need to watch ads or sign up and give away some info about yourself for marketing purposes.

Does Multi-Step Editing Help

A student opts to use a generative AI app to produce their essay.

Rather than straightaway turning in the essay, the student decides to edit the essay. They judiciously take out a few words here. Put in a few words there. Move a sentence up. Move a sentence further down. After a bit of editing and refining, they now have an essay that they are ready to turn in.

Is this essay the work of the student or is it not?

I have brought you to the million-dollar, big-time, as-yet-unresolved question.

Let’s do some quick background about legal rights and infringement. This is a topic I’ve covered quite a bit, such as the link here and the link here, for example.

You likely already know something about copyrights and what is known as Intellectual Property (IP). Someone that has a copyrighted story is supposed to retain various legal rights associated with that story. They do not have a completely ironclad, all-encompassing set of legal rights. There are exclusions and exceptions.

One of the toughest issues about infringing on someone’s copyrighted material is the closeness of what you might have in comparison to the original source. Perhaps you’ve read or seen news stories about famous singers and their lyrics, whereby someone else wrote a song with seemingly similar lyrics and whether this was legally proper or not.

I had earlier mentioned that, usually, the generative AI app doesn’t produce an essay that is a carbon copy of the other materials it was earlier trained on via examining content on the Internet. The chances are that the material is generalized and all fuzzed together such that it no longer closely resembles whatever the source content consisted of.

We will have to wait and see how the legal process deals with this. If a generative AI app produces an artwork that is visually and obviously akin to some sourced artwork, we probably would lean toward accusing the AI and the makers of the AI of having violated the copyright associated with the original work. We can see it with our own eyes.

When it comes to essays, this can be trickier. The obvious instances are when whole sentences and paragraphs are word-for-word identical. We can all see that. But when the wording differs only by a modicum, we get into gray areas.

How far off from the original sourced material does the newly crafted material have to be in order to declare that it is a bona fide original on its own merits?

That’s a weighty question.

Let’s tie this to the student that uses the generative AI app for their essay.

Pretend for the moment that a particular essay generated by the AI app is going to be construed as an “original” essay. That is, assume that it doesn’t in any apparent way violate any other preexisting essay or text narrative anywhere on earth.

The student then is starting with an original source of the material. As already indicated, the student edits and refines this material. Things reach a point whereby the original as produced by the AI app now differs from the refined version that the student has devised.

Is this cheating?

Maybe yes, maybe no.

You can argue that it is. The student started with the AI writing their essay for them. All that the student has done is mechanically played around with the essay. We expect the student to write the essay out of thin air and use their own noggin to do so. It is clearly cheating to use the AI app to generate their baseline. Assign an “F” grade to the student.

Not so fast. You can argue it isn’t cheating. The student has recrafted the source material. If the difference between the AI-produced essay and the student-refined version is big enough, we would say that the student wrote the essay. Admittedly, they used other material in doing so, but can’t you say the same if they used an encyclopedia or some other source? This student deserves an “A” grade for having composed an essay via their own wits (notwithstanding having referenced other materials to do so).

Teachers are going to be caught in the middle of this already vexing question.

One approach is that a teacher might state categorically that the students must list all referenced materials, including whether or not a generative AI app was used. If a student fails to forthrightly list the generative AI as a reference, and if the teacher finds out that they failed to list it, the student summarily gets an “F” grade on the assignment. Or, perhaps some schools will consider this to be an act of cheating that causes the student to get an automatic flunk. Or maybe expelled. We’ll have to see how far schools go on these matters.

In general, we are heading to a topsy-turvy world of Intellectual Property and legal ownership of works such as essays (text), art (images), and video, including:

  • Some will seek legal redress from generative AI makers as to the sources of content that were used by the AI to generate the produced output.
  • Some will take the output of generative AI and consider the result to be their own owned works, and then try to seek legal redress from anyone that violates their “original” work.
  • This could cycle around, such that someone produces output from generative AI, which gets posted on the Internet, and then some other generative AI comes along and uses it in its training to produce akin works.

Turning A Negative Into A Positive

All this talk about the badness of generative AI when it comes to student cheating is perhaps clouding our minds, some exhort.

Take this in a different direction.

Are you sitting down?

Maybe teachers ought to consider purposely having students use generative AI as part of the learning process on how to write essays.

I’ve previously written about the so-called dual uses of AI, see the link here. The notion is that sometimes an AI system can be used for bad and sometimes it can be switched around and used for good. The worrisome aspect is when someone writes AI for good and is blissfully unaware of how easily their AI can be turned into the specter of badness. Part of Ethical AI is the realization that AI ought to be devised so that it cannot be turned overnight into a curse. This is an ongoing concern.

Back to the generative AI for producing essays.

I earlier brought up the concept that a student might be able to learn about writing by looking at written works that already exist. This makes abundant sense. Basically, the more that you read, the better the chances that you are expanding your capacity to write. As stated earlier, you still need to do the writing, since all the reading in the world isn’t necessarily going to get you to be a good writer if you don’t practice the act of writing.

We could use generative AI to foster this reading-and-writing coupling. Have a student intentionally use generative AI. The AI produces an essay. The student is given the assignment to critique the AI-produced essay. Next, the student is assigned to write a new essay, perhaps on a different topic, but can use the structure and other general elements of the earlier AI-generated essay.

This might be even more productive, some suggest, for students than simply reading books or other texts by writers that the student has no access to “interact” with. With the AI app, the student could try rerunning and producing the initial essay by using a multitude of prompts, one after another. The student might tell the AI to write a barebones essay on Lincoln. Next, the student asks for a lengthy essay on Lincoln that is written in an informal voice. After looking that over, the student indicates to the AI app to produce a highly formalized version of the Lincoln essay. Etc.

The assertion made is that this could materially aid a student in learning about writing and how writing can take place.

A recent research paper proposes this very point: “The authors of this paper believe that AI can be used to overcome three barriers to learning in the classroom: improving transfer, breaking the illusion of explanatory depth, and training students to critically evaluate explanations” (in a paper entitled “New Modes of Learning Enabled by AI Chatbots: Three Methods and Assignments”, Dr. Ethan Mollick and Dr. Lilach Mollick, Wharton School of the University of Pennsylvania & Wharton Interactive, December 12, 2022).

For example, they point out that improving learning transfer might happen this way: “AI is a cheap way to provide students with many examples, some of which may be inaccurate, or need further explanation, or may simply be made up. For students with foundational knowledge of a topic, you can use AI to help them test their understanding, and explicitly push them to name and explain inaccuracies, gaps, and missing aspects of a topic. The AI can provide an unending series of examples of concepts and applications of those concepts and you can push students to: compare examples across different contexts, explain the core of a concept, and point out inconsistencies and missing information in the way the AI applies concepts to new situations” (ibid).

It’s akin to the old refrain: if you can’t beat them, join them.

Turn the generative AI into an educational tool.

Yikes, comes the quick response.

You are putting the fox into the chicken coop. Students that had no idea what generative AI is are now going to be shown it, openly, by the overt actions of a teacher and their schools. If the students were clueless about the opportunities of cheating, you are putting it directly into their faces and their hands.

It seems entirely repulsive that those in authority would introduce students to a means of cheating. From here on, you would be putting even the most honest of students into the realm of cheating temptations. Everybody will have access to the cheating machine, and they are told to use it. No need to hide it. No need to pretend that you aren’t using generative AI. The school and the teacher made you use it.

The rejoinder to this is that you would have to have your head blindly and ignorantly in the sand to think that students aren’t going to become familiar with generative AI. While you are foolishly pretending they don’t know about it, they are scurrying outside of school to use it. Your better choice is to introduce the thing to them, discuss what it can and cannot be used for, and bring a bright shiny light to the whole conundrum.

It’s quite a doozy.

For those of you doing research on educational innovations in technology, you might want to take a look at generative AI and how it might change the nature of educational approaches and impact student learning. It is coming soon enough.

Using Detection To Rescue Us From Ruin

Switch hats and let’s consider digital artwork for a moment.

If you create a piece of digital art, you might want to mark it in some manner so that you can later discern whether someone has opted to use or reuse your artistry. A simple way to do this consists of changing some of the pixels or dots in your digital artwork. If you change a few here or there, the look of the artwork will still seem the same to human eyes. Viewers won’t notice those teensy altered pixels, set to some special color that can only be seen upon close inspection via digital tools.

You might know of these techniques as being a form of watermarking. Just as in the olden days there were attempts to watermark paper-based materials and other non-digitized content, we have gradually seen the rise of digital watermarks.

A digital watermark might be hidden in the image of a digital artwork. If that seems too intrusive to the image, you can try embedding the watermark into the file that contains the digital artwork (the so-called “meta-data” of the digital work).
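For the pixel-tweaking flavor, here is a minimal sketch in Python using the Pillow imaging library, hiding one bit per pixel in the least significant bit of the red channel. This is one common, deliberately simple variant of the idea; the bit string and image are made up for illustration.

```python
from PIL import Image  # pip install Pillow

def embed_watermark(img: Image.Image, bits: str) -> Image.Image:
    """Hide a bit string in the least significant bit of the red
    channel of the first len(bits) pixels. Imperceptible to the eye,
    but trivially stripped by anyone who knows the scheme."""
    out = img.copy()
    width, _ = out.size
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = out.getpixel((x, y))
        out.putpixel((x, y), ((r & ~1) | int(bit), g, b))
    return out

def read_watermark(img: Image.Image, n_bits: int) -> str:
    """Recover the hidden bits from the red channel's low-order bit."""
    width, _ = img.size
    return "".join(
        str(img.getpixel((i % width, i // width))[0] & 1)
        for i in range(n_bits)
    )

# Made-up artwork: a solid-color 64x64 image stands in for real art.
art = Image.new("RGB", (64, 64), color=(120, 45, 200))
marked = embed_watermark(art, "10110011")
print(read_watermark(marked, 8))  # prints "10110011"
```

Because only the lowest-order bit of one color channel changes, the marked image is visually indistinguishable from the original, which is precisely the point.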

There is a cat-and-mouse game that can arise.

Some evildoer comes along and discovers your digital watermark. They remove it. Now, they can seemingly freely use your digital artwork without worry that you’ll later be able to poke into it and showcase that it is clearly a rip-off of your efforts. Those scoundrels!

We need to ratchet up the digital watermark, which we can do via the use of cryptographic techniques and technologies. Think of secreted messages and encoding.

The idea is that we encode the digital watermark so that it is hard to find. It is also potentially hard to remove. We could even try to ensure that software that will display or allow the use of the digital artwork has to first check and see that a valid encoded digital watermark exists in the work, else it is considered an improper copy. Caught you red-handed.

Can we do the same for generative AI that produces text?

A gauntlet has been laid down. The problem, though, can be somewhat tougher than devising digital watermarks for artwork.

Here’s why.

Assume that the only spot you can put the watermark is directly in the text itself. I say this because the text that is generated doesn’t necessarily go into a file. The text is just text. You can cut and paste it from the generative AI tool. In this sense, there is usually no meta-data or file into which the watermark can be embedded.

You have to focus solely on the text. Pure text.

One avenue would be to sneakily have the generative AI produce the text in a manner that can be traced. As a crude but impractical example, imagine that we decided to start every third sentence with the word “And.” We would still generate a seemingly entirely fluent essay. The only trickery is that every third sentence starts with our chosen magical word. Nobody else knows what we are up to.

A student uses generative AI to produce the assigned essay about Lincoln. The student takes it directly from the AI app and emails it to the teacher. Turns out that the student waited until the last moment and was up against the published deadline. No time to review the essay. Just send it and hope for the best.

The teacher looks at the essay. Suppose the teacher has been told that our watermark consists of the magical word appearing at the start of every third sentence. The teacher detects that this is the case in this submitted essay. Though there is perhaps an incredibly slender chance that the student wrote the essay and perchance likes to use this particular word at the start of every third sentence, I think we can reasonably agree that this is highly unlikely and that the student instead probably used the generative AI to produce the essay.
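The teacher’s check could even be automated in a few lines. Here is a minimal sketch in Python of a detector for this crude scheme; the sentence splitting is naive on purpose, since the scheme itself is, as noted, merely illustrative.

```python
import re

MAGIC_WORD = "And"

def crude_watermark_present(essay: str) -> bool:
    """Return True if every third sentence begins with the magic word.

    The sentence splitting here is naive (on ., ?, !), which is fine
    for a toy check but not for serious text processing.
    """
    sentences = [s.strip() for s in re.split(r"[.?!]+", essay) if s.strip()]
    third_sentences = sentences[2::3]  # the 3rd, 6th, 9th, ... sentences
    if not third_sentences:
        return False
    return all(s.split()[0] == MAGIC_WORD for s in third_sentences)

sample = ("Lincoln led the nation. He preserved the Union. "
          "And he abolished slavery. His speeches endure. "
          "Historians revere him. And his legacy persists.")
print(crude_watermark_present(sample))  # prints True
```

The sample text trips the detector because its third and sixth sentences both begin with the magic word.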

Do you see how that works?

I trust that you do.

The problem now is how to come up with a watermark that isn’t quite so obvious. A student might notice that the sentences oddly keep using a particular word. They might guess what is going on. In turn, the student might move around sentences and do some rewording. This then pretty much sinks this particular watermark, since the essay is no longer readily spotted as having been written by the generative AI.

The cat-and-mouse game is once again pressing ahead.

We need to produce fluent text that somehow contains a “watermark” in a manner that cannot be easily discerned. Further, if possible, the watermark should continue to persist even if the essay is slightly revised. A whole-hog revision is probably not going to allow the watermark to survive. But we want some redundancy and resiliency so that the watermark will preferably be detectable even if some amount of change is made to the text.

A researcher that is doing some work for the company that makes ChatGPT (the AI app by OpenAI) is exploring some interesting cryptographic efforts along these watermarking considerations. Scott Aaronson is a Professor of Computer Science at the University of Texas at Austin and he recently gave a talk about some of the work taking place (a transcript is posted on his blog).

Consider this excerpt in which he briefly explains the existing approach: “How does it work? For GPT, every input and output is a string of tokens, which could be words but also punctuation marks, parts of words, or more—there are about 100,000 tokens in total. At its core, GPT is constantly generating a probability distribution over the next token to generate, conditional on the string of previous tokens. After the neural net generates the distribution, the OpenAI server then actually samples a token according to that distribution—or some modified version of the distribution, depending on a parameter called ‘temperature.’ As long as the temperature is nonzero, though, there will usually be some randomness in the choice of the next token: you could run over and over with the same prompt, and get a different completion (i.e., string of output tokens) each time.”

As noted, there is a designated amount of randomness as to which words will be placed next into the essay being derived by the ChatGPT app. That also explains the earlier point that each essay is likely to be somewhat different even if on the same topic. A purposeful, bounded random-selection approach is running under the hood during essay generation.

We now get to the juicy part, the cryptographic commingling: “So then to watermark, instead of selecting the next token randomly, the idea will be to select it pseudo-randomly, using a cryptographic pseudorandom function, whose key is known only to OpenAI. That won’t make any detectable difference to the end user, assuming the end user can’t distinguish the pseudorandom numbers from truly random ones. But now you can choose a pseudorandom function that secretly biases a certain score—a sum over a certain function g evaluated at each n-gram (sequence of n consecutive tokens), for some small n—which score you can also compute if you know the key for this pseudorandom function.”

I realize that might seem somewhat technologically jam-packed.

The essence is that the produced essay will appear to be fluent and you won’t be able to readily discern by reading the essay that it contains a digital watermark. To figure out whether a given essay does contain a watermark, you would need to feed the essay into a specially devised detector. The program that does the detection would compute a value based on the text and compare it to a stored key. In the approach being described, the keys would be held by the vendor and not otherwise be available; thus, assuming the keys are kept secret, only the anointed detection program could determine whether the essay was likely derived from ChatGPT.
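To give a feel for the general flavor, here is a toy sketch in Python of the detection side. To be clear, this is not OpenAI’s actual scheme or code; it merely illustrates the idea of a keyed pseudorandom score summed over n-grams, here using an HMAC as the pseudorandom function, with invented names throughout.

```python
import hmac
import hashlib

SECRET_KEY = b"held-only-by-the-vendor"  # invented for illustration

def g(ngram: tuple, key: bytes = SECRET_KEY) -> float:
    """Keyed pseudorandom score in [0, 1) for a single n-gram.

    Without the key this looks like pure randomness; the vendor,
    holding the key, can recompute it exactly.
    """
    digest = hmac.new(key, " ".join(ngram).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermark_score(tokens: list, n: int = 3) -> float:
    """Average keyed score over all n-grams of a token sequence.

    Ordinary text should average near 0.5. Text whose generation was
    secretly biased toward high-scoring tokens would average higher,
    and modest edits leave most n-grams (and the signal) intact.
    """
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.5
    return sum(g(ng) for ng in ngrams) / len(ngrams)

tokens = "four score and seven years ago our fathers".split()
print(round(watermark_score(tokens), 3))  # ~0.5 for unwatermarked text
```

A real detector would compare the computed score against a statistical threshold: ordinary text averages near 0.5, while generation that secretly favored high-scoring tokens pushes the average noticeably higher.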

He goes on to acknowledge that this is not foolproof: “Now, this can all be defeated with enough effort. For example, if you used another AI to paraphrase GPT’s output—well okay, we’re not going to be able to detect that. On the other hand, if you just insert or delete a few words here and there, or rearrange the order of some sentences, the watermarking signal will still be there. Because it depends only on a sum over n-grams, it’s robust against those sorts of interventions.”

A teacher might be granted access to a detector program that would check student essays. Suppose the matter is relatively easy in that the teacher has the students email their essays to both the teacher and the automated detector. The detector app then informs the teacher as to the likelihood that the essay was crafted by ChatGPT.

Now, if the detector is openly available to just anyone, you would have “overachieving” student cheaters that would simply run their essays through the detector and make a series of changes until the detector indicated a low probability that the essay was derived by the generative AI. More of the cat-and-mouse. Presumably, the detector has to be kept tightly protected via passwords, or other cryptographic means and methods are needed (a variety of both key-based and keyless methods can be utilized).

A teacher might be faced with the possibility of dozens or hundreds of generative AI apps available for use on the Internet. In that case, trying to get all of them to adopt some form of digital watermarking, and having to feed an essay into each of their detectors, well, it just gets more bedeviling and logistically complicated.

No More Essays Outside Of The Classroom

A doom and gloom perspective is that maybe teachers will have to abandon the use of outside essay writing. All essays must be written only within the controlled environment of a classroom.

This has lots and lots of problems.

Suppose a student were to normally require ten hours to write a particular full-blown essay that is a class project. How would this be done inside a classroom? Are you going to parcel it out and have the student write a small piece of the essay over a series of days? Think about the difficulties this presents.

Some claim that perhaps the matter is being overblown.

Teachers should do as they have always done about plagiarism by students. Upfront, the teacher declares that plagiarism is a serious cheating concern. Emphasize that the use of generative AI, in any fashion, will be considered a cheating action.

Make penalties that carry significant weight, such as a low grade, a flunked class, or expulsion from a school if it gets that far. Require students to attest in writing for each outside essay assignment that what they have turned in is their own work (done without aids such as generative AI, copying from the Internet, using fellow students, using a parent, paying to get it done, and so on). Also, require that the students list any online tools used in the preparation of the work, specifically including any generative AI usage.

The teacher might or might not use a detector app to try and discern whether the submitted essay is likely by a generative AI app. This is a potentially burdensome step, depending on how easy the detectors are to use and access.

Teachers should presumably already be taking steps to ferret out whether outside-written essays seem legitimate. By doing in-class essay writing, there is a chance to compare and contrast, realizing though that the time for writing in a classroom is less and might also be hampered by the restriction of not allowing access to online reference materials.

The gist is that we ought not to take the route of abruptly chucking out the use of outside essay writing. Some would deplore this as a rash act and one that seems reminiscent of throwing out the baby with the bathwater (an old saying, perhaps worth retiring).

If outside writing is entirely discontinued as a learning activity, there are likely severe and prolonged downsides to removing this seemingly everyday educational activity from the curriculum. There is a tradeoff involved. How many students will cheat, despite all of the above-mentioned checks and balances? How many students won’t cheat and therefore will continue to use a beneficial educational approach to advance their writing prowess?

In theory, hopefully, the percentage of cheaters will be small enough such that outside writing is still meritorious for the preponderance of students.

Conclusion

AI can be quite a headache.

For teachers, AI can be both a blessing and a curse. Either way, it means that teachers need to know about AI, along with how to contend with AI twists and turns associated with their teaching activities, which is yet another added weight on their already overextended backs and shoulders. Shoutout to teachers everywhere.

Maybe we can wish AI to go away.

Nope.

You see, we aren’t going to turn back the clock and expunge generative AI. Anyone that calls for this is a dreamer. And, as an aside, I am using the word “And” as the first word of the third sentence of this paragraph (oops, giving away the key!). Generative AI is here to stay.

Here’s a prompter to get your heated discussions going: Generative AI is going to become more pervasive and have even more astounding and unnerving capabilities.

Mic drop.

Final thought for now.

Shakespeare famously wrote that “To be, or not to be: that is the question.”

I assure you that generative AI is going to be. It already is.

We have to figure out how we want generative AI to enter into our lives, and how society will opt to shape and guide such usage. If you ever needed a reason for thinking about AI Ethics and AI Law, perhaps generative AI will prompt you toward seeking to know what we are, even if we do know not what we may be (hidden Shakespeare reference).

Source: https://www.forbes.com/sites/lanceeliot/2022/12/18/enraged-worries-that-generative-ai-chatgpt-spurs-students-to-vastly-cheat-when-writing-essays-spawns-spellbound-attention-for-ai-ethics-and-ai-law/