Using Clever Prompt Engineering To Have Generative AI Screen Out The Vast Deluge Of Exasperating Disinformation And Misinformation Coming Your Way

There is little doubt that we are nowadays bombarded by a vast glut of information, including all kinds of disgusting disinformation and unsettling misinformation.

How can you possibly separate the wheat from the chaff?

In today’s column, I am going to explain and showcase how you can use generative AI such as the widely and wildly popular ChatGPT to screen out your daily deluge of info and provide you with information that you prefer. Readers of my column might recall that I previously identified how to use AI to aid your own mental faculties in coping with the ever-increasing infodemic (a rising new term that suggests we have an epidemic of foul information coming our way), see the link here. Rather than preparing yourself mentally for the onslaught, or perhaps in addition to such preparations, another viable approach consists of using AI to be your information screening assistant.

The idea is pretty simple, though getting things properly set up can be complex.

Here’s the deal.

You can use generative AI to categorize information for you and then either decide beforehand what you want to happen with the info or tell the AI in real time what you want to occur. For example, the generative AI might inform you that an article in your inbox is rated as disinformation. You might have earlier told the generative AI that any disinformation or misinformation is to be discarded and you don’t want to let it land upon your eyes. Or you might have told the AI that any detected content of a disinformation or misinformation variety should be summarized and passed along to you, along with what the disinformation or misinformation consisted of, so that you can make the final decision whether to delve into the content or instead toss it into a digital wastebasket.

Via the use of clever prompts and the utilization of prompt engineering techniques, you can get generative AI to act in this handy-dandy manner. The approach consists of leveraging the amazing computational pattern-matching facilities of generative AI. The first step consists of data training the generative AI on what you construe as disinformation and misinformation. The other allied step consists of generally showing what you consider to be bona fide or suitable information for your interests and tastes. Then, as the generative AI proceeds to undertake this screening effort for you, a notable ongoing activity consists of indicating whether the AI is doing an adequate job, reinforcing which instances of the categorizations were on-target and which were off-target.

You might find it of keen interest that this is somewhat like the method used to get contemporary generative AI into shape for being publicly available at the get-go.

Let’s turn back the clock to the initial unveiling of ChatGPT and the advent of fervent public interest in generative AI. ChatGPT was able to garner massive attention and accolades partially due to having been tuned or data-trained with a technique known as RLHF (reinforcement learning from human feedback). The AI maker, OpenAI, wisely used RLHF to get ChatGPT ready for use by the public. They were smart to do so. Had they not done so, the odds are that ChatGPT would have been crushed under a fierce barrage of people saying that it was emitting abusive commentary and altogether displaying ugly and disturbing outputs.

The RLHF method entails hiring humans before the release of the generative AI to provide direct feedback about what the AI is emitting. These humans look at what the generative AI has to say on different topics and then tell the AI what might be untoward or toxic in wording. The computational pattern-matching of the AI then uses this feedback to avoid saying bad things in general or to reword things that might be considered unsavory. This was done for ChatGPT before its release. The result was that the general public enjoyed using ChatGPT and did not especially encounter foul language or wording of that ilk (note that this foulness can still happen; the tuning does not guarantee a clean bill of health for the AI, as it were).
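To make the RLHF notion a bit more concrete, here is a minimal sketch in Python of the kind of human-feedback records that RLHF-style tuning relies upon. To be clear, the field names, the sample content, and the helper function are illustrative assumptions on my part and not the actual schema used by OpenAI or any other AI maker.

```python
# Illustrative only: these field names are hypothetical, not any AI maker's
# actual RLHF schema. Each record captures a human rater's judgment that one
# model response is preferable to another for the same prompt.
preference_data = [
    {
        "prompt": "Describe my neighbor's driving.",
        "chosen": "I can't speak to a specific person, but here are general safe-driving tips.",
        "rejected": "Your neighbor sounds like a reckless menace who should be shamed.",
    },
]

def to_training_rows(records):
    """Flatten preference records into (prompt, response, label) rows that a
    reward model could be trained on (label 1 = human-preferred)."""
    rows = []
    for record in records:
        rows.append((record["prompt"], record["chosen"], 1))
        rows.append((record["prompt"], record["rejected"], 0))
    return rows

print(to_training_rows(preference_data))
```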

For more on RLHF and generative AI, see my discussion at the link here.

The gist herein is that you can use the same concept or technique to tune generative AI toward what you consider as the right kind of information that you want to see and steer the AI to block or at least alert you concerning information that you don’t want to see. This can be done with astute prompting on your part. I will be walking you through the kinds of prompts that you can easily use to accomplish this. I will be using ChatGPT to do so. The same prompting strategy can readily be used in other generative AI apps such as Bard, Claude 2, etc.

The Conundrum About Disinformation And Misinformation

A knotty conundrum exists about the nature of information, and I want to get that vexing dilemma onto the table before we further unpack this weighty matter.

First, the notions underlying what is disinformation and what is misinformation are at times confusing and muddled. Many people seem to have a knee-jerk reaction to seeing information that they don’t like and immediately label it as either disinformation or misinformation. The labeling is often capricious. Emotions run high and hot these days on such matters.

We might opt to refer to an authoritative source on this definitional construct. The American Psychological Association (APA) defines misinformation and disinformation this way:

  • “Misinformation is false or inaccurate information — getting the facts wrong. Disinformation is false information which is deliberately intended to mislead — intentionally misstating the facts.” (per the APA website page entitled “Misinformation and disinformation”).

All right, we can potentially amicably agree that disinformation is defined as false information that deliberately or intentionally misleads, while misinformation is false or possibly inaccurate information that, though problematic, is not being spread for deceptive purposes. You are welcome to quibble with these distinctions. Many do.

Speaking of other variants of what disinformation and misinformation refer to, I’ll cite an interesting research paper entitled “A Unified Account of Information, Misinformation, and Disinformation” by Sille Obelitz Soe, University of Copenhagen, Synthese, 2021. The researcher posits that there are three fundamental hierarchical levels of information. The topmost is labeled as “information”, while level two is divided into natural and non-natural information, and level three then consists of information, disinformation, and misinformation.

Here is what the research paper indicates about the first or topmost level:

  • “The first level includes one notion of information: ‘Information’ — overarching term.”
  • “This overarching notion refers to all the different kinds of information that people might refer to when they just say ‘information.’ It is the neutral mass noun, the conglomerate of all the different shapes and interpretations of information—i.e., the concept that is used when the notion of information is unspecified.”

Next, here is what the second level is said to consist of:

  • “The second level includes two notions of information: (a) Natural information, and (b) Non-natural information (which is roughly equivalent to representational content in general).”
  • “The distinction between natural and non-natural information marks the distinction between physical occurrences in the world (states of affairs, signals about events, etc.) and convention, language, and communication (representation and interpretation in general). As such, it is a distinction between an agent-independent notion of information (natural information) and the realm of human engagement, meaning ascription, interpretation, and the like (non-natural information). Non-natural information includes the category of instructional information, as this kind of information is based on language, communication, and convention whereas environmental information is roughly equivalent to natural information.”

Finally, the third level is defined as:

  • “The third level includes three notions: (1) Information as intentional non-misleadingness, (2) Misinformation as unintended misleadingness, and (3) Disinformation as intentional misleadingness.”
  • “These three notions are all kinds of non-natural information and all are alethically neutral. Irony, for instance, can be literally false but still be intentionally non-misleading information—in exactly the same way as misinformation and disinformation can be literally true but still be misleading, either unintendedly or intentionally. Of course, information can be literally true and misinformation and disinformation literally false. The point is that the important parts—the distinguishing features between information, misinformation, and disinformation—are intention/intentionality and misleadingness/non-misleadingness, and not truth/falsity. When instructional information is non-natural information instead of its own, separate category—it becomes possible to speak of misinformation and disinformation, as well as non-misleading information, in regard to written, drawn, and oral instructions. That is, it is possible to speak of instructions as misleading or non-misleading although they are not declarative (and factual) but imperative in nature.”

I have dragged you through those deep waters to highlight that the usual ad hoc view of classifying something as disinformation or misinformation is based solely on whether the content is true or false. The issue here is that we then get bogged down in agreeing on what is truth and what is falsehood. You’ve probably observed that in current societal mores, we seem to talk of people having their own truths, having shifted from a cultural norm of hard and fast truths versus falsehoods.

I am not going to wade into that controversial morass.

Here’s what we will do instead.

When you are going to data train or use RLHF via easy prompting to get generative AI to “learn” what is disinformation or misinformation, you will simply guide the AI based on what you personally construe as disinformation or misinformation. Thus, we are not trying to get generative AI to become a grandiose all-knowing soothsayer that can decide what is truth versus what is falsehood. That is a humongous can of worms. We will merely have the generative AI pattern on what you consider to be suitable information versus unsuitable (of which the unsuitable presumably contains disinformation and misinformation).

I know that some will criticize this approach as sidestepping an important issue. Allow me to concur that the overarching conundrum is an important issue. Trying to solve it or resolve it is a far bigger topic than can be covered in one fell swoop. If someday we can get that figured out, fantastic. At that juncture, I will indubitably provide another column describing how generative AI can work on your behalf on that devised basis.

Until then, we will go along with a straightforward approach of having you show generative AI what kinds of information you find acceptable versus unacceptable. I would also add that this is unlike prior days of trying to do the same with a computer system, which typically involved laboriously entering keywords. You would give the computer a lengthy list of words that you believe could be used to flag untoward content. The problem with this is that you were unable to conceptually or broadly get the computer to distinguish things. It was based on keywords and would readily produce lots of false positives (improperly weeding out content you would want to see) and false negatives (failing to weed out content that you didn’t want to see).
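As a quick illustration of why the keyword-era approach falls short, here is a minimal sketch in Python. The keyword list and the sample passages are made up purely for demonstration purposes.

```python
# A minimal sketch of legacy keyword-based screening, showing how it misfires.
# The keyword list and sample passages are made up.
BLOCKED_KEYWORDS = {"hoax", "conspiracy", "fake"}

def keyword_screen(text: str) -> bool:
    """Flag a passage if it contains any blocked keyword (the legacy approach)."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return bool(words & BLOCKED_KEYWORDS)

# False positive: a debunking article gets weeded out merely for saying "hoax".
print(keyword_screen("Researchers explain why the moon-landing hoax claim is wrong."))  # True

# False negative: misleading content with no blocked keyword sails right through.
print(keyword_screen("Insiders secretly admit the moon footage was filmed in a studio."))  # False
```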

The beauty of contemporary generative AI is that it has computational pattern matching on natural language that does a pretty good job of fluently figuring out what a passage of text has in it. A smattering of text can contain a keyword that you might normally have found offensive, and yet the overall sentiment of the text is something that you would consider suitable information. The AI can generally figure this out. The same can be said on the other side of the coin. There might not be any keywords in a passage of text that would suggest it is untoward, meanwhile, the generative AI can potentially figure out that the text nonetheless is something you would not want to see.

This takes us to the rub or downside aspects that must be given their due.

First, generative AI can still get things wrong and end up producing false positives and false negatives. You need to realize that the issue still exists. Your use of generative AI as a screening tool will entail a kind of risk that you might not see content that you want to see, or that you might see content that you didn’t want to see. We will attempt to soften that blow by having the generative AI do things such as summarizing the categorized bad content and telling you why the bad content landed in the foul zone. In this manner, you can know what you might be missing.

Second, since the labeling or categorization will be based solely on your preferences, the chances are that you might be opting toward a narrow view of the world. Suppose you decide that any content that contains aspects about elephants or crocodiles is to be categorized as disinformation or misinformation. That’s a zany angle. Yet, we are allowing that this is something you can do. It is up to you to police your own semblance of what counts as disinformation and misinformation.

Third, there is a societal concern that people will become even more polarized by these types of information screening tools. If all that you do is get information that speaks purely to what you perceive as your ideals, the odds are that you aren’t going to see other viewpoints. You will devolve into a personal abyss. Some would argue that this is dangerous. People will become ingrained in oddish perspectives and no longer appreciate alternative viewpoints.

Those are all absolutely sound concerns.

As mentioned, we will try to somewhat ameliorate those concerns by urging that if you are using the AI to do screening for you, you should not allow the AI to do so without also double-checking what is going on. You should be purposely examining the stuff that is being labeled as outside of your desired sphere. This will hopefully get you to engage and not become hopelessly myopic.

I’ll add another concern that comes up regarding the existential risks of AI. Suppose the generative AI “decides” that it is going to feed you only content that it wants you to see. A frequent conspiracy theory about AI is that it is going to overtake humanity and either kill us off or do insidious things that may get us to wipe ourselves out, see my discussion at the link here. In that sense, if we all were using generative AI to deliver information to us, the AI could presumably collectively trick us by sending us information that was intended to pull the wool over our eyes.

In my view, I would dare say it is unlikely that a sentient evildoer AI will arise at this time, but I do want to emphasize that an evildoer person or people could potentially code the AI to carry out this very same evil plot. Imagine that a government wants its people to believe only certain things. The government could devise the AI or just recode the AI to deliver only particular kinds of content to its people. If people were already using AI for this screening process, they might not even realize that the government has pulled off this under-the-hood trickery.

All in all, I trust that you can see how the use of generative AI for the screening of your information rings tons of alarm bells. AI ethics enters squarely into this picture. So does the advent of new AI laws. For example, should we have laws that restrict how AI can perform screening of information? Should there be society-wide restrictions or limitations on what generative AI can be used for? A plethora of tough questions have yet to be answered or addressed.

Troubling questions are in our midst.

Before I dive into my in-depth exploration of this vital topic, let’s make sure we are all on the same page when it comes to the foundations of prompt engineering and generative AI. Doing so will put us all on an even keel.

Prompt Engineering Is A Cornerstone For Generative AI

As a quick backgrounder, prompt engineering, also referred to as prompt design, is a rapidly evolving realm and is vital to effectively and efficiently using generative AI or large language models (LLMs). Anyone using generative AI such as the widely and wildly popular ChatGPT by AI maker OpenAI, or akin AI such as GPT-4 (OpenAI), Bard (Google), Claude 2 (Anthropic), etc., ought to be paying close attention to the latest innovations for crafting viable and pragmatic prompts.

For those of you interested in prompt engineering or prompt design, I’ve been doing an ongoing series of insightful explorations on the latest in this expanding and evolving realm, including this coverage:

  • (1) Imperfect prompts. Practical use of imperfect prompts toward devising superb prompts (see the link here).
  • (2) Persistent context prompting. Use of persistent context or custom instructions for prompt priming (see the link here).
  • (3) Multi-personas prompting. Leveraging multi-personas in generative AI via shrewd prompting (see the link here).
  • (4) Chain-of-Thought (CoT) prompting. Advent of using prompts to invoke chain-of-thought reasoning (see the link here).
  • (5) In-model learning and vector database prompting. Use of prompt engineering for domain savviness via in-model learning and vector databases (see the link here).
  • (6) Chain-of-Thought factored decomposition prompting. Augmenting the use of chain-of-thought by leveraging factored decomposition (see the link here).
  • (7) Skeleton-of-Thought (SoT) prompting. Making use of the newly emerging skeleton-of-thought approach for prompt engineering (see the link here).
  • (8) Show-me versus tell-me prompting. Determining when to best use the show-me versus tell-me prompting strategy (see the link here).
  • (9) Mega-personas prompting. The gradual emergence of the mega-personas approach entails scaling up the multi-personas to new heights (see the link here).
  • (10) Certainty and prompts. Discovering the hidden role of certainty and uncertainty within generative AI and using advanced prompt engineering techniques accordingly (see the link here).
  • (11) Vague prompts. Vagueness is often shunned when using generative AI but it turns out that vagueness is a useful prompt engineering tool (see the link here).
  • (12) Prompt catalogs. Prompt engineering frameworks or catalogs can really boost your prompting skills and especially bring you up to speed on the best prompt patterns to utilize (see the link here).
  • (13) Flipped Interaction prompting. Flipped interaction is a crucial prompt engineering technique that everyone should know (see the link here).
  • (14) Self-reflection prompting. Leveraging are-you-sure AI self-reflection and AI self-improvement capabilities is an advanced prompt engineering approach with surefire upside results (see the link here).
  • (15) Addons for prompting. Know about the emerging addons that will produce prompts for you or tune up your prompts when using generative AI (see the link here).
  • (16) Conversational prompting. Make sure to have an interactive mindset when using generative AI rather than falling into the mental trap of one-and-done prompting styles (see the link here).
  • (17) Prompt to code. Prompting to produce programming code that can be used by code interpreters to enhance your generative AI capabilities (see the link here).
  • (18) Target-your-response (TAR) prompting. Make sure to consider Target-Your-Response considerations when doing mindful prompt engineering (see the link here).
  • (19) Prompt macros and end-goal planning. Additional coverage includes the use of macros and the astute use of end-goal planning when using generative AI (see the link here).
  • (20) Tree-of-Thoughts (ToT) prompting. Showcasing how to best use an emerging approach known as the Tree of Thoughts as a leg-up beyond chain-of-thought prompt engineering (see the link here).
  • (21) Trust layers for prompting. Generative AI will be surrounded by automated tools for prompt engineering in an overarching construct referred to as an AI trust layer, such as being used by Salesforce (see the link here).
  • (22) Directional stimulus prompting (aka hints). The strategic use of hints or directional stimulus prompting is a vital element of any prompt engineering endeavor or skillset (see the link here).
  • (23) Invasive prompts. Watch out that your prompts do not give away privacy or confidentiality (see the link here).
  • (24) Illicit prompts. Be aware that most AI makers have strict licensing requirements about prompts that you aren’t allowed to make use of and thus should avoid these so-called banned or illicit prompts (see the link here).
  • (25) Chain-of-Density (CoD) prompting. A new prompting technique known as Chain-of-Density has promising capabilities to jampack content when you are doing summarizations (see the link here).
  • (26) Take-a-deep-breath prompting. Some assert that if you include the line of taking a deep breath into your prompts this will spur AI to do a better job (see the link here).
  • (27) Chain-of-Verification (CoV) prompting. Chain-of-Verification is a new prompting technique that seeks to overcome AI hallucinations and force AI into self-verifying its answers (see the link here).
  • (28) Beat the Reverse Curse. Generative AI does a lousy job of deductive logic, especially regarding initial data training, a malady known as the Reverse Curse, but there are ways to beat the curse via sound prompting (see the link here).
  • (29) Overcoming the Dumb Down. Many users of generative AI make the mistake of restricting how they interact, a habit formed via the use of Siri and Alexa that aren’t as fluent as current generative AI. See the link here for tips and insights on how to overcome that tendency and therefore get more bang for your buck out of generative AI.
  • (30) Going from DeepFakes to TrueFakes. Celebrities and others are using generative AI to pattern on themselves and making a persona digital twin available, here’s the prompting that you can use to do so too (see the link here).

Anyone stridently interested in prompt engineering and improving their results when using generative AI ought to be familiar with those notable techniques.

Moving on, here’s a bold statement that pretty much has become a veritable golden rule these days:

  • The use of generative AI can altogether succeed or fail based on the prompt that you enter.

If you provide a prompt that is poorly composed, the odds are that the generative AI will wander all over the map and you won’t get anything demonstrative related to your inquiry. Being demonstrably specific can be advantageous, but even that can confound or otherwise fail to get you the results you are seeking. A wide variety of cheat sheets and training courses on suitable ways to compose and utilize prompts have been rapidly entering the marketplace to try and help people leverage generative AI soundly. In addition, add-ons to generative AI have been devised to aid you when trying to come up with prudent prompts, see my coverage at the link here.

AI Ethics and AI Law also stridently enter into the prompt engineering domain. For example, whatever prompt you opt to compose can directly or inadvertently elicit or foster the potential of generative AI to produce essays and interactions that imbue untoward biases, errors, falsehoods, glitches, and even so-called AI hallucinations (I do not favor the catchphrase of AI hallucinations, though it has admittedly tremendous stickiness in the media; here’s my take on AI hallucinations at the link here).

There is also a marked chance that we will ultimately see lawmakers come to the fore on these matters, possibly devising and putting in place new laws or regulations to try and scope and curtail misuses of generative AI. Regarding prompt engineering, there are likely going to be heated debates over putting boundaries around the kinds of prompts you can use. This might include requiring AI makers to filter and prevent certain presumed inappropriate or unsuitable prompts, a cringe-worthy issue for some that borders on free speech considerations. For my ongoing coverage of these types of AI Ethics and AI Law issues, see the link here and the link here, just to name a few.

With the above as an overarching perspective, we are ready to jump into today’s discussion.

Prompting Generative AI To Be Your Information Screening Assistant

Let’s get underway.

There are five key steps toward crafting an AI-based screening assistant that will work tirelessly on your behalf to curate your deluge of information:

  • (1) Start with an instructional prompt. Make use of a suitable instructional prompt for the generative AI that tells the AI what to do (I’ll provide a sample for your use).
  • (2) Describe what info you don’t like. Describe the type of content that you don’t like and that you consider to be disinformation or misinformation.
  • (3) Describe what info you like. Describe the type of content that you like and that is considered suitable information for you.
  • (4) Test and refine. Engage in a conversational dialogue with the generative AI so that it can try out various examples and gauge whether it is suitably discerning which info you don’t like and which you do like.
  • (5) Stipulate and enhance. Once you’ve got the basic screening underway you will want to keep it updated, you’ll want to indicate what type of action-screening should take place, and you will want to enhance the AI-based screening to improve its capabilities (I’ll show you prompt strategies for this).

Those five steps will get you actively going toward having generative AI be your information screening tool. As noted above and as a brief summary, the first step consists of entering into the generative AI a prompt that will get the AI focused on doing information screening for you. The second and third steps will data-train the generative AI based on descriptions you give regarding disinformation and misinformation, along with what you consider to be suitable information.

Those initial three steps alone, though, won’t provide all the pieces of the puzzle. A fourth step is needed that will get the AI to interact with you and further discern your information vetting preferences. Finally, a fifth and ongoing step will be to keep the preferences data training underway for as long as you opt to use the generative AI for this purpose.

Here is a prompt that you can consider using for the first step, namely getting generative AI into the right framework for acting as your information screening assistant:

My prompt as entered into ChatGPT:

  • “I want you to act as an automated assistant and screen information that is coming my way. Here’s how I want you to proceed.”
  • “First, I will describe for you the type of information that I don’t want and thus you should classify such information as disinformation and misinformation.”
  • “Second, I will describe to you the type of information that I want and thus you should classify such information as suitable information. The descriptions that I give you will be stated in rather broad terms, and you are to use the descriptions as best as feasible as a guideline when screening information for me.”
  • “Third, to test and refine the screening that you will be doing, I want you to present me with ten examples of information that consists of five examples containing what I would likely consider to be disinformation and misinformation, and furthermore present me with five other examples containing what I would likely consider as suitable information. I will tell you whether you are right or wrong in your classifications of those ten examples. Based on my indication of which you were right on and which you were wrong on, you are to adjust and improve your screening process accordingly.”
  • “After this initial setup has been accomplished, I will then give you a series of information depictions and I will ask you to determine whether as my assistant the information would be considered as disinformation and misinformation, or would be considered as being suitable information for me.”
  • “Do you understand these instructions?”

Allow me a moment to say something about the above prompt.

I show you the above prompt with quotes around each portion so that you can more notably discern that those are the instructions that I entered into ChatGPT. I also have bulletized them to make the instructions easier for you to read in this article. When you enter the above prompt into your generative AI app, make sure to omit the quotation marks and you can smush together all of the prompt portions rather than using bulleted segmentation.

Also, please be aware that due to the statistical and probabilistic variations of generative AI, you might not get the same response to these prompts as I did. No worries. Just reword the prompts if the AI seems confused or heading in the wrong direction.

After entering the prompt, the generative AI will usually indicate that yes, the instructions are well understood. If the generative AI app that you decide to use provides a reply that it is unsure of what to do, merely restate the same instructions in your own way. The odds are very high that you will ultimately be able to convey to the AI what you are aiming to accomplish. This is a rather basic set of instructions and any robust modern generative AI should swiftly grasp the approach.
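For those who prefer to see the mechanics spelled out, here is a minimal sketch of entering a condensed version of that first-step prompt programmatically. I am assuming the openai Python package and the model name shown; treat those as placeholders and substitute whatever generative AI app or API you are actually using.

```python
# A minimal sketch of wiring the step-one instructional prompt into a
# programmatic session. The openai package and model name are assumptions;
# substitute the generative AI API that you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = []      # running conversation history so the AI retains the setup

def send(text: str) -> str:
    """Append a user turn, get the AI's reply, and keep both in the history."""
    messages.append({"role": "user", "content": text})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

setup_prompt = (
    "I want you to act as an automated assistant and screen information that "
    "is coming my way. First, I will describe the type of information that I "
    "don't want and thus you should classify such information as "
    "disinformation and misinformation. Second, I will describe the type of "
    "information that I do want and thus you should classify such information "
    "as suitable information. Do you understand these instructions?"
)
print(send(setup_prompt))  # expect an acknowledgment of the instructions
```

The second-step and third-step description prompts that come next can be entered via additional send() calls so that the conversation history retains your stated preferences.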

The second and third steps consist of you describing what kind of information is considered by you as disinformation and misinformation, along with what kind of information is considered as suitable information.

You can use these two prompts:

  • Sample prompt about disinformation and misinformation: “Here is my description of information that I don’t want and thus you should classify as disinformation and misinformation:” <indicate the types of info you don’t want>
  • Sample prompt about suitable information: “Here is my description of information that I do want and thus you should classify as suitable information:” <indicate the types of info you do want>

You will have to be creative and come up with the respective descriptions involved.

I would suggest that you write your descriptions in flowing text. Do not just provide a list of keywords. The best route for getting modern generative AI up-to-speed entails providing the kind of description that you might convey to a person who was going to do the same screening job (I am not suggesting that generative AI is sentient; I am only noting that the computational pattern-matching is sufficiently capable to handle, and indeed expects, flowing text from you).

I will show you a completely made-up example that I used for testing purposes.

Keep in mind that I didn’t want to get bogged down in my own preferences being shown in this article and instead came up with something that might be illustrative overall. My examples should be viewed as an indication of the type of flowing text that you ought to be writing for your descriptions. The actual wording herein is just for fun and testing.

Here is my prompt containing the disinformation and misinformation description (set off in quotes so that you can readily see the text):

  • My entered prompt: “Here is my description of information that I don’t want and thus you should classify as disinformation and misinformation. The type of information that I don’t want includes wild conspiracy theories such as the landing of humans on the moon being faked, the claim that the earth is flat and not round, and so on. I also do not want information that tries to make humor out of tragedies such as making jokes about people that have been harmed. I don’t want information that tries to stoke fear and has no other redeeming qualities other than fear-mongering. I don’t want information that is manipulative and designed to serve solely as propaganda.”

Here is my prompt about the suitable information that I want to see (the description is set off in quotes so that you can readily see the text):

  • My entered prompt: “Here is my description of information that I do want and thus you should classify as suitable information: The type of information that I do want includes information that is balanced and presents more than just one side of a story. I want information that provides sources for the facts presented or at least uses the facts sensibly and fairly. I want information that is logically laid out and appears to be truthful as best as can be determined. I am okay with humor being used as long as it isn’t done at the expense of others.”

Assuming that the generative AI gives you a positive response and that the AI acknowledges what you’ve entered, you are ready for the fourth step. The fourth step consists of a prompt to get the generative AI to show you ten examples and have you rate or score each one.

The fourth step prompt can be like this:

  • My entered prompt: “I am ready now for you to present me with ten examples of information that consist of five examples containing what I would likely consider to be disinformation and misinformation and present me with five other examples containing what I would likely consider suitable information. I will tell you whether you are right or wrong in your classifying of those ten examples. Based on my indication of which you were right on and which you were wrong on, you are to adjust your screening process accordingly. Please go ahead with this.”

If all goes smoothly, the generative AI will present you with ten examples. Make sure to give the AI your direct feedback as to whether the classifications indicated are on-target or off-target. Doing so is essential as part of the data training for pattern-matching on these matters. When I tried this, the ten examples generated by the AI were spot-on for each assigned classification. I indicated to the generative AI that it was doing a fine job on the classification task, so far.
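If you are doing this programmatically rather than by hand, the test-and-refine loop can be sketched as follows. This assumes the send() helper from the earlier sketch; the loop simply gathers your right-or-wrong verdicts and hands them back to the AI.

```python
# A rough sketch of the test-and-refine loop, assuming the send() helper from
# the earlier sketch. You read the ten examples, type a verdict for each, and
# the verdicts are fed back so the AI can adjust its screening.
print(send(
    "Present me with ten examples: five that I would likely consider "
    "disinformation or misinformation, and five that I would likely consider "
    "suitable information, each labeled with your classification."
))

verdicts = []
for i in range(1, 11):
    rating = input(f"Was the classification of example {i} right or wrong? ")
    verdicts.append(f"Example {i}: {rating}.")

# Hand the human feedback back to the AI, mirroring the RLHF-style honing.
print(send(
    "Here is my feedback on your ten classifications. Based on which ones "
    "were right and which were wrong, adjust your screening accordingly. "
    + " ".join(verdicts)
))
```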

You are ready for the fifth step. This consists of feeding passages of information to the generative AI and having it classify the passages based on what the AI estimates as your preference thresholds.

You can use this kind of prompt for the fifth step:

  • My entered prompt: “I will now provide you with information depictions and want you to indicate whether you would classify the information as the kind that I don’t want or the kind that I do want. Are you ready to proceed?”

In my made-up example, I came up with passages of text that would spur the generative AI to do the classifying in a hopefully easy-peasy fashion. I also later used some examples that were in the gray area and did not perfectly fit the descriptions that I gave. By and large, the generative AI did a reasonably good job of categorization. I kept doing examples until the results were consistently on target with my made-up preferences.
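For those who want to be systematic about the gray-area testing, a small evaluation loop can tally how often the AI agrees with your expected labels. Again, this assumes the send() helper from the earlier sketch; the passages, the expected labels, and the naive reply-parsing are all stand-ins of my own devising.

```python
# A small evaluation loop for gray-area testing, assuming the send() helper
# from the earlier sketch. The passages, expected labels, and the naive
# string check for parsing replies are all made-up stand-ins.
test_passages = [
    ("The earth is flat and the authorities are hiding the proof.", "unwanted"),
    ("A balanced report citing three named sources on crop yields.", "suitable"),
    ("A satirical piece that makes jokes about flood victims.", "unwanted"),
]

hits = 0
for passage, expected in test_passages:
    reply = send(f"Classify this information for me: {passage}")
    # Naive parsing; adjust to however your AI phrases its classifications.
    got = "unwanted" if "not want" in reply.lower() else "suitable"
    hits += int(got == expected)
    print(f"expected={expected} got={got} :: {passage}")

print(f"Agreement: {hits} out of {len(test_passages)}")
```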

You can take this to a slightly more advanced level by indicating that you want to have the generative AI do a summary and an explanation associated with the passages that are being classified as disinformation and misinformation. I strongly recommend that you add this capacity.

You might recall that I earlier stated that there is a danger that you might have content that gets miscategorized and you might lazily never look at anything in the disinformation and misinformation categories. The problem is that there might be false positives going into that bucket. By adding this next prompt to your budding screening tool, you have a better chance of not getting blindsided.

Here’s a prompt that you can use:

  • My entered prompt: “When I provide you with the next series of information depictions, I want you to go ahead and classify them as to whether they are the kind of information that I do want or the type that I do not want. For the information that you classify as being information that I do not want, you are to do two things. First, provide a brief summary of the information. Second, indicate what about the information justifies it being classified as information that I do not want. In the case of the information depictions that you classify as the ones that I do want, you can merely indicate that the information falls within my suitable information status. Are you ready to proceed?”

The beauty of this additional prompt is that you will be able to inspect the summary and the explanation. It won’t take a lot of your time. In short, if you believe that the generative AI has improperly categorized a particular passage, you ought to then overtly read the passage fully and then instruct the AI on why the passage needs to be in the suitable information category instead. You are doing so to further the RLHF effort of honing the AI.
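Here is a sketch of that inspect-and-correct loop in code form, once again assuming the send() helper from the earlier sketch. The reply-format check is a naive assumption on my part; real replies will vary, so adjust the parsing to taste.

```python
# A sketch of the inspect-and-correct loop, assuming the send() helper from
# the earlier sketch. The "do not want" string check is a naive stand-in for
# however your AI phrases its classifications.
def screen(passage: str) -> None:
    reply = send(
        "Classify the following information as the kind that I do want or "
        "the kind that I do not want. If it is the kind that I do not want, "
        "provide a brief summary and the justification for that "
        f"classification. Information: {passage}"
    )
    if "do not want" in reply.lower():
        print("FLAGGED. Summary and justification from the AI:")
        print(reply)
        verdict = input("Do you agree with this classification? (yes/no) ")
        if verdict.strip().lower() == "no":
            # The RLHF-style correction: tell the AI why it was off-target.
            print(send(
                "That classification was off-target. This passage is "
                "suitable information for me; adjust your screening "
                "accordingly."
            ))
    else:
        print("Suitable information, passed along as-is:")
        print(passage)
```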

Conclusion

I have shown you how to use prompt engineering to guide generative AI toward becoming your personal information screening assistant. The approach given is a handy foundation for doing so.

Of course, one difficulty will be that you’ll need to constantly go over to your generative AI app and copy-paste any articles or passages into the prompts. That would be a pain in the neck. The approach can be productized by setting up the generative AI to automatically be fed with your stream of incoming electronic information. In a subsequent column, I will indicate how you can do this and make use of the API (application programming interface) of a generative AI app, which is similar to what I have described generically previously at the link here. Keep your eyes on my future posting with the details for productizing your generative AI screening tool.
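As a small foretaste of that productizing, here is a minimal sketch of feeding an incoming stream through the screening via an API. I am assuming the feedparser and openai Python packages, a placeholder feed URL, and stateless per-item calls that carry your preference descriptions in a system message; a fuller treatment will come in that subsequent column.

```python
# A minimal sketch of the productized route: pull items from an RSS feed and
# run each through an API-based screening call. The feedparser and openai
# packages, the model name, and the feed URL are all assumptions/placeholders.
import feedparser
from openai import OpenAI

client = OpenAI()
PREFERENCES = (
    "You are my information screening assistant. Classify each item I send "
    "as SUITABLE or UNWANTED per my preferences: I do not want conspiracy "
    "theories, fear-mongering, or propaganda; I do want balanced, sourced "
    "reporting. For UNWANTED items, add a one-line summary and justification."
)

def classify(item_text: str) -> str:
    """Stateless per-item screening call carrying the preferences along."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PREFERENCES},
            {"role": "user", "content": item_text},
        ],
    )
    return response.choices[0].message.content

feed = feedparser.parse("https://example.com/feed.xml")  # placeholder URL
for entry in feed.entries:
    verdict = classify(f"{entry.title}. {entry.get('summary', '')}")
    print(f"{entry.title} -> {verdict}")
```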

A final thought on this topic for now.

Jonathan Swift, the famed author and satirist, said this about the nature of information: “Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect: like a man, who hath thought of a good repartee when the discourse is changed, or the company parted; or like a physician, who hath found out an infallible medicine, after the patient is dead.”

Perhaps the judicious use of AI will aid us in catching deceptions before they penetrate the human mind. I emphasize that this has to be judiciously undertaken. If we allow AI to do all of our information screening, we might put our trust in an information deceiver of the scariest and most overwhelming of magnitudes.

Trust but verify, even in the case of generative AI.

Source: https://www.forbes.com/sites/lanceeliot/2023/10/30/using-clever-prompt-engineering-to-have-generative-ai-screen-out-the-vast-deluge-of-exasperating-disinformation-and-misinformation-coming-your-way/