What You Need To Know About GPT-4, The Just-Released Successor To Generative AI ChatGPT, Plus AI Ethics And AI Law Considerations

What is your usual reaction upon the release of a sequel to a major headline-grabbing blockbuster movie?

Some people go see the sequel and declare that it is as good as, if not better than, the original. Others might have extraordinarily high expectations and, after viewing the newer film, proclaim it reasonably good though nothing to howl ecstatically about. There are some who will undoubtedly be exceedingly disappointed, no matter what the latest movie includes, and will summarily declare that the first movie was unabashedly head and shoulders above the sequel.

That same range of reactions and emotions came to the fore with yesterday’s release of GPT-4 by AI maker OpenAI, which took place on Pi Day, namely 3.14 or March 14, 2023. Though landing on the mathematician’s favorite pie-eating day was likely a coincidence, the GPT-4 unveiling did garner a lot of press attention and voluminous chatter on social media.

I will describe herein the major features and capabilities of GPT-4, along with making comparisons to its predecessor ChatGPT (the initial “blockbuster” in my analogy). Plus, there is a slew of really vital AI Ethics and AI Law considerations that go along with generative AI, including and perhaps especially in the instance of GPT-4 and ChatGPT due to their indubitably widespread use and frenzy-sparking media and public attention concerning present and future AI.

In brief, just like a sequel to a movie, GPT-4 in some ways is better than ChatGPT, such as being larger, faster, and seemingly more fluent, while in other respects it raises additional and pronounced qualms (I’ll be covering those shortly herein). A bit of a muddled reaction. The sequel is not the slam-dunk that many had anticipated it would be. Turns out that things are more nuanced than that. Seems like that’s the real world we all live in.

Perhaps the CEO of OpenAI, Sam Altman, said it best in his tweets on March 14, 2023, about the GPT-4 launch:

  • “Here is GPT-4, our most capable and aligned model yet. It is available today in our API (with a waitlist) and in ChatGPT+.”
  • “It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it.”

My suggestions about what you might consider doing as a result of the release of GPT-4, depending upon your existing situation or circumstances, consist of these potential actions:

  • Existing ChatGPT users. If you are already using ChatGPT, you ought to take a close look at GPT-4 to see whether you might want to use it instead (or you might use GPT-4 in addition to ChatGPT, ergo use either one depending upon your needs as they arise). You can play with GPT-4 if you are subscribing to ChatGPT Plus, the $20 per month subscription plan for using ChatGPT; otherwise, you do not particularly have an easy means to access GPT-4 at this time (the caveat or twist being that Microsoft Bing, the search engine, uses a variant of GPT-4, which I’ve discussed at the link here).
  • Never used any generative AI. If you aren’t using ChatGPT and have never used any generative AI, you might want to first start with ChatGPT since it is accessible for free (or, of course, consider using any of the myriad other generative AI apps to begin your journey into this AI realm). GPT-4 is not free at this time, as mentioned in the point above regarding existing ChatGPT users. Once you’ve got your feet wet with ChatGPT, you can then decide whether it is worth subscribing to ChatGPT Plus to get the additional benefits, including access to GPT-4.
  • Using some other generative AI. If you are using a generative AI app other than ChatGPT, you might find GPT-4 of keen interest since it has improvements beyond what ChatGPT offers. I mention this because some savvy AI users decided that ChatGPT wasn’t as good for them as other options. I’d recommend getting up to speed about GPT-4 to decide whether your existing choice is still the best one for you. It might be. Thus, I am not advocating that you should for sure switch to GPT-4; I am only saying that it is always prudent to kick the tires on other available cars.
  • Other software that accesses ChatGPT via the API. For those who make software that connects to ChatGPT via the API (application programming interface), which I’ve discussed at the link here, you would be wise to take a close look at the use of GPT-4 via its API. One big consideration is that the cost of using the GPT-4 API is a lot higher than the cost of using the ChatGPT API. You will want to do a tradeoff analysis of the added benefits of GPT-4 versus the lower-cost alternative of sticking with ChatGPT (see the sketch just after this list). This is a somewhat complicated decision. Do so mindfully and not mindlessly.
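To make that cost tradeoff concrete, here is a minimal sketch in Python of how a developer might compare the two models through the same chat API. It assumes the openai package as it existed at GPT-4’s launch, a placeholder API key, and the per-1,000-token prices OpenAI announced in March 2023; check current pricing before relying on these figures.

```python
# Minimal sketch: same prompt to both models, with a rough per-call cost estimate.
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

PROMPT = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]

# Launch-era prices in dollars per 1,000 tokens: (prompt tokens, completion tokens).
PRICING = {
    "gpt-3.5-turbo": (0.002, 0.002),
    "gpt-4": (0.03, 0.06),
}

for model, (price_in, price_out) in PRICING.items():
    response = openai.ChatCompletion.create(model=model, messages=PROMPT)
    usage = response["usage"]
    cost = (usage["prompt_tokens"] * price_in
            + usage["completion_tokens"] * price_out) / 1000
    print(f"{model}: approx ${cost:.5f} for this call")
    print(response["choices"][0]["message"]["content"])
```

Running a representative sample of your application’s real prompts through a loop like this is one pragmatic way to ground the tradeoff analysis in actual numbers rather than guesswork.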

One thing that seems a shocker to many is that the newsworthiness of GPT-4 didn’t quite rise to the level earlier anticipated.

Allow me to explain why.

The Original Blockbuster And Now Its Sequel

You likely know that a generative AI app known as ChatGPT was made available at the end of November of last year.

This was a surprising smash hit.

Up until then, prior efforts to release generative AI applications to the general public were typically met with disdain and outrage. The basis for the concerns was that generative AI can produce all manner of foul outputs, including profane language, unsavory biases, falsehoods, errors, and even made-up facts or so-called AI hallucinations (I don’t like that “hallucinations” terminology since it tends to anthropomorphize AI, see my discussion at the link here).

Generative AI is a type of AI that involves generating outputs from user-entered text prompts, such as being able to produce or generate text-based essays, or produce images or artwork, or produce audio, or produce video, etc. These are usually referred to as text-to-text, text-to-essay, text-to-art, text-to-image, text-to-audio, text-to-video, and the like. The remarkable facet of generative AI is that the generated works are seemingly on par with human-generated outputs. You would have a hard time trying to distinguish a generative AI output from a comparable composition solely produced by the human mind and the human hand.

For more about generative AI, see my ongoing series such as this link here about the fundamentals of ChatGPT and generative AI, along with coverage of ChatGPT by students and the issues of potential cheating on essays (use the link here), the highly questionable use of ChatGPT for mental health advisement (see the link here), concerns over potential plagiarism and copyright infringement of generative AI (the link here), and many more salient topics at the link here.

Part of the reason that ChatGPT did not seem to spark the usual backlash was due to some behind-the-scenes work by the AI maker, OpenAI, before releasing ChatGPT. They tried to use various techniques and technologies to push back at outputting especially hateful and foul essays. Keep in mind that ChatGPT is exclusively a text-to-text or text-to-essay style of generative AI. Thus, the attempts to prevent outlandish and enraging outputs consist of dealing with words. Similar issues arise when the output is art or images, though those can be equally or even more difficult to catch in order to prevent the production of offensive imagery of one kind or another.
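As a flavor of what output screening can look like, here is a minimal sketch using OpenAI’s separately documented moderation endpoint. To be clear, whether or how such filtering is wired into ChatGPT itself is not something OpenAI has publicly detailed; this illustrates the general idea of checking generated text before showing it, not their actual method.

```python
# Sketch: screen a candidate output through the moderation endpoint
# before surfacing it to the user (legacy openai package).
import openai

openai.api_key = "sk-..."  # placeholder

def is_flagged(text: str) -> bool:
    """Return True if the moderation model flags the text as disallowed."""
    result = openai.Moderation.create(input=text)
    return result["results"][0]["flagged"]

candidate_output = "Some generated essay text to be checked..."  # hypothetical
if is_flagged(candidate_output):
    print("Output withheld: flagged by the moderation model.")
else:
    print(candidate_output)
```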

A notable technique that has been increasingly embraced by AI makers all told consists of using RLHF (reinforcement learning from human feedback). Here’s how that generally works. Once a generative AI app has been initially data-trained, such as by scanning text across the Internet, human reviewers are utilized to help guide or showcase to the AI what is worthwhile to say and what is scandalous to say. Based on this series of approvals and disapprovals, the generative AI is roughly able to pattern-match what seems okay to emit and what is not allowable.
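To make the approvals-and-disapprovals idea tangible, here is a toy, numbers-only sketch of the preference-learning step at the heart of RLHF. Real systems train a neural reward model over text and then optimize the generative model against it (e.g., with PPO); this stripped-down version merely fits scalar reward scores, via the standard Bradley-Terry style loss, so that human-preferred responses score higher. The response names are of course hypothetical.

```python
# Toy preference learning: fit scalar rewards so preferred responses score higher.
import math

# Hypothetical responses, all starting with a reward score of zero.
rewards = {"polite_answer": 0.0, "rude_answer": 0.0, "helpful_answer": 0.0}

# Human feedback, recorded as (preferred, rejected) pairs.
comparisons = [
    ("polite_answer", "rude_answer"),
    ("helpful_answer", "rude_answer"),
    ("helpful_answer", "polite_answer"),
]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

LEARNING_RATE = 0.5
for _ in range(200):
    for preferred, rejected in comparisons:
        # Probability the current scores assign to the human's choice.
        p = sigmoid(rewards[preferred] - rewards[rejected])
        # Gradient step on -log(p): push the preferred score up, the rejected one down.
        rewards[preferred] += LEARNING_RATE * (1.0 - p)
        rewards[rejected] -= LEARNING_RATE * (1.0 - p)

print(rewards)  # ends with helpful > polite > rude, matching the human rankings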

I’d like to also mention one other extremely important point.

The AI is not sentient.

No matter what the zany headlines declare, be assured that today’s AI is not sentient. For generative AI, the app is an extensive computational pattern-matching and data-modeling apparatus. After examining millions upon millions of words from the Internet, patterns about words and their statistical relationships are derived. The result is an amazing form of mimicry of human language (some AI insiders refer to this as a stochastic parrot, which kind of makes the point, though the metaphor regrettably drags an otherwise sentient creature into the discussion).

You can think of generative AI as akin to the auto-complete function in a word processing package, though with a much more encompassing and advanced capability. I’m sure you’ve started to write a sentence and had auto-complete recommend wording for the remainder of the sentence. With generative AI such as ChatGPT, you enter a prompt and the AI app will attempt to not simply complete your words, but to answer questions and compose entire responses.
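To see the pattern-matching spirit in miniature, consider this toy bigram auto-completer; it simply counts which word tends to follow which in a tiny made-up corpus. Generative AI does something vastly more sophisticated, but the underlying statistical flavor of word relationships is similar.

```python
# Toy bigram auto-complete: predict the next word from word-pair counts.
from collections import Counter, defaultdict

corpus = (
    "the car engine needs oil . the car engine runs hot . "
    "the dog runs fast ."
).split()

# Count which word follows which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def autocomplete(word: str) -> str:
    """Return the most frequently observed successor of `word`."""
    return following[word].most_common(1)[0][0]

print(autocomplete("car"))     # -> "engine"
print(autocomplete("engine"))  # -> "needs" (tied with "runs"; first seen wins)
```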

In addition, a rookie mistake that many make when using ChatGPT or any other similar generative AI app entails failing to use the vaunted interactive conversational capacities. Some people type in a prompt and then wait for an answer. They seem to think that is all there is to it. One and done. But this misses the crux of generative AI. The more useful approach consists of entering a series of prompts, engaging in a dialogue with the generative AI. That’s where generative AI really shines, see my examples at the link here.
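Here is a minimal sketch of what that dialogue style looks like programmatically, using the chat-style API from the legacy openai Python package. The key move is appending each reply to a running message list so the AI retains the conversational context; the prompts shown are merely illustrative.

```python
# Sketch: a multi-turn dialogue, not a one-shot prompt.
import openai

openai.api_key = "sk-..."  # placeholder

messages = []
for user_prompt in [
    "I want to discuss car engines. Please act as a car mechanic.",
    "My engine makes a ticking noise at idle. What should I check first?",
    "Suppose the oil level is fine. What next?",
]:
    messages.append({"role": "user", "content": user_prompt})
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = response["choices"][0]["message"]["content"]
    # Append the AI's reply so the next turn has the full conversation so far.
    messages.append({"role": "assistant", "content": reply})
    print(f"USER: {user_prompt}\nAI: {reply}\n")
```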

ChatGPT was heralded by the media and the public at large as an amazing breakthrough in AI.

The reality is that many other akin AI apps have been devised, often in research labs or think tanks, and in some cases were gingerly made available to the public. As I said above, the outcome was not usually pretty. People prodded and poked at the generative AI and managed to get essays of an atrocious nature, see my coverage at the link here. The AI makers in those cases were usually forced to withdraw the AI from the open marketplace and revert to focusing on lab use or carefully chosen AI beta testers and developers.

Much of the rest of the AI industry was gobsmacked that ChatGPT managed to walk the tightrope of still producing foul outputs and yet not to the degree that public sentiment forced OpenAI to remove the AI app from overall access.

This was the true shock of ChatGPT.

Most people assumed the shock was the conversant capability. Not for those in AI. The surprise that floored nearly all AI insiders was that you could release generative AI that might spew out hateful speech and the backlash wasn’t fierce enough to force a quick retreat. Who knew? Indeed, prior to the release of ChatGPT, the rumor mill was predicting that within a few days or weeks at the most, OpenAI would regret making the AI app readily available to all comers. They would have to restrict access or possibly walk it back and take a breather.

The incredible success of the ChatGPT rollout has cautiously opened the door for other generative AI apps to also hit the street. For example, I’ve discussed the Google unveiling of Bard and how the Internet search engine wars are heating up due to a desire to plug generative AI into conventional web searching, see the link here.

ChatGPT can reasonably be characterized as a blockbuster. It also is one that came out of nowhere, so to speak. Sometimes a blockbuster movie is known beforehand as likely going to be a blockbuster upon release. In other cases, the film is a sleeper that catches the public by surprise and even the movie maker by surprise. That’s what happened with ChatGPT and OpenAI.

Okay, so we have the blockbuster, ChatGPT.

ChatGPT is essentially based on a version of GPT known as GPT-3.5. Previously, there have been GPT-3, GPT-2, and the like. The AI world and those tangential to AI all knew that OpenAI had been working on the next version, GPT-4.

GPT-4 would be considered the successor or sequel to ChatGPT.

This brings us back to my analogy about movies. ChatGPT, a surprise blockbuster, was huge in popularity. The expectations about what GPT-4 would be and how the public would react were rife with wild speculation. GPT-4 would walk on water! GPT-4 will be faster than a speeding bullet! GPT-4 will be the attainment of sentient AI or Artificial General Intelligence (AGI)!

On and on this has gone.

You might vaguely be aware that the CEO of OpenAI, Sam Altman, said this in an interview posted on YouTube (dated January 17, 2023): “The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from. People are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.”

Well, GPT-4 is here.

The movie has come out.

We can see it with our own eyes. No more untamed speculation. Reality has come home to roost.

Let’s unpack the shiny new toy.

The Essentials Of GPT-4

You undoubtedly want to know what GPT-4 provides.

In my discussion, I will be referring to various documents and videos that OpenAI has made available about GPT-4, along with making remarks based on my use of GPT-4. For ease of discussion, please know that there are two handy documents that I will be avidly citing: one is the official OpenAI GPT-4 Technical Report and the other is the official OpenAI GPT-4 System Card (both are available at the OpenAI website). I will cite them by the acronyms TR for the GPT-4 Technical Report and SC for the GPT-4 System Card.

Let’s start by citing the very first sentence of the abstract for the TR:

  • “We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs.”

Believe it or not, there is a lot jam-packed into that one sentence.

Get seated and have a comfortable drink in your hand.

One generally accepted rule of thumb about generative AI is that the larger the system, the greater the fluency and overall capability tend to be. This seems relatively well-established by the historically rising sizes of generative AI systems and their increasingly remarkable fluency in carrying on interactive conversations. Not everyone believes this must be the case, and there are researchers actively seeking smaller-sized setups that use various optimizations to potentially achieve as much as their larger brethren.

In the above-quoted sentence about GPT-4 from the TR, you might have observed the phrasing that it is a “large-scale” generative AI. Everyone would likely readily agree, based on the relative sizes of generative AI systems of today.

The obvious question on the minds of AI insiders is how large is large-scale when it comes to GPT-4?

Usually, the AI maker proudly declares various sizing metrics of their generative AI. You might do so to inform the rest of the AI world about how size and scale matter. You might do so to brag. You might do so simply because it is like a car, wherein a natural curiosity is how big an engine is there and how fast will it go.

According to the TR, here’s what is indicated:

  • “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”

AI insiders tend to find this vexing. On the one hand, it seems a disturbing break with convention to not disclose these crucial characteristics. That being said, the logic that doing so might reveal proprietary secrets or possibly open the door to cybersecurity breaches, well, that does seem to make sense too.

Should AI makers be compelled to reveal particular characteristics of their generative AI, doing so to a degree and in a manner that would not inadvertently give away vital telltale clues?

I will let you put on your AI Ethics hat to ponder this consideration.

Some believe that we might also end up establishing new AI Laws that would require explicit disclosures.

The thinking is that the public ought to know what is going on with AI, especially when AI gets bigger and has presumably the potential for eventually veering into the dire zone of existential risks, see my analysis at the link here.

Moving on, we also do not know what data was used to train GPT-4.

The data makes or breaks the advent of generative AI. Some people falsely assume that the entirety of the Internet was scanned to devise these generative AI capabilities. Nope. In fact, as I discuss at the link here, only a teensy tiny portion of the Internet is being scanned.

A related aspect is whether the generative AI is in real-time scanning the Internet and adjusting on-the-fly the computational pattern-matching. ChatGPT was limited to scans that took place no later than the year 2021. This means that when you use ChatGPT, there is pretty much no data about what happened in 2022 and 2023.

Rumors were that GPT-4 would contain an up-to-date and real-time connection to the Internet for on-the-fly adjustment.

Here’s what the TR says:

  • “GPT-4 generally lacks knowledge of events that have occurred after the vast majority of its pre-training data cuts off in September 2021 and does not learn from its experience. It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains, or be overly gullible in accepting obviously false statements from a user.”

You can perhaps then see why some are a bit disappointed in GPT-4. The rumors suggested it would be operating in real-time, simultaneously adjusting on-the-fly to the Internet, which would have been considered a big improvement over ChatGPT. The reality is that GPT-4 is still dealing with dated data. And there isn’t a real-time adjustment of the computational pattern-matching per se based on refreshes from the Internet.

I have more news for you.

The sentence that I earlier cited about GPT-4 as being large-scale also said that GPT-4 is multi-modal.

Allow me to give some background on the notion of multi-modal generative AI.

I mentioned toward the start of this discussion that there are different types of generative AI, such as text-to-text or text-to-essay, text-to-art or text-to-image, text-to-audio, text-to-video, etc. Those are all considered to be a singular mode of handling the content. For example, you might input some text and get a generated essay. Another example would be that you enter text and get a generated artwork.

At the end of last year, I made my annual predictions about what we would see in AI advances for the year 2023 (see the link here). I had stated that multi-modal generative AI was going to be hot. The idea is that you could for example enter text and an image (two modes on input), using those as the prompt into generative AI, and you might get an essay as output along with a generated video and an audio track (three modes on output).

Thus, a multitude of modes might co-exist. You might have a multitude of modes at the prompting or input. You might also have a multitude of modes at the generated response or output. You could have a mix-and-match at both inputs and outputs. That is where things are heading. It is exciting, and the possibilities of what can be done with generative AI open up immensely because of multi-modal functionality.
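As a purely hypothetical illustration of that mix-and-match idea, here is how an application might represent a multi-modal prompt as a data structure. None of the field names come from any real API; at the time of writing, OpenAI had not published an API shape for image input, so this is merely a conceptual sketch.

```python
# Hypothetical data structure for a mix-and-match multi-modal prompt.
from dataclasses import dataclass, field

@dataclass
class MultiModalPrompt:
    text: str = ""                                         # text mode on input
    image_paths: list[str] = field(default_factory=list)   # image mode on input
    audio_paths: list[str] = field(default_factory=list)   # audio mode on input
    requested_outputs: tuple[str, ...] = ("text",)         # desired output modes

prompt = MultiModalPrompt(
    text="Describe what is happening in this photo and draft a caption.",
    image_paths=["vacation.jpg"],           # hypothetical file
    requested_outputs=("text",),            # GPT-4 at launch: image+text in, text out
)
print(prompt)
```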

ChatGPT has just a singular mode. You input text, you get some generated text as output.

Rumors were that GPT-4 would break the sound barrier, as it were, and provide a full multi-modal capability of everything to everything. Everyone knew that text would be included. The anticipation was that images or artwork would be added, along with audio, and possibly even video. It would be a free-for-all. Any mode on input, including as many of those modes as you desired. Plus any mode on output, including as many of the modes mixed as you might wish to have.

A veritable smorgasbord of modes.

What does GPT-4 provide?

Go back to that sentence from the TR:

  • “We report the development of GPT-4, a large-scale, multimodal model which can accept image and text inputs and produce text outputs.”

You can enter text and you will get outputted text, plus you can potentially enter an image as input.

Demonstrations showcasing the vision processing of inputted images have indicated that the items in a picture, for example, could be identified by the generative AI and then composed into a written narrative explaining the picture. You can ask the generative AI to explain what the picture seems to depict. All in all, the vision processing will be a notable addition.

The vision processing or image analysis capability is not yet available for public use (per the OpenAI website blog):

  • “To prepare the image input capability for wider availability, we’re collaborating closely with a single partner to start.”

The gist of all of this is that it is heartwarming to realize that GPT-4 apparently does have the capability to do image input and analysis. Many are eagerly awaiting the public release of this feature. Kudos to OpenAI for nudging into the multi-modal arena.

So, we have text as input, plus image as input (when made available for public use), and text as output.

Some in the AI community, though, have been handwringing that this barely abides by the notion of multi-modal. Yes, there is one more mode, the image as input. But not an image as output. There seemingly isn’t audio as input, nor audio as output. There seemingly isn’t video as input, nor video as output. Those with a snarky bent find this to be “multi-modal” in the most minimalist of ways.

The counterargument is that you have to crawl before you walk, and walk before you run.

I believe that covers the first sentence of the TR and we can shift to additional topics.

More Essentials Of GPT-4

I am going to speed up now that you have some added background overall on this matter.

Here’s something significant as noted in the OpenAI blog posting about GPT-4:

  • “Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload.”

Two quick points about this.

First, the indication that they rebuilt their entire deep learning stack is certainly a noteworthy remark and accomplishment (it means that they redid the computational pattern-matching models and opted to restructure how things work under the hood). Good for them. The nagging question that some express is that it sure would be nice to know exactly what they did in this rebuild. The TR and SC somewhat mention what took place, but not in any depth.

Of course, you could persuasively argue that they ought not to reveal their secret sauce. They are under no requirement to do so. Why unnecessarily provide aid to their competitors? The other side of the coin argues that, for the betterment of AI and society all told, revealing such details would presumably aid in advancing generative AI, which seemingly is going to be good for humankind (one hopes).

We are back to that squishy AI Ethics and AI Law dividing line.

Second, the quoted remark indicates that they designed a supercomputer from the ground up. Besides the interest in what this supercomputer does and how exactly it works, some of which has been explained, this brings up an entirely different matter.

Some worry that generative AI is becoming a big-money game. Only the tech companies with the biggest bucks and the biggest resources will be able to devise and field generative AI. The concern is that we might end up with generative AI tightly controlled by only a handful of tech firms. We might become heavily dependent upon those firms and their wares.

Do we potentially need to use existing laws or devise new AI laws to prevent a concentration of generative AI being in the narrow command of just a few?

Something to ruminate on.

If you are waiting for the shoe to drop in terms of some incredibly massive difference between ChatGPT and GPT-4, take a gander at this from the OpenAI blog posting about GPT-4:

  • “In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient threshold—GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.”

I’ve found this lack of distinctive difference to be somewhat the case, namely that if you are doing everyday idle chitchat with ChatGPT and doing likewise with GPT-4, you might not particularly realize that GPT-4 is considered more powerful overall.

One aspect that does seem to be a standout consists of establishing context for your conversations with the two generative AI apps.

Here’s what I mean.

When you use a generative AI app, you at times just leap into a conversation that you start and continue along with the AI. In other cases, you begin by telling the AI the context of the conversation. For example, I might start by telling the generative AI that I want to discuss car engines with the AI, and that I want the AI to pretend it is a car mechanic. This then sets the stage or setting for the AI to respond accordingly.

Many people who use ChatGPT do not realize the importance of setting the context when they first engage in a dialogue with the AI app. It can make a huge difference in the response you will get. I often find that ChatGPT doesn’t home in very well on particular contexts on its own. It tries but often falls short. So far, GPT-4 seems to really shine through the use of contextual establishment.

If you are going to use generative AI and want to establish contexts when you do so, I would definitely give the overall edge to GPT-4 over ChatGPT.

On a related element, there is also an aspect known as steerability that comes into play.

Some users of ChatGPT have been surprised to sometimes have the AI app provide responses that seem perhaps overly humorous or overly terse. This can occur if the generative AI detects something in your input prompt that appears to trigger that kind of response. You might jokingly ask about something and not realize that this is going to then steer ChatGPT toward jokes and a lighthearted tone.

Per the OpenAI blog posting about GPT-4 and steerability:

  • “Rather than the classic ChatGPT personality with a fixed verbosity, tone, and style, developers (and soon ChatGPT users) can now prescribe their AI’s style and task by describing those directions in the ‘system’ message. System messages allow API users to significantly customize their users’ experience within bounds.”

Again, this will enhance the user experience with generative AI apps. Other generative AI makers are doing likewise, and we will inevitably see nearly all such AI apps offer some form of steerability and contextual establishment functionality (a brief sketch of the mechanism follows below).
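Here is a minimal sketch of the ‘system’ message mechanism just quoted, tying together the context-setting and steerability points: the system message fixes the persona and tone before any user prompt arrives. It assumes the legacy openai Python package and that the gpt-4 model is available to your account (via the waitlisted API).

```python
# Sketch: prescribe persona and tone via the 'system' message.
import openai

openai.api_key = "sk-..."  # placeholder

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a seasoned car mechanic. Answer tersely, "
                       "in plain language, and never use humor.",
        },
        {"role": "user", "content": "Why does my engine knock when accelerating?"},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Swapping out only the system message, say, to demand an elaborate and playful tone, should noticeably change the style of the very same question’s answer, which is the essence of steerability.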

The Rough Road Still Ahead

An ongoing and troubling problem underpinning generative AI, in general, is that all manner of unpleasant and outright disturbing outputs can be produced.

In my column postings, I’ve covered these various and sobering concerns:

  • Generative AI Produced Errors
  • Generative AI Produced Falsehoods
  • Generative AI Embedded Biases
  • AI Hallucinations
  • Privacy Intrusions
  • Data Confidentiality Weaknesses
  • Disinformation Spreader
  • Misinformation Propagator
  • Dual-Use For Weaponry
  • Overreliance By Humans
  • Economic Impacts On Humans
  • Cybercrime Bolstering
  • Etc.

Some rumors were that magically and miraculously GPT-4 was going to clean up and resolve all of those generative AI maladies.

Nobody with a proper head on their shoulders thought that such a rumor could hold water. These are very hard AI problems. They are not readily solved. There is much yet to be done to contend with these enduring and exasperating difficulties. It is likely going to take a village to conquer the litany of AI Ethics issues enmeshed within the milieu of generative AI.

To give credit where credit is due, OpenAI has sought to explain how they are addressing these many varied challenges. Those of you who are interested in AI Ethics should consider doing a close reading of the TR and the SC.

Here for example are some plain-spoken comments about GPT-4 as stated by OpenAI in the TR:

  • “GPT-4 can generate potentially harmful content, such as advice on planning attacks or hate speech. It can represent various societal biases and worldviews that may not be representative of the user’s intent, or of widely shared values. It can also generate code that is compromised or vulnerable. The additional capabilities of GPT-4 also lead to new risk surfaces.”

Furthermore, they say this in the TR:

  • “Through this analysis, we find that GPT-4 has the potential to be used to attempt to identify private individuals when augmented with outside data. We also find that, although GPT-4’s cybersecurity capabilities are not vastly superior to previous generations of LLMs, it does continue the trend of potentially lowering the cost of certain steps of a successful cyberattack, such as through social engineering or by enhancing existing security tools. Without safety mitigations, GPT-4 is also able to give more detailed guidance on how to conduct harmful or illegal activities.”

I don’t have the column space here to cover all of the numerous items associated with these difficulties. Be on the lookout for additional column coverage in my ongoing analysis of generative AI from an AI Ethics and AI Law perspective.

It would seem worthwhile to take a moment and acknowledge that OpenAI has made available their identification of how they are approaching these arduous challenges. You could say that there was no reason for them to have to do so. They could just act like there is nothing there to see. Or they could just do some vague hand-waving and assert that they were doing a lot of clever stuff to deal with these issues.

Fortunately, they have chosen the sensible approach of trying to get out there ahead of the backlashes and browbeating that usually goes with generative AI releases. They presumably are aiming to firmly showcase their seriousness and commitment to rooting out these issues and seeking to mitigate or resolve them.

I would offer the additional thought that the field of AI all told is going to take a harsh beating if there isn’t an ongoing and strenuous effort to pursue these matters in a forthright and forthcoming manner. Taking a hidden black-box approach is bound to raise ire amid the public at large. You can also anticipate that if AI firms don’t try to deal with these problems, the odds are that lawmakers and regulators are going to be drawn into these matters and a tsunami of new AI laws will pepper all the AI makers and those that field AI.

Some believe we are already at that juncture.

They insist that though many of the AI makers seem to be sharing what they are doing, this is somewhat of a sneaky form of plausible deniability. In short, the ploy is to go ahead and put out AI that is appalling and patently wrongful, rather than waiting until things are better devised, and then stave off those in AI Ethics and AI Law by proclaiming that you are doing everything possible to rectify things. I’ve discussed this “wait until readied” ongoing controversy frequently in my column coverage.

Per the TR:

  • “OpenAI has been iterating on GPT-4 and our deployment plan since early August to prepare for a safer launch. We believe this has reduced the risk surface, though has not completely eliminated it. Today’s deployment represents a balance between minimizing risk from deployment, enabling positive use cases, and learning from deployment.”

Returning to the matter at hand, I earlier mentioned that AI hallucinations are a prevailing problem when it comes to generative AI.

Again, I don’t like the catchphrase, but it seems to have caught on. The mainstay of the issue with AI hallucinations is that the AI can produce outputs that contain very crazy stuff. You might be thinking that it is up to the user to discern whether the outputs are right or wrong. A concern here is that the outputs might contain made-up stuff that the user has no easy means of determining is made-up. They might swallow whole hog whatever the output says.

There is also a subtle tendency to get lulled into believing the outputs of generative AI. Usually, the output is written in a tone and manner that suggests a surefire semblance of confidence. Assuming that you use generative AI regularly, it is easy to get accustomed to seeing truthful material much of the time. You then can get readily fooled when something made-up gets planted in the middle of what otherwise seems to be an entirely sensible and fact-filled generated essay.

Here’s what the TR says about GPT-4:

  • “GPT-4 has the tendency to ‘hallucinate,’ i.e. ‘produce content that is nonsensical or untruthful in relation to certain sources.’ This tendency can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users. Counterintuitively, hallucinations can become more dangerous as models become more truthful, as users build trust in the model when it provides truthful information in areas where they have some familiarity.”

The good news is that efforts have been made, and seem to be ongoing, to try to reduce the chances of AI hallucinations in GPT-4. Also, the claim is made that GPT-4 outdoes GPT-3.5 in terms of averting AI hallucinations, even though OpenAI makes clear that they still are going to occur.

Here’s the TR on this:

  • “On internal evaluations, GPT-4-launch scores 19 percentage points higher than our latest GPT-3.5 model at avoiding open-domain hallucinations, and 29 percentage points higher at avoiding closed-domain hallucinations.”

To close off this portion of the discussion for now, generative AI by all AI makers is confronting these issues. No one has somehow cured this. If you are looking for hard AI problems, I urge you to jump into these waters and help out. There is plenty of work to be done.

Conclusion

When a blockbuster movie has been around for a while and gone from the theatres to home streaming, quite a lot of people have likely seen the movie or know something about it from others that have seen it. Thereafter, when a sequel is announced and being filmed, the anticipation can reach astronomical levels.

J.J. Abrams, the now legendary filmmaker for parts of the Star Wars series and the reboot of Star Trek, said this about sequels: “There’s nothing wrong with doing sequels, they’re just easier to sell.”

Edwin Catmull, co-founder of Pixar, emphasized this about sequels: “Believe me, sequels are just as hard to make as original films.”

If you are interested in seeing the blockbuster ChatGPT, you can sign up readily. The sequel GPT-4 is a bit trickier to get access to. Do also realize that there are a lot of other movies available, well, other generative AI apps available, so you might want to make sure that your filmgoing (aka generative AI) experience is varied and fulfilling.

One final sobering note. Be forewarned that the content you might encounter could be PG13, R, or even NC-17. Keep that in mind.

Source: https://www.forbes.com/sites/lanceeliot/2023/03/15/what-you-need-to-know-about-gpt-4-the-just-released-successor-to-generative-ai-chatgpt-plus-ai-ethics-and-ai-law-considerations/