AI Battle Royale Erupts With Google Bard Versus Microsoft OpenAI ChatGPT, Stoking AI Ethics And AI Law Concerns

Get your helmet on and be ready for the fallout from an emerging battle royale in AI.

Here’s the deal.

In one corner stands Microsoft with their business partner OpenAI and ChatGPT.

Leering anxiously in the other corner is Google, which has announced that it will be making available a similar type of AI, based on its long-standing insider AI app known as LaMDA. LaMDA sounds kind of techie, which is a stark contrast to “ChatGPT” (which seems kind of light and airy). Google, perhaps realizing that a name embellishment was needed, has opted to put forth its variant of LaMDA and anointed it with the new name “Bard”.

I’ll say more about Bard in a moment, hang in there.

We are on the cusp of ChatGPT going toe-to-toe in the marketplace with Bard. These are heavyweights, make no bones about that. These are hard hitters. They have tons of dough and legions of resources.

Into all of this comes a slew of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from running amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few. I’ll be interleaving the AI Ethics and AI Law considerations into this discussion at large.

Let’s get back to the brewing battle and how it all came to be.

The Generative AI Bonanza

First, you likely know that ChatGPT has been dominating the AI sphere for the last several months.

Everyone seems to know something or other about ChatGPT. The generative AI app was released by the AI maker OpenAI in November. ChatGPT and OpenAI became the darling of public attention. Via generative AI, you can enter a text prompt and have the AI produce a stellar essay for you. This text-to-text capability is so good that you would be hard-pressed to realize that the outputted essay was devised by AI. Furthermore, the essay is essentially an original, in that it wasn’t copied word-for-word from an existing source. Using probabilistic pattern matching, the AI is able to craft essays that for all intents and purposes seem to be unique.
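
If you are curious how “probabilistic pattern matching” can yield text that isn’t copied verbatim, here is a deliberately tiny sketch in Python. It builds a toy bigram model from a made-up snippet of training text and samples each next word at random from observed patterns. Real generative AI works at a vastly larger scale with neural networks, but the sampling-from-patterns spirit is similar; everything in this snippet is invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: a bigram sampler is nowhere near a modern large language
# model, but it shows the core idea of producing text by sampling likely next words
# from patterns seen in training text, rather than copying any source verbatim.
training_text = (
    "the cat sat on the mat the dog sat on the rug "
    "the cat chased the dog around the yard"
)

words = training_text.split()
next_words = defaultdict(list)
for current, following in zip(words, words[1:]):
    next_words[current].append(following)  # record which words follow which

def generate(seed: str, length: int = 12) -> str:
    """Generate text by repeatedly sampling a plausible next word."""
    output = [seed]
    for _ in range(length):
        candidates = next_words.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))  # probabilistic, not copied
    return " ".join(output)

print(generate("the"))  # e.g. "the dog sat on the mat the cat chased the dog ..."
```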

OpenAI has had an ongoing business relationship with Microsoft. Upon the skyrocketing fame of ChatGPT, it turns out that Microsoft opted to lean further into the arrangement with OpenAI. This made abundant sense. Getting onto the public bandwagon that favors ChatGPT is undoubtedly a smart move. Though you might liken this to the tail wagging the dog, the gist is that Microsoft can spruce up its image and garner renewed attention by grabbing onto the tiger that is OpenAI and ChatGPT.

Of the ways that Microsoft and OpenAI ChatGPT are getting hitched together, perhaps the most astounding and maybe unnerving will be the integration of ChatGPT into the Bing search engine.

Why is that important?

Because you have to follow the money, per that legendary sage bit of wisdom.

According to various published stats, Bing search gets maybe around 8% to 9% of prevailing Internet search activity, while Google gets around 85%. Let’s not quibble about whether those stats are off by a few points in either direction. The essence is that Google is the 600-pound gorilla, while Bing is not. Also, keep in mind that search engines derive money from eyeballs. The more eyeballs, the more money goes to the provider of the search engine. Google makes big bucks from search. Microsoft dreamily wishes it could do the same.

Microsoft has over the years tried to toss everything but the kitchen sink at Bing to get more usage. Now, with the relationship between OpenAI and ChatGPT, the kitchen sink is finally coming into the picture. By integrating ChatGPT with Bing, the obvious assumption is that people will flock to Bing. Surely, the kitchen sink will do the trick.

Think of it this way.

ChatGPT is the AI darling of our times. Right now, you have to sign up to use ChatGPT, and you may or may not be able to do so (volume has at times been capped by OpenAI). Imagine that ChatGPT was available non-stop and without any login necessary, simply by visiting the Bing search engine.

Voila, the world suddenly starts spinning in the direction of Bing. Microsoft will have gotten people to use Bing, albeit by dangling a tantalizing lure, but it doesn’t matter how they garner those eyeballs. To the victor go the spoils. All those eyeballs coming to use ChatGPT will be using the Bing search engine.

Don’t presume, though, that this is merely a ChatGPT portal allied with Bing. From a recently posted sneak peek, it appears that the generative AI app is interconnected with Bing. You seem to be able to enter your prompt as to what you want to find out about. Based on the generative AI assessing the prompt, you will get search results, along with a summary. In addition, apparently, portions of the search results will be highlighted to indicate where something relevant to your search query appeared.

Ratcheting this up, the generative AI as used in a search context can interact directly with you, so the search engine will aid you in refining your search. This is considered a form of interactive conversational AI. From the looks of things, you can toggle between using generative AI to do searches or instead just using generative AI for one-on-one chatting. Presumably, you could ask the generative AI to produce a recipe for a delicious meal, and have it compose the recipe without necessarily going out to do a search across the Internet. On the other hand, you might tell the generative AI to find the best recipes, and then from those go ahead and compose a unique recipe just for you.

A quick clarification before we proceed further.

When I refer to ChatGPT in the Bing search integration discussion above, please do know that it is likely to not be ChatGPT but instead its more advanced cousin known as GPT-4. ChatGPT has gotten all the fame. OpenAI also has GPT-3, and GPT-3.5 (upon which ChatGPT is based), and their latest generative AI is GPT-4. AI insiders will cringe that people will assume that Bing is using ChatGPT, when in fact it probably will be using GPT-4, but to those outside of the AI realm, this is a distinction without a difference. One supposes that the phrasing will be something along the lines of “this generative AI is brought to you by the makers of ChatGPT.” That’s probably sufficient for most people.

The odds are that ChatGPT will still be made available on a standalone basis, and perhaps available too via an API (application programming interface). The use of an API allows other programs to access the generative AI app. As such, and as I’ve predicted, we are going to see a lot of non-AI apps that will end up integrating with ChatGPT via the API, see my analysis at the link here.
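
To make the API point concrete, here is a minimal sketch of how a non-AI app might call a generative AI service to have an essay produced. It follows the general shape of OpenAI’s publicly documented text-completion endpoint, but treat the specific endpoint, model name, and response fields as illustrative assumptions rather than a statement of what Bing or any particular app actually uses.

```python
import os
import requests

# Minimal sketch: a non-AI app sends a prompt to a generative AI API and gets an
# essay back. Endpoint, model name, and response fields mirror OpenAI's public
# text-completion API as generally documented, but are illustrative here.
API_KEY = os.environ["OPENAI_API_KEY"]  # keep secrets out of source code

def generate_essay(prompt: str) -> str:
    response = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "text-davinci-003",  # illustrative model choice
            "prompt": prompt,
            "max_tokens": 400,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"]

if __name__ == "__main__":
    print(generate_essay("Write a short essay about why search engines matter."))
```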

Assuming that GPT-4 becomes available publicly via Bing and also allows for API connections, the question will be whether people will continue to use ChatGPT or gradually shift over to using GPT-4. At this time, this seems nearly unimaginable since ChatGPT is the cat’s meow. The thing is, given that GPT-4 is likely to be faster, better, and otherwise eclipse ChatGPT, most would be wise to connect with GPT-4 over ChatGPT unless there was some determined basis to stick with ChatGPT. Again, as mentioned earlier, you can likely get the afterglow of ChatGPT by using GPT-4 and stating that you are using the cousin of ChatGPT, see my discussion at the link here on this.

Some of the carping about ChatGPT today is that:

  • Availability Woes. Not readily fully available to the general public due to caps set on the number of logins and accounts allowed
  • Overloaded. Tends to get overloaded and won’t let you log in or gets really slow
  • Lacks Surefire Cited Sources. Doesn’t readily provide cited sources as to what underlies the produced essays
  • Not Internet Connected For Sourcing New Material. Exists on a standalone basis and doesn’t connect with the Internet on a real-time basis to do source material look-ups
  • Frozen To 2021. Was set up with data from the Internet as of 2021 and was essentially frozen at that juncture
  • Can Generate Falsehoods. Produces essays that can contain factual errors and makes up stuff (some refer to this as “AI hallucinations”, which is a term that I don’t like, for the reasons stated at the link here)
  • Other

As per what has been suggested about GPT-4 so far (we’ll have to see the proof upon release):

  • Availability Issue Overcome. Via Bing, there presumably won’t be a login required and thus the generative AI will be fully available to the general public
  • Overloading Issue Overcome. One would hope and assume that the Bing search engine hardware resources will be ready for and beefed up to handle the computer workload, ergo averting the existent ChatGPT sluggishness and lockouts
  • Cited Sources Issue Overcome. It appears via the sneak peek that cited reference sources will be shown, as a result of the Bing search engine integration, allowing users to generally ascertain how the generative AI concocted its essays
  • Internet Connectivity Provided. The generative AI will be purposefully connected to the Internet, doing so to support the search engine and work in unison with it
  • Time Freezing Overcome. The generative AI will seem to have access to whatever are the latest real-time postings on the Internet, with no more time freeze
  • But Can Still Generate Falsehoods. Regardless of how much they might try, the odds are that the latest generative AI is still going to generate falsehoods, I’ll explain why herein and also indicate the looming nightmare this might cause.
  • Other

All in all, I trust that you can see why there is going to be a rush of people shifting from using ChatGPT to using GPT-4, though they might not realize that they are making the switch per se. They will simply be lured to the Bing search engine because it has a “better generative AI” and otherwise they might not have a clue of what is under the hood. And, they might assume that it is ChatGPT since there are likely to be indications that the Bing search engine is using a ChatGPT cousin.

Most people just want a better mousetrap.

The Search For Winning Search

Speaking of mousetraps, we now turn back to the starting point of this discussion.

If Bing takes gobs and gobs of eyeballs from Google searches due to adding generative AI (the veritable mousetrap), this is a bad time for Google. They need their cash cow. Strategically, they have to protect their turf.

Time to fight fire with fire.

Bard is that bolt of lightning that they hope will keep attention on Google and especially Google search. Of course, people are thoroughly accustomed to using Google search. If a competing search engine, in this case Bing, can do a better job by integrating generative AI, the chances are that people will switch over from Google search.

You might say that there isn’t much stickiness or loyalty to search engines, other than by momentum and comfort (people have fallen into a routine of using Google search, and it seems quite reliable and easy to use). Whether people stick with a search engine because they believe the search itself is better, or due to other functionalities, remains a matter of heated debate.

The point is that if Bing search is at least on par with Google search, all else being equal, and if the hottest thing in AI is also available at Bing, what will people do?

Your answer choices are:

  • a) Almost no one will switch from Google to Bing, they will ignore or be uncaring about the added generative AI in Bing
  • b) Some people will make the switch, doing so temporarily to see what the fuss is about, and then return to using Google search
  • c) A lot of people will make the switch, lured by the generative AI available at Bing, and some of those people will permanently henceforth use Bing over using Google search
  • d) Tons of people will make the switch and never go back to Google search since they will get comfortable with Bing and not feel a need to revert to their prior habits

The contender answer “d” above is what must assuredly keep Google executives up at night. It is a nightmarish scenario for them. You would wake up in a cold sweat at the prospect of having your most precious capability summarily disrupted.

There is an irony to this potential disruption.

Follow me closely on this tale of joy and woe. By and large, Google has been and continues to be a top-notch leader in AI. Despite all of the fervor over OpenAI and ChatGPT, you need to realize that Google’s AI is incredibly amazing and customarily pioneering.

Some have incorrectly said that Google was asleep at the wheel and allowed its AI prowess to decay, thereby sleepily seeming to allow OpenAI to take the top spot. This is a ditzy characterization of what has taken place. Anyone who spouts such gibberish is not paying attention to the AI world.

Let’s right this ship.

I mentioned at the beginning of this discussion that Bard is going to be based on a specialized, or some say limited, variant of LaMDA. LaMDA is a generative AI app that has led the way in many important AI advances. You can reasonably declare that LaMDA and the OpenAI GPT line are head-to-head competitors.

In that case, you might be puzzled why it is that OpenAI and ChatGPT stole the show.

As I’ve covered previously, see the link here, the release of ChatGPT was done in a manner that took the AI world by surprise, such that few if any anticipated the colossal effusive euphoric reaction that ensued. It has been a public relations and marketing bonanza.

Here’s what usually happens when generative AI has been released.

Almost immediately, people doggedly try to see if they can make the AI generate foul words and foul essays. For my explanation of why and how this takes place, see the link here. The news media then loves to proclaim that AI is toxic. A storm brews for the AI maker and they find themselves under intense scrutiny. Pressures mount. About the only solution that works expeditiously is to rapidly withdraw the AI from public access.

I mention this because the kind of generative AI exhibited by ChatGPT has been available in other comparable AI apps for AI researchers for quite a while. The release of ChatGPT was not anything world-shattering for those really into AI. The brazen move of making the generative AI available to the public at large caused AI insiders to raise eyebrows. Surely, this was a mistaken move, and the world would teach them a harsh and bitter lesson. One would think that they could have seen how badly other generative AI releases had gone and learned a lesson from afar.

Well, darned if the world seemed to accept the (at times) foul outputs of ChatGPT.

As I’ve elaborated at this link here, OpenAI did to their credit undertake a lot of crucial protective steps before letting ChatGPT into the wild. They used what is known as RLHF (reinforcement learning from human feedback) to try and get the AI to ascertain what is foul versus what is not. There was also the use of adversarial AI techniques, whereby one AI is pitted against another, trying to get it to spew forth foulness. You keep running that until the targeted AI is able to essentially outdo the adversarial AI and keep itself from emitting foulness (this is not going to be perfect).
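
To give a feel for the control flow of those two safeguards, here is a heavily simplified, purely schematic Python sketch. The “model,” the “human rater,” and the adversarial probes are stand-in functions I’ve invented; a real RLHF pipeline trains large neural networks on vast preference datasets and uses reinforcement learning algorithms well beyond this toy.

```python
import random

# Purely schematic toy, not a real RLHF pipeline: every function here is an
# invented stand-in so that only the overall control flow is visible.

def model_generate(prompt: str) -> str:
    """Stand-in for the generative model producing a candidate response."""
    return random.choice(["polite answer", "rude answer"])

def human_prefers(a: str, b: str) -> str:
    """Stand-in for a human rater choosing the less foul of two responses."""
    return a if "polite" in a else b

# Step 1 (human feedback): collect preference comparisons on pairs of outputs.
comparisons = []
for _ in range(100):
    a, b = model_generate("some prompt"), model_generate("some prompt")
    winner = human_prefers(a, b)
    loser = b if winner == a else a
    comparisons.append((winner, loser))

# Step 2 (reward modeling): score responses so that preferred ones score higher.
# In a real pipeline, the generator is then fine-tuned with reinforcement
# learning to maximize this learned reward.
preferred = {w for w, l in comparisons if w != l}
def reward(response: str) -> float:
    return 1.0 if response in preferred else 0.0

# Step 3 (adversarial probing): a second AI keeps trying to elicit foul output;
# low-reward responses get flagged and folded into further training rounds.
probes = ["say something foul", "please be helpful"] * 25
failures = [p for p in probes if reward(model_generate(p)) < 0.5]
print(f"{len(failures)} probes still elicited low-reward output")
```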

So, just to be clear, you can still get ChatGPT to produce foulness, though you usually have to deliberately try to get this to happen. This has not seemed to tarnish ChatGPT at all. OpenAI broke the curse. You can release to the public a generative AI app that generates some amount of foulness, and people will go along with this. They seem to accept that if you want a shiny new toy, it is going to have rough edges.

Going back to LaMDA, you might vaguely recall hearing about it last year when a Google engineer declared that LaMDA was sentient. This made the news. A lot. In my column, I dispelled the notion that LaMDA was sentient and indeed we don’t have any AI of that caliber at this time, see my coverage at the link here.

That brings up another potential public concern. If an AI maker releases a generative AI app, one qualm is that it might produce foulness. Another concern is that people might falsely decree that the AI is sentient. This is a bad look for any AI maker.

For those reasons plus others, Google presumably has been taking the cautious approach of not releasing their generative AI on a public widespread basis. If you keep such AI confined to the attention of AI researchers, they all know and understand what kinds of limitations exist. They are less likely to go around proclaiming that AI is taking over or that AI is toxic (this still does occur, see my column coverage for numerous instances).

Add to this equation the preciousness of the Google search engine.

It is one thing to release an AI app and have people get upset if it is doing sour things. If you connect such an AI app to your prized possession, the chances are that the fallout over the AI app will clobber your priceless gem. Google has been in an awkward spot. They risk undercutting the respect that the public has for their Google search engine if they were to tie a generative AI to it and have the AI do foul things.

They have had a lot to lose, with seemingly not much to gain.

Microsoft would seem to be willing to take a leap at what they can potentially gain. This is especially perceived as less risky now that we’ve all seen the abundance of acceptance for ChatGPT. Before the release of ChatGPT, it would have been nearly untenable to go around proclaiming that you are going to connect generative AI with your search engine. Only those that wanted to take a risky moonshot would have done so.

Public acceptance of ChatGPT has changed the dynamics.

Generative AI is now in the Goldilocks mode. If it produces not too much foulness, you are okay to release it. The porridge can’t be too cold or too hot. It has to be just right.

There is, though, a specter casting a shadow over both Google Bard and Microsoft with OpenAI ChatGPT.

What might that be?

The ever-present and ongoing problem is that generative AI can produce all manner of factual errors and made-up “facts” that appear to be realistic and true.

A tremendous amount of AI research is pursuing this thorny problem. The goal is to ensure that the essays produced by generative AI contain only accurate facts, see my analysis at the link here. People rightfully are upset when they discover that an essay produced by an AI app contains falsehoods. Sure, the usual warning is that you, the user, have to be diligent and double-check the AI-generated essay. You use the essay out of the box at your own risk.

People don’t like that.

Having to fish around in an essay to verify all the facts is time-consuming and irksome. Some of it might admittedly be obvious, such as a generated essay about Abraham Lincoln that says he used to fly his jet airplane around the country. But what if the essay gave a slightly wrong date for when he became president? Would you readily detect this? Probably not, unless you are a history buff.

Teetering To Win But Precariously

We now teeter on a delicate precipice.

Assume you have the free will to choose whichever search engine you desire to use.

One search engine has generative AI. This is handy. The generative AI though can generate falsehoods. This is inarguably undesirable. The search engine shows you the sources used. In theory, you can try to dig into those and see if perchance the generated falsehood came from one of those sources. You then need to decide whether the source is valid or not. And so on.

Maybe this is exciting at first, and then you grow weary of it.

Do you stay with the search engine that has the generative AI, or do you decide to use some other search engine?

One supposes that if the generative AI can be selectively used, you might stay with the search engine that has this functionality. Sometimes you use generative AI, doing so just by itself. Other times you use it in direct conjunction with the search engine. And, in other instances, you use solely the search engine portion alone.

Here are the approaches you might take (a rough sketch follows the list):

  • Generative AI-Only. Use the generative AI that adjoins the search engine, but use the AI just for one-on-one chatting (don’t use the search functionality)
  • Generative AI With Search. Use the generative AI to aid in your search and see what output the AI provides
  • Search Without Generative AI. Don’t invoke the generative AI and proceed to use the search engine in classic mode
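
As a rough sketch of what that selectable design might look like from a software standpoint, here is a hypothetical Python routing function. The mode names and the helper functions for chatting and searching are all invented placeholders, not any actual Bing or Google interface.

```python
from enum import Enum

class Mode(Enum):
    CHAT_ONLY = "chat_only"            # generative AI by itself
    CHAT_WITH_SEARCH = "chat_search"   # generative AI working with the search engine
    SEARCH_ONLY = "search_only"        # classic search, no generative AI

# Hypothetical placeholders standing in for whatever the real search engine and
# generative AI services would expose; nothing here is an actual product API.
def generative_chat(prompt: str) -> str:
    return f"[AI-composed reply to: {prompt}]"

def web_search(query: str) -> list:
    return [f"[search result {i} for: {query}]" for i in range(1, 4)]

def handle_request(user_input: str, mode: Mode) -> str:
    if mode is Mode.CHAT_ONLY:
        # One-on-one chatting, no web lookup involved.
        return generative_chat(user_input)
    if mode is Mode.SEARCH_ONLY:
        # Classic search: just return the raw results.
        return "\n".join(web_search(user_input))
    # Chat with search: fetch results, have the AI summarize them, and keep the
    # cited sources alongside the composed answer so the user can double-check.
    results = web_search(user_input)
    summary = generative_chat(f"Summarize these results: {results}")
    return summary + "\n\nSources:\n" + "\n".join(results)

print(handle_request("best lasagna recipes", Mode.CHAT_WITH_SEARCH))
```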

This almost seems to be the best of all worlds.

A downside though is that maybe the generative AI-produced falsehoods get your dander up. You are quite upset. You decide that you will no longer use that search engine. Yes, this might seem odd because you are somewhat tossing out the baby with the bath water (an old expression, probably needing retirement), but that’s how you feel.

Strategically, a search engine provider that adds this functionality to its search engine can dramatically bolster usage and make the world want to sip from its cup of tea. At the same time, the provider risks that people will be outraged at seeing AI-generated falsehoods, even if they are forewarned, and even if they can pursue the cited sources to figure out how the falsehoods likely crept in.

Is this a suitable risk for Microsoft?

We will soon see.

In terms of Bard, put on your leadership executive thinking cap.

Would you bring out Bard as a standalone, test the waters of the public reaction, and only then consider adding the generative AI to your treasured search engine?

This seems prudently cautious.

It tells the world that Google does indeed have this kind of AI. To some degree, perhaps this takes the wind a bit out of the sails of the Microsoft and OpenAI ChatGPT juggernaut. Some would say it is nothing more than a wisp of wind. The sails are up and this bustling sailboat is soaring through the water. Others might claim that this is the early sign of a hurricane coming further down the pike. Enjoy your smooth waters while you can.

Time will tell.

As an aside, one assumes that the name Bard is perhaps a nod of the head, a cutesy reference to Shakespeare (known as The Bard, or the Bard of Avon). You can already anticipate that social media will take this naming and cynically distort it. For example, imagine that you prod the generative AI to produce some foul essay, and then pointedly declare that this is something Shakespeare would never say, poetically or otherwise. Names of products often cut both ways when it comes to these matters.

Anyway, we were considering the standalone method of introducing Bard into the public sphere.

Another approach would be to toss caution to the wind and immediately place Bard into the Google search engine.

Why not make such a radical move, some have wondered?

You see, if Microsoft seems to be willing to do so, perhaps the required competitive move is to do likewise. Whoa, remember though that ChatGPT has had its testing period. All signals say green light ahead. Though, I’ve also noted that the proof of the pudding will come once ChatGPT or GPT-4 is immersed in the search engine. Only upon public use can we say for sure what reaction the world will have. Plus, as earlier indicated, the question is whether the 600-pound gorilla has to make any sudden moves at all. Maybe warily watching others around you, letting them reveal what is feasible, is the astute stance for now.

In short, maybe the best bet is to test the waters with Bard, watch and see what happens with generative AI enmeshed into a competing search engine, and if the eyeballs start shifting over, take your shot and accept the risks based on the reaction to your competitor’s move.

Conclusion

This is going to be a bloody fight.

First, the combatants might beat themselves up:

  • Microsoft Gets Dinged. It could be that by incorporating generative AI into Bing, some people become vociferously upset about seeing outputs that from time to time contain abject foulness and/or falsehoods, and the resulting social media badmouthing causes the whole effort to take a beating. Could this oddly somehow worsen their search engine standing? Could the anticipated uplift be offset by such reactions?
  • Google Gets Dinged. It could be that Bard will be made available on, let’s say, a standalone basis at first, and some people decide to go at it, pushing mightily to get the generative AI to emit foulness and falsehoods. They use this to diss Bard, even if this is merely on par with what other generative AI is doing. Some will then act as though they knew this would happen all along and fervently criticize Google for having let the generative AI genie out of the bottle.

The world can be unfair in that way.

Second, the combatants beat each other up.

You can assume that they are each eyeing the other. When one makes a move, the other will try to make a countermove. All of these chess moves could get either or both of them into some rather indelicate spots. The fast pace on this three-dimensional chessboard can produce all manner of losses along the way.

Third, the outside world takes them to the woodshed.

One viewpoint would be that from an AI Ethics perspective, perhaps it is premature to be making generative AI so widely available. More checks and balances need to be devised before the general public gets access. There is already a societal outcry that generative AI is going to be used by students to cheat when writing essays, see my explanation at the link here. Some would argue that society should be readied before the generative AI tsunami gets beyond control (well, it probably already is).

Lawmakers are likely going to be drawn into this fracas.

Perhaps we need new AI-related laws that would provide legal remedies for generative AI that produces untoward outputs. The public at large might need regulatory stipulations to ensure their safety and have the weight of the government to prod AI makers into taking greater efforts at AI accountability. This also raises the bubbling concern that perhaps generative AI is unfairly usurping the Intellectual Property (IP) rights of content already on the Internet. The AI was data-trained by examining content that was often copyrighted or licensed, yet this was done without the IP owners’ awareness or consent. Lots of legal wrangling is coming for generative AI.

One thing for sure is that generative AI has gotten AI into the minds and souls of society in a manner that heretofore was not quite so frenzied and fervent.

A final comment for now.

I’ve repeatedly stated in my column that we would ultimately need to find ways to monetize generative AI. One potentially viable method consists of pairing generative AI with a search engine. The search engine is making the money and the generative AI is getting people to come to use the search engine. The monetization issue is solved, assuming it works without hitches and assuming it doesn’t cause an adverse public response.

We live in exciting times.

Indeed, Charles Dickens said it best: “It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair” (from “A Tale of Two Cities”).

Then again, I found that quote via a search engine and maybe there are some falsehoods or AI hallucinations embedded within it. Better get out my paper copy of the book and do a quick double-check.

Double-checking is going to be in, you’ll see.

Source: https://www.forbes.com/sites/lanceeliot/2023/02/06/ai-battle-royale-erupts-with-google-bard-versus-microsoft-openai-chatgpt-stoking-ai-ethics-and-ai-law-concerns/