OpenAI CEO Suggests That ChatGPT And Generative AI Have Hit The Wall And Getting Bigger Won’t Be The Way Up, Raising Eyebrows By AI Ethics And AI Law

I’ve got two questions for you that you’ve undoubtedly heard before, at least in a generic sense.

Prepare yourself mentally.

First, have we hit the wall?

Second, does size matter?

Both of those questions have deeply entered into the behind-the-scenes news about the latest in generative AI.

Generative AI is the type of Artificial Intelligence (AI) that can generate various outputs via the entry of text prompts. You’ve likely used or heard about ChatGPT by AI maker OpenAI, which allows you to enter a text prompt and get a generated essay in response, referred to as a text-to-text or text-to-essay style of generative AI (for my analysis of how this works, see the link here). The usual approach to using ChatGPT or other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing and at times startling, given the seemingly fluent nature of those AI-fostered discussions.

Some believe that maybe generative AI has hit the wall.

Part of this wall-hitting could be due to the lack of added benefit from making generative AI larger and larger. Ergo, the two questions I posed a moment ago are on the minds of those who research and build the latest in generative AI. Likewise, investors who put money into generative AI startups are anxiously wondering the same thing. If they are pouring precious venture capital funds into generative AI seedlings, they might not get the later windfall they expect.

Bigger might not be any better.

The wall is the wall.

I’ll repeat and restate those two questions I previously posed, now in a more suitable context:

  • 1) Have today’s capabilities underlying generative AI hit the wall?
  • 2) Is the focus on scaling up to bigger and bigger generative AI a misdirection, such that bigger sizing won’t especially move the needle?

The first question ponders whether generative AI as we know it today might be nearing a proverbial wall, in the sense of not getting much better than it is right now. The fluency and capabilities of generative AI could already be at their peak, and regardless of what we do next, we have reached the end of the road. For the second question, which is directly related to the first one, the size of current generative AI is said to have peaked, and larger sizes will not gain us much, if any, noticeable traction.

I will unpack all of this in a moment, so bear with me.

The brouhaha was launched last week when various reporting about a talk by the CEO of OpenAI, Sam Altman, indicated that he said this: “I think we’re at the end of the era where it’s going to be these, like, giant, giant models.” Furthermore, he reportedly stated, “We’ll make them better in other ways.”

Reporters and pundits seemed to overemphasize the remark about being at the end of the larger-is-better era, meanwhile tending to underplay or omit the vital additional remark that there might be a means of improving generative AI in other ways.

Let’s make sure that we keep both of those co-dependent conditions simultaneously in mind.

I’ll add a bit of a twist to all of this.

In case you didn’t already know, there has been an ongoing undercurrent of concern about generative AI being devised by scaling upwards and doing so at seemingly exorbitant costs. Generative AI such as the widely and wildly popular ChatGPT consumed tons of bucks when being developed and continues to chew up dollars for each use on a daily and minute-by-minute basis. If people are playfully using ChatGPT and other generative AI, some have wondered aloud whether the juice is worth the squeeze.

For example, the sheer amount of computing resources consumed, along with the related qualms about environmental impact, is but one of many reasons to be counting our pennies on generative AI.

There is a now-classic research paper, posted in 2021, that asked serious and sobering questions about the preoccupation with making AI bigger and bigger:

  • “In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models” (in a paper entitled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” by authors Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell, per FAccT ’21, March 3–10, 2021).

The prevailing wisdom about generative AI has been that the larger they are, the more fluent they will seem during natural language interaction. Admittedly, this has been a relative truism. Larger generative AI has pretty much taken the world by storm. Prior years of smaller-sized generative AI have been left in the dust.

A presumed trajectory is that for each increment in size, we get in return a proportionate improvement in apparent fluency (maybe even an exponential one). Each step up in size produces a step up in fluency. But like a flight of stairs that eventually reaches the top floor, it could be that there is little or no further upward progress to be made.

It could be that each newest stepwise size enlargement gets us just teeny tiny improvements in fluency.

The size increase though is likely to be quite costly. In contrast, the added benefit is seemingly going to be marginal. Should you keep striving in that direction or is it an unwise tradeoff that unduly consumes energy and costs without a reasonable return to be had? Worse still, suppose that the improvement is so negligible that it is rounded off to being essentially zero in magnitude. You are tossing big bucks into the input and getting almost nothing added in return.

Not an astute proposition.
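To make that diminishing-returns intuition concrete, here is a minimal sketch, assuming a stylized power-law relationship between model size and loss of the kind reported in the neural scaling-law literature (the constants here are illustrative assumptions, not measured values):

```python
# Stylized illustration only: assumes loss follows a power law in model
# size, loss(N) = c * N^(-alpha), loosely in the spirit of published
# neural scaling-law studies. The constants below are made-up assumptions.
c, alpha = 10.0, 0.07

def loss(n_params: float) -> float:
    """Hypothetical loss as a function of parameter count."""
    return c * n_params ** -alpha

for n in [1e9, 1e10, 1e11, 1e12, 1e13]:
    gain = loss(n / 10) - loss(n)  # improvement bought by the last 10x jump
    print(f"{n:.0e} params: loss {loss(n):.3f}, gain from 10x growth {gain:.3f}")
```

Under such a curve, each tenfold enlargement buys a smaller absolute improvement than the one before it, even as the compute bill grows roughly in proportion to the size.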

If you look around at the mania spurred by ChatGPT, you’ll observe that numerous AI research labs, think tanks, startups, and larger firms are all tending toward the bigger-is-better mantra. They are eagerly cobbling together massive computing resources and making cloud providers rich by striving toward larger generative AI or LLMs (the insider phrasing is that generative AI is a form of LLM, or Large Language Model, meaning a computational and mathematical model of natural language devised at a large scale so as to broadly encompass human verbal interaction).

What do I mean when saying that generative AI is getting larger?

Allow me to elaborate.

Three major components pertain to the generative AI enlargement pursuit:

  • 1) Model size. The computational model is composed of various networked linkages and their adjustable parameters.
  • 2) Dataset size. The volume of data used to train the model.
  • 3) Computing resources. The processing cycles and computing capacity used to build and test the generative AI, along with what is consumed each time the resultant generative AI app is used.

As a flavor of the model sizes involved, the ChatGPT you are using or are aware of is based on an OpenAI model known as GPT-3.5. GPT-3.5 is a variant of the earlier GPT-3. In turn, GPT-3 is a variant of the earlier GPT-2. And in turn, GPT-2 is a variant of GPT-1. An easier way to think of this is that first in the series there was GPT-1, which was recast and redone into GPT-2, which in turn led to GPT-3 and then GPT-3.5.

Various published indications suggest that the number of parameters associated with the respective lineage consists of this:

  • GPT-1: 110 million or so parameters (i.e., millions in size)
  • GPT-2: 1.5 billion or so parameters (i.e., low billions in size)
  • GPT-3: 175 billion or so parameters (i.e., hundred billion in size)

Without quibbling over those exact counts, the gist is that the number of parameters has risen from around one hundred million to a tad over a billion, and then risen again to well over a hundred billion. Each jump is at least an order of magnitude increase in size (the leap from GPT-2 to GPT-3 was closer to two orders of magnitude).
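As quick back-of-the-envelope arithmetic on those ballpark figures (using the published approximations above):

```python
# Back-of-the-envelope ratios using the ballpark parameter counts cited above.
sizes = {"GPT-1": 110e6, "GPT-2": 1.5e9, "GPT-3": 175e9}

names = list(sizes)
for prev, curr in zip(names, names[1:]):
    print(f"{prev} -> {curr}: roughly {sizes[curr] / sizes[prev]:.0f}x larger")
# Prints: GPT-1 -> GPT-2: roughly 14x larger
#         GPT-2 -> GPT-3: roughly 117x larger
```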

Where is this heading, you might be asking?

The avid pursuit of larger and larger models by firms or entities with the wherewithal to do so has pushed the latest generative AI toward sizes in the one-half trillion to somewhat over a trillion parameters. Whether those magnitude jumps can deliver a commensurately taller order of natural language fluency has yet to be fully vetted.

To clarify, we do not yet know that the larger sizes won’t provide substantive added boosts to generative AI. Nor do we know that it will. We just don’t know for sure either way.

This is an open question.

One viewpoint is that we won’t know until we get there.

This is somewhat based on the history of how things have been coming along. When using GPT-1 in the earlier days, it was obvious that the generative AI was not seemingly fluent. GPT-2 was better, but not enough to write home about. GPT-3 was definitely an improvement. The trend has been that larger is better.

For all we know, the next breaking point of witnessing a remarkable improvement might be at say the multi-trillions stage. If we can get to an order of magnitude of hundreds of trillions, perhaps doing so will knock our socks off.

The doom and gloom crowd would argue that this is perhaps a pipedream. You will bear the enormous blood, sweat, and tears, and yet the multi-trillion parameter-sized generative AI might be about the same in apparent fluency as the ones at multi-billion size or less. Yikes, those burdensome costs to get to the multi-trillions might have been for naught. A total waste, some contend. A fool’s errand.

Few would seem desirous of pushing into the stratosphere and chewing up the dollars to do so, meanwhile finding themselves relatively empty-handed. Gosh, those old-time multi-billion parameter-sized generative AI are nearly as good as the shiny new multi-trillion parameter-sized ones. It would be like building a hugely expensive jet plane that, it turns out, can only fly as fast as the older-style prop planes.

Embarrassing.

Infuriating.

Downright outrageous to those who put money into the dreamy pursuit.

On the other hand, since we don’t know whether the larger size will fly or flop, there is a potential pot of gold that serves as an alluring attraction. Suppose that the bigger generative AI could run circles around the older smaller generative AI. Those that had made bucks with their older smaller generative AI would be behind the eight ball. They would become relics of the past.

Here’s another facet to noodle on.

Some believe that the path to Artificial General Intelligence (AGI) is via larger and larger generative AI or LLMs.

Therefore, if you are desirous of someday attaining sentient AI, also known as AGI, you have to keep pushing the boundaries and climbing up that hill. If we decide to stay at the billions of parameters, perhaps we won’t ever get a solid glimpse of AGI. It could be that when we get to the multi-trillions or higher, AGI starts to be a beam of light that we can see at the end of the tunnel.

That AGI debate can go two ways.

One perspective is that if all we need to do is leap forward in the sizing department for generative AI, this ought to be carefully and mindfully pursued so that we don’t surprise ourselves by recklessly falling into AGI. There are all those worrisome existential risks associated with AGI, see my coverage at the link here. The simple yet somewhat beguiling idea is that while playing with generative AI that is in the multi-trillions or higher, we find that AGI is emerging and we aren’t ready to handle it.

Oops, we spark an AI singularity such that the AI suddenly emerges as sentient and opts to enslave humankind or wipe us out. Not a promising prospect.

The other side of that AGI coin is that we falsely assume that the larger size is going to move us toward AGI. We spend all manner of attention and effort in that bigger-is-better direction. Darn it, we find that the larger size has nothing to do with reaching AGI. Had we been more circumspect, we could have used that time and those resources for something that really would have attained AGI.

Some argue that AGI is going to have a good side to it. AGI might be able to discover a means to cure cancer. There are lots of postulated benefits from AGI. As such, if we have gone down a rabbit hole and squandered our AGI pursuit by steaming ahead on larger-sized generative AI, those hoped-for benefits remain elusive and further beyond our reach.

This conundrum about hitting the wall and whether bigger is better has a lot more nuanced and tricky sides to it.

Lots more to unpack.

Vital Background About Generative AI

Before I get further into this topic, I’d like to make sure we are all on the same page overall about what generative AI is and also what ChatGPT and its successor GPT-4 are all about. For my ongoing coverage of generative AI and the latest twists and turns, see the link here.

If you are already versed in generative AI such as ChatGPT, you can skim through this foundational portion or possibly even skip ahead to the next section of this discussion. You decide what suits your background and experience.

I’m sure that you already know that ChatGPT is a headline-grabbing AI app devised by AI maker OpenAI that can produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application. This type of AI is classified as generative AI due to generating or producing its outputs. ChatGPT is a text-to-text generative AI app that takes text as input and produces text as output. I prefer to refer to this as text-to-essay since the outputs are usually of an essay style.

Please know though that this AI and indeed no other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language. To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.

There are four primary modes of being able to access or utilize ChatGPT:

  • 1) Directly. Direct use of ChatGPT by logging in and using the AI app on the web
  • 2) Indirectly. Indirect use of a kind-of ChatGPT (actually, GPT-4) as embedded in the Microsoft Bing search engine
  • 3) App-to-ChatGPT. Use of some other application that connects to ChatGPT via the API (application programming interface)
  • 4) ChatGPT-to-App. Now the latest or newest added use entails accessing other applications from within ChatGPT via plugins

The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.
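To give a flavor of the app-to-ChatGPT mode, here is a minimal sketch of calling the model behind ChatGPT via the OpenAI API, using the Python client library as it existed at the time of writing (the API key is a placeholder, and you should consult OpenAI’s documentation for current model names and call signatures):

```python
import openai  # pip install openai (the 2023-era client is sketched here)

openai.api_key = "sk-..."  # placeholder; substitute your own API key

# Send a single-turn chat request to the model family underlying ChatGPT.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize why bigger AI models cost more to run."},
    ],
    temperature=0.7,  # dials the randomness of the generated wording up or down
)

print(response["choices"][0]["message"]["content"])
```

An app built this way can wrap its own interface and business logic around the generated text, which is precisely why the API mode is so significant.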

I and others are saying that this will give rise to ChatGPT as a platform.

As noted, generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
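That probabilistic functionality can be illustrated with a minimal sketch of temperature-based sampling over next-word probabilities; the tiny vocabulary and scores below are toy assumptions, not anything from an actual model:

```python
import math
import random

# Toy next-word scores (logits) that a language model might assign.
logits = {"cat": 2.0, "dog": 1.5, "pizza": 0.2}

def sample_next_word(logits: dict, temperature: float = 1.0) -> str:
    """Softmax-with-temperature sampling: lower temperature is more deterministic."""
    scaled = {w: s / temperature for w, s in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {w: math.exp(s) / total for w, s in scaled.items()}
    r, cumulative = random.random(), 0.0
    for word, p in probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # fallback for floating-point rounding

# Repeated runs yield varied picks, which is why outputs are "pretty much unique."
print([sample_next_word(logits, temperature=0.8) for _ in range(5)])
```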

There are numerous concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.

There have been some zany outsized claims on social media about generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of outsized claims. You might politely say that some people are overstating what today’s AI can do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in its ability to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that President Abraham Lincoln flew around the country in a private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

Into all of this comes a slew of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.

Grappling With The Wall

We are ready to further unpack this thorny matter.

The remarks by the OpenAI CEO about having reached the end of the era regarding bigger and bigger models have elicited all manner of commentary. There are critics of the remarks. There are skeptics. Some applaud the remarks and insist that it is about time that the zany race toward larger and larger generative AI is brought to a screeching halt.

Let’s examine some of the more pronounced and insight-bearing angles.

I’ll cover these ten salient points:

  • 1) Genuine and forthright as a significant wake-up call
  • 2) Sneaky attempt to lull competitors into complacency
  • 3) Doesn’t want to spend the bucks and proffers a handy excuse
  • 4) Worried about losing their existing advantage
  • 5) Aim first for compacting and then aim for enlarging
  • 6) Hinting that revamping the core premise is going to be costly and longer-term
  • 7) Secretly worried that AGI and existential risk are on the prevailing path
  • 8) Realizes that bigger AI is a bigger target by lawmakers and the public
  • 9) Trying desperately to stay ahead of a dead-end and inflated expectations
  • 10) Other

Put on your seatbelt and get ready for a roller coaster ride.

1) Genuine and forthright as a significant wake-up call

Maybe we can take the remarks for what they straightforwardly indicate.

It could be that a sincere belief is at play. The belief is that going bigger isn’t the path ahead. We need to shake ourselves off of the narrow assumption that all we need to do is go big. A wall is in front of us. Everyone is pell-mell driving at breakneck speed toward that wall.

Someone must wake us all up.

Who better to do so than the head of the widely and wildly successful ChatGPT? Seems like we ought to appreciate the handy wake-up call. Full stop, period.

2) Sneaky attempt to lull competitors into complacency

An alternative interpretation is that this is a clever and, shall we say, sneaky form of misdirection.

How can you get your competitors to stop or curtail their madcap push toward bigger generative AI?

Easy-peasy, simply tell the world that it is a lost cause. The competition will either drop their efforts because of the proclamation, or they will at least lose support from their stakeholders and shareholders. It will put the competition back on their heels, having to justify why they are spending toward a false hope.

The beauty of the remarks is that since no one can say for sure whether the pronouncement is right or wrong, it can stand aloft and withstand criticism. It was what was believed at the time that it was uttered. Sure, maybe later on if someone else comes out with a larger-sized generative AI that seems a lot better, this might be a bit of egg on the face (some have likened the remarks to those statements made in the early days of PCs that nobody would have one in their homes, or that computers will always be the size of a refrigerator, etc.).

But at the time, perhaps the confusion sowed amidst your competition was well worth that later oopsie realization. Maybe no one will even remember that the remarks were stated anyway.

Competitive ingenuity at work, some proclaim.

3) Doesn’t want to spend the bucks and proffers a handy excuse

One theory is that OpenAI doesn’t want to keep paying through the nose as part of the death march toward larger and larger generative AI.

How can they cut back on that spending?

The ripe answer is by claiming that bigger isn’t better. If the world buys into that assumption, there is no need to keep consuming all those computer processing cycles. In one fell swoop, you have gotten yourself out of a bind.

This might seem credible as an explanation, except for the seeming fact that OpenAI has gotten a humongous influx of money via the Microsoft deal. Were it not for that, this might be a plausible ploy; given that windfall, it seems farfetched.

4) Worried about losing their existing advantage

Suppose that you had managed to climb ahead of the pack and found yourself at the top of the hill. You would relish that lofty position.

Glancing around, you might notice there are other even higher hills around you. Maybe you want to stay at the top of your existing hill. Maybe you don’t want others to try and climb those other hills and get higher than your position.

To avoid losing your existing advantage, you try and convince others that your hill is the topmost of the hills. They are not going to believe you, especially when they can see the other hills with their own eyes.

In that case, claim that the other hills are bad and not worthy of climbing. They are barren wastelands. They have nothing profitable to offer.

Sun Tzu taught us via The Art of War this crucial dictum: “The supreme art of war is to subdue the enemy without fighting.”

5) Aim first for compacting and then aim for enlarging

Something that is already gaining attention in the AI field entails trying to compose smaller LLMs or generative AI that have as much potency as the larger ones do.

It could be that today’s generative AI is unnecessarily bloated. Some suggest that there is unneeded fluff. Junk and extra stuff might be embedded in there.

You know how things go when you first devise something new. Everything including the kitchen sink is sometimes tossed into the mix. After having gotten something valuable out of the concoction, you can go back and discern what works and what is not essential.

A tightening of the belt can take place.

A similar notion was expressed in the research paper I earlier cited, namely this:

  • “Alongside work investigating what information the models retain from the data, we see a trend in reducing the size of these models using various techniques such as knowledge distillation, quantization, factorized embedding parameterization and cross-layer parameter sharing, and progressive module replacing” (ibid).

Cell phones have gotten smaller, faster, and more feature-rich during the evolution of cell phone technology. Cars have likewise done the same. It just makes sense that we can probably undertake compacting measures to get as much out of smaller generative AI as we do from today’s larger-sized generative AI.

The thinking is that if we find useful ways to compact generative AI, a non-compacted version of today that is a trillion parameters in size might be equally achieved in a billion parameters. That leaves us a lot of headroom. We can focus on compacted generative AI and aim for the trillion anew, possibly then arriving at whatever advantages there are to multi-trillion parameters in the older ways of doing things.
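To give a concrete flavor of one such technique, here is a minimal sketch of the knowledge distillation idea mentioned in that earlier quote, in which a small “student” model is trained to match the softened output distribution of a large “teacher” (a generic PyTorch-style sketch, not any particular lab’s recipe):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    A higher temperature softens both distributions so the student also
    learns the teacher's near-miss preferences, not just its top picks.
    """
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 per the standard distillation formulation.
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature**2

# Toy usage: a batch of 4 examples over a 10-word vocabulary.
teacher_logits = torch.randn(4, 10)                        # frozen big model's outputs
student_logits = torch.randn(4, 10, requires_grad=True)    # small model's outputs
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow into the student only
```

The appeal is that the student ends up far smaller and cheaper to run while preserving much of the teacher’s behavior, which is exactly the headroom argument above.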

6) Hinting that revamping the core premise is going to be costly and longer-term

Here’s another take on this.

Let’s assume that the wall is ahead of us. As a CEO, you are mulling over having to rejigger all the prior work that went into your goldmine. This could be costly. It could take time. The spotlight though is upon you to bring a new rabbit out of a hat, doing so while the world is waiting with bated breath.

A trail of breadcrumbs has got to be started. Allow the world to gradually and slowly realize that there isn’t a rabbit waiting around for that next trick. You need to go out and find something as astonishing as a rabbit and a hat.

If you came out directly and said this in plain language, the chances are that the world would stomp on you. Their expectations are through the roof right now. Your best bet is to ease the world into the hardened reality that things are going to take time. And be costly.

Hints aplenty might buy the needed breathing space.

7) Secretly worried that AGI and existential risk are on the prevailing path

I mentioned earlier herein that some believe today’s generative AI path is heading toward AGI.

Okay, if so, maybe the remarks were a secret means of warning us about the possible upcoming dangerous future. Steer us away from that dreaded potentiality. Mention that bigger isn’t better. That could dissuade people from pursuing bigger-sized generative AI. In turn, that might keep us from inadvertently landing on AGI and those perilous existential risks.

Wait a second, some contend, why not just say so? Why be so sly about it?

The retort is this: imagine the visceral emotional reaction of society if we were all told that sentient AI is to be found at the larger sizing of generative AI. Some would decry this and do whatever they could to put the kibosh on larger generative AI. Others might accelerate their building efforts under the upbeat belief that attaining AGI will be the best thing since sliced bread.

No sense in starting an avalanche and spurring chaos. Help society by quietly steering away from the believed imminent hazards up ahead.

8) Realizes that bigger AI is a bigger target by lawmakers and the public

A practical consideration is that the larger generative AI gets, the greater the chances of privacy intrusions and other legal maladies being exposed. See my coverage at the link here and the link here, just to name a few.

Thus, wanting to avoid a legal morass of new AI laws and regulations, it might be sensible to keep generative AI at its existing size. Deal with the legal issues of that size. Do not tempt fate by awakening the 600-pound gorilla of governmental intervention.

The thing is, this is likely to happen anyway and the bigger size per se won’t be the triggering mechanism, as I’ve discussed at the link here.

9) Trying desperately to stay ahead of a dead-end and inflated expectations

This perspective is that such remarks are a sure sign of desperation.

Some vehemently argue that ChatGPT has gotten more than its fair share of fifteen minutes of fame. In addition, there is expressed doubt that they can do anything to top it.

Think of this as a blockbuster movie. You make one. It takes the world by surprise. When you try to do a sequel, the odds are that no matter what you do, the original will remain heralded and the sequel will be a disappointment.

Get in front of those outsized expectations by trying to bring down the expectations, if you can.

10) Other

You can come up with a lot of other possibilities.

In addition, the aforementioned avenues can be combined or intermixed. All manner of permutations and combinations can be concocted.

Conclusion

A final thought for now.

The odds are that we are going to see lots of efforts undeterred by the bubbling notion that bigger isn’t better. Many will pursue bigger and believe that bigger is the right path. In one sense, it is “easier” to chase bigger-is-better than to try to rejigger existing methods.

Get enough money and enough computer processing power. Toss hardware at it. Then hope for the best.

I say this to emphasize that there is no one-size-fits-all stipulation for generative AI.

In other words, while some are getting bigger, others are earnestly seeking to rejigger. This might involve compacting. This might involve inventing different under-the-hood structures. A slew of new approaches is being hatched each day. On top of that, those new approaches can also inevitably shift into the bigger is better camp too.

We might reflect mindfully on this handy quote by Sun Tzu:

  • “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

One takeaway is that those in AI ought to at least be cognizant of which path they are taking. Do not blindly move ahead. Be aware of the tradeoffs in whichever avenue you are pursuing. Those in AI who know neither themselves nor their enemies (competitors) will undoubtedly and indubitably lose out in the battle (contest) toward AI advancement.

Said like a true AI warrior.

Source: https://www.forbes.com/sites/lanceeliot/2023/04/26/openai-ceo-suggests-that-chatgpt-and-generative-ai-have-hit-the-wall-and-getting-bigger-wont-be-the-way-up-raising-eyebrows-by-ai-ethics-and-ai-law/