Is It True That Generative AI ChatGPT Will Flood The Internet With Infinite Content, Asks AI Ethics And AI Law

Do you perchance know the inspirational children’s book A Fish Out of Water?

The enchanting book was written by Helen Palmer (real name Helen Palmer Geisel) and was based on a short story by Dr. Seuss (real name Theodor Geisel). The husband-and-wife team produced a now legendary contribution to children’s literature, delighting youngsters everywhere.

In case you are unfamiliar with the plot or need a refresher, allow me to briefly summarize. A boy buys a goldfish from his local pet store. He is sternly instructed to never overfeed the tiny creature. You never know what might happen if you do so.

The boy inadvertently overfeeds his goldfish, just once, but this triggers a staggering amount of unbridled growth.

Things begin to go quite awry.

The once tiny fish quickly outgrows its fishbowl and gets so large that the boy puts the beloved pet into a bathtub in the house. The fish keeps growing and growing. This seems to be unstoppable.

Soon, the police and the fire department come to the boy’s aid and transport the now elephant-sized goldfish to the local public pool. Ultimately, the pet store owner arrives and manages to shrink the goldfish back down to normal size. We don’t know how this magical feat was achieved. The boy is cautioned again to avoid overfeeding.

Lesson learned, the hard way.

We might need to heed this same harrowing lesson when it comes to the future of the Internet.

How so?

Today’s reality is that we might have devised a form of Artificial Intelligence (AI) that is going to expand and fill the Internet with a massive and unending torrent of data. There is a lot of handwringing that Generative AI, the hottest AI in the news these days, will do just that.

Generative AI is able to generate or produce outputs such as text from nothing more than a simple prompt entered by a human user. A complete and extensive essay can be generated via a few well-chosen words. You might be aware of generative AI due to a widely popular AI app known as ChatGPT that was released in November 2022 by OpenAI. I will be saying more about this momentarily.

Some have been fervently warning that generative AI can be used to create a seemingly infinite amount of content.

One person can easily leverage generative AI to produce many thousands of essays in merely a single online session, doing so with minimal labor on their part. The person could then opt to post the generated essays on the Internet. Imagine this done at scale. In essence, go ahead and multiply this by the millions upon millions of Internet users. A veritable tsunami of generated content can be readily produced and posted.

Rinse, repeat, doing so incessantly, day after day, minute by minute.

Is this a sky-is-falling jittery claim or does it have merit?

In today’s column, I will be addressing these expressed worries that we are facing a future of an Internet completely clogged and swamped by generative AI content. We will look at the basis for these qualms and consider some potential upsides that aren’t usually stated. I will be occasionally referring to ChatGPT during this discussion since it is the 600-pound gorilla of generative AI, though do keep in mind that there are plenty of other generative AI apps and they generally are based on the same overall principles.

Meanwhile, you might be wondering what in fact generative AI is.

Let’s first cover the fundamentals of generative AI and then we can take a close look at the pressing matter at hand.

Into all of this comes a slew of AI Ethics and AI Law considerations.

Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from running amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.

Fundamentals Of Generative AI

The most widely known instance of generative AI is represented by an AI app named ChatGPT. ChatGPT sprang into the public consciousness back in November 2022 when it was released by the AI research firm OpenAI. Ever since, ChatGPT has garnered outsized headlines and astonishingly exceeded its allotted fifteen minutes of fame.

I’m guessing you’ve probably heard of ChatGPT or maybe even know someone who has used it.

ChatGPT is considered a generative AI application because it takes as input some text from a user and then generates or produces an output that consists of an essay. The AI is a text-to-text generator, though I describe the AI as being a text-to-essay generator since that more readily clarifies what it is commonly used for. You can use generative AI to compose lengthy compositions or you can get it to proffer rather short pithy comments. It’s all at your bidding.

All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln” the generative AI will provide you with an essay about Lincoln. There are other modes of generative AI, such as text-to-art and text-to-video. I’ll be focusing herein on the text-to-text variation.

Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining many millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.

There are numerous concerns about generative AI.

One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.

There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of outstretched claims. You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

We are ready to move into the next stage of this elucidation.

Looking At What Generative AI Might Do To The Internet

Now that you have a semblance of what generative AI is, we can explore the vexing question of whether this type of AI is going to cause chaos and bedlam via a bloating of the Internet.

Here are my eight vital topics pertinent to this matter:

  • 1) Size of the Internet
  • 2) Indexing of the Internet
  • 3) Gauging What Is Generative AI-Produced Content
  • 4) What’s Wrong With Generative AI Content Anyway
  • 5) Will People Post Generative AI Content To The Internet
  • 6) Maybe Paywall Approaches Will Be Revered
  • 7) The Multi-Modal Morass Of Generative AI Awaits
  • 8) Vicious Or Virtuous Cycles Of Generative AI

I will cover each of these important topics and proffer key considerations that we all ought to be mindfully mulling over. Each of these topics is an integral part of a larger puzzle. You can’t look at just one piece. Nor can you look at any piece in isolation from the other pieces.

This is an intricate mosaic and the whole puzzle has to be given proper harmonious consideration.

Size Of The Internet

One of the first aspects to be considered consists of the size of the Internet.

This is particularly important. The claim that is being made about generative AI is that it will apparently enormously bloat the Internet. We will have all manner of added content due to the ease of employing generative AI to churn out massive volumes of digital materials. If so, the logically sensible question entails how big the Internet is today, along with how much might generative AI spew forth additional content that otherwise would not have been on the Internet.

Trying to get a handle on the size of the Internet is unfortunately quite difficult and immensely imprecise.

One estimate that was posted on Finance Online suggests that the Internet currently is at least 74 zettabytes (ZB) in size and will potentially reach 463 ZB by the year 2025 (note that the forecasted growth does not seem to explicitly take into account generative AI as a factor per se and merely assumes all else is equal in deriving this projection).

There are lots of other estimates of the existing size of the Internet. Likewise, there are lots of other estimates of the expected growth in size. I don’t want to get bogged down in arguments over such numbers and am just seeking to emphasize that the Internet is undoubtedly mammoth in size. Furthermore, it is worth noting that all reasonable expectations are that the Internet will, in the normal course of events, continue unabashedly on its skyrocketing growth path.

You might also find of interest that Statista has posted various statistics suggesting that there are presently around 5.16 billion Internet users. This is calculated as representing 64.4% of the global population. Are you surprised? On the one hand, we might naturally assume that most people would indeed be on the Internet. This though is somewhat skewed from an insider’s perspective because many people do not have ready access to the Internet or otherwise are unable to garner access. In any case, the expectation is that Internet access will ultimately get less expensive and become even more widespread, thus the number of Internet users will indubitably rise.

I am dragging you through those statistics to bring us to a very crucial question.

How much will generative AI add to the existing and ongoing growth of the Internet?

That’s what we want to know. You see, the claim about the impacts of generative AI seems to take at face value that, of course, generative AI is going to flood the Internet. All of that is a bit of handwaving if you conveniently or on an absent-minded basis avoid discussing actual numbers and true counts of things.

Take for example the general assumption that the Internet is somewhat around 100 ZB in size and growing. If you believe that generative AI is going to add perhaps 1 ZB per year, this is a drop in the bucket of the overall magnitude of the Internet.

Generative AI would be akin to splashing a pebble into a vast ocean.

That doesn’t seem to fit the prevailing narrative on this weighty topic. Some have passionately speculated that we might end up with 10% of the Internet being on a “normal” user-generated basis and the remaining 90% will be due to generative AI-produced content.

There doesn’t seem to be a sound basis for this contention; it is seemingly concocted out of thin air. Assume anyway that this occurred. If we take the existing 100 ZB as a base and assume it is essentially all user-generated content (well, that’s debatable), it means that we would have to find ourselves looking at a 1,000 ZB-sized Internet. That’s 900 ZB of generative AI-produced content and 100 ZB of user-generated content.

We would have taken today’s ocean of presumed by-hand content and somewhat dwarfed it in comparison to the totality of the generative AI-produced Internet seas.
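The back-of-envelope arithmetic above can be made explicit. A minimal sketch using the figures assumed in the discussion (a 100 ZB Internet of essentially user-generated content, a 1 ZB "pebble" scenario, and the speculated 90/10 split):

```python
# Back-of-envelope sizing using the assumed figures from the discussion.
EXISTING_ZB = 100  # assumed current Internet size, essentially all user-generated

# Scenario 1: generative AI adds a "pebble" of 1 ZB per year.
pebble_share = 1 / (EXISTING_ZB + 1)
print(f"1 ZB added: AI content is {pebble_share:.1%} of the total")

# Scenario 2: the speculated 90/10 split, holding user content at 100 ZB.
# If 100 ZB is only 10% of the total, the total must be 1,000 ZB.
total_zb = EXISTING_ZB / 0.10
ai_zb = total_zb - EXISTING_ZB
print(f"90/10 split: total {total_zb:.0f} ZB, of which {ai_zb:.0f} ZB is AI-produced")
```

The two scenarios differ by nearly three orders of magnitude, which is exactly why pinning down the actual numbers matters.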

Speculation upon conjecture.

So, which shall it be?

Are we going to have generative AI produce a pebble or will it multifold increase the size of the Internet?

Nobody can say for sure either way. We should be exploring those key numbers in a serious vein so that discussions on the topic are rooted in something tangible. Not doing so makes the chatter a bit vacuous, almost like the boy who cried wolf.

Let’s consider the next factor, and do keep in mind that all of these factors are interrelated and must be considered as a collective and not simply on an individual basis.

Indexing Of The Internet

You likely realize that when you do an Internet search, you are using someone’s search engine that has been attempting to routinely index the contents of the Internet. I’m betting that you might be under the impression that you are gaining access to the preponderance of the Internet when you use a popular search engine.

That’s highly unlikely.

Some estimates are that only a fraction of the Internet has been indexed, perhaps less than 1% or so (some say it is up to 5% or maybe slightly higher; it isn’t at the level that most people generally assume such as say 50% or 90%). Again, these numbers vary but are nonetheless relatively quite small. The gist is that you are almost always unaware of a huge proportion of the Internet.

Why is that significant in this context?

Because the added content that generative AI will presumably produce is potentially going to be subject to a similar indexing consideration. It could be that almost none of the added content will be indexed. In that case, you probably won’t ever see it.

The other side of the coin supposes that such “artificial” content will be indexed, regrettably to the neglect of “conventional” content. An argument goes that the indexes will be preoccupied with the generative AI content and will neglect the conventional content. Thus, even if the generative AI content isn’t overwhelming the Internet, it will seem like it is due to the disproportionate indexing of such content.

In the end, it could be that trying to find conventional content will be like trying to find a needle in a haystack. The enormous clutter of the generative AI-produced content will be akin to overwhelming oversized and outstretched bales of hay. Somewhere in there will be those precious tiny gems of conventional content if you can find them.

You might immediately be thinking that the index makers ought to be figuring out how to deal with this dilemma. If they can do the indexing in the “right way” then it pretty much doesn’t matter how much generative AI content gets produced. It will sit in the side streets and alleyways of the Internet and not especially see the light of day anyway.

Let’s continue our exploration to see how this indexing issue further arises.

Gauging What Is Generative AI-Produced Content

Okay, if generative AI is going to go hog-wild and produce tons and tons of Internet content, we logically can cope with this as long as we can distinguish such content from “conventional” content.

Seems easy-peasy as a solution.

Any search engine that does indexing would merely detect whether the content is generative AI produced versus conventionally produced. The index could then either opt to not include the generative AI materials or mark in the index that the content is from generative AI. Users of such a search engine could then specify during a search whether they want to encompass the generative AI content or skip it.
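If a reliable provenance label existed, the filtering would indeed be trivial. A sketch of that hypothetical world, where each indexed entry carries an is_ai_generated flag (an assumed flag, since no dependable way of setting it exists today):

```python
from dataclasses import dataclass

@dataclass
class IndexEntry:
    url: str
    text: str
    is_ai_generated: bool  # hypothetical flag; no reliable way to set this today

# A toy index with one conventional and one AI-produced entry.
index = [
    IndexEntry("https://example.com/a", "Essay on Lincoln", False),
    IndexEntry("https://example.com/b", "Generated essay on Lincoln", True),
]

def search(entries, query, include_ai=True):
    """Return entries matching the query, optionally skipping AI-produced ones."""
    return [e for e in entries
            if query.lower() in e.text.lower()
            and (include_ai or not e.is_ai_generated)]

print(len(search(index, "lincoln")))                    # both entries
print(len(search(index, "lincoln", include_ai=False)))  # conventional entry only
```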

Case closed.

Sorry to say that this isn’t especially viable.

Here’s why.

Trying to distinguish generative AI outputs from conventional content is not easy and is ultimately likely to be impractical. I’ve covered in my column that those alleged detection apps are a false promise and essentially a misleading charade, see the link here.

In brief, the AI makers of generative AI keep enhancing their AI to produce content that is by design indistinguishable from conventional human-generated content. That’s an intentional goal. The detection apps are faced with a continual cat-and-mouse gambit. Furthermore, those detection apps are based on all manner of assumptions about what distinguishes generative AI outputs, though those assumptions are often incorrect or only based on probabilities. The end result is that any detection app is only guessing the likelihood and is not able to assuredly make an ironclad indication.

Bottom-line is that we are unlikely to be able to determine what is generative AI content unless there is some clearcut indication provided by the generative AI provider, though that is not ironclad either. Again, see my coverage of this complex topic, discussed at the link here. The idea being pursued is that a watermark would be secretly included in the generated content. You could in theory use the watermark to ferret out whether the content was via generative AI. The downside is that with various changes to the output, it will be relatively easy to mess up the watermark. The content will then fail to abide by the watermark and the signpost that was supposed to tip us off is now defeated.
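The fragility can be illustrated with a deliberately toy version of statistical watermarking (this is not any real scheme; the md5-parity "green list" is purely an illustrative stand-in). The watermarked text starts at a perfect signal, and modest editing degrades it below what a detector could confidently flag:

```python
import hashlib

def is_green(word):
    # Parity of the word's md5 digest stands in for a secret "green list"
    # of the kind proposed for statistical watermarking (toy version only).
    return int(hashlib.md5(word.encode()).hexdigest(), 16) % 2 == 0

vocab = [f"word{i}" for i in range(200)]
green_vocab = [w for w in vocab if is_green(w)]
red_vocab = [w for w in vocab if not is_green(w)]

# A "watermarked" output is built entirely from green-list words.
watermarked = [green_vocab[i % len(green_vocab)] for i in range(60)]

def green_fraction(words):
    return sum(is_green(w) for w in words) / len(words)

print(green_fraction(watermarked))  # 1.0 by construction

# Light editing: swap out every third word. The statistical signal degrades,
# so a threshold-based detector can no longer make an ironclad call.
edited = [red_vocab[i % len(red_vocab)] if i % 3 == 0 else w
          for i, w in enumerate(watermarked)]
print(green_fraction(edited))  # noticeably below 1.0
```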

Some believe that we need new AI laws to deal with this. Make laws that require generative AI apps to include watermarks. In addition, make it unlawful to try and defeat those watermarks. This might be the only means to curtail those cat-and-mouse techie games. I’ve examined those proposals in my column and pointed out that though the precepts sound reasonable, the devil is in the details of implementing these schemes and enforcing these policies.

All in all, returning to the concerns about the bloating of the Internet via generative AI content, we aren’t, unfortunately, going to be able to whisk away the issue by simply noting what is generative AI content versus what is not. The problem is harder than that.

What’s Wrong With Generative AI Content Anyway

All of this concern about the tsunami of generative AI-produced content is usually predicated on one rather essential assumption, namely that the content will be faulty.

If the content is good, we presumably should be pleased with the added postings to the Internet. Sure, the volume might be high, but if the information being posted is worthwhile then it is simply a matter of having more good stuff to sift through. The more the merrier, as they say.

The key consideration entails whether or not the generative AI-produced content will be informative versus perhaps filled with errors, falsehoods, misinformation, disinformation, and the like. This brings up several facets.

First, it could be that generative AI will be further advanced such that the chances of producing foul-outputted essays are extremely low. We would seemingly be remiss if we wanted to somehow ban all generative AI from being posted to the Internet, assuming that by and large the generative AI-outputted essays are reasonably correct the preponderance of the time. Wishing to reject all outputted essays would be akin to the classic tossing out the baby with the bathwater (an old saying, probably nearing retirement).

Second, as I’ve discussed in my column at the link here, there is a rising interest in AI add-on apps that can double-check generative AI-outputted essays. The AI double-checkers could be used before people post generative AI content to the Internet. Even if people don’t pre-screen the content that they wish to post, the same tools can be used on already posted content. In short, double-checking can be done regardless of what the content source is, such that we should naturally remain suspicious of human-generated content too.

Third, as alluded to in my aforementioned point, the belief often seems to be that human-generated content is always good, while generative AI content is always bad. A nutty false assumption. There is plenty of human-generated content that contains all manner of errors, falsehoods, and made-up junk. We are not safe merely because a human happened to create content by hand.

All content, whether human-devised or generative AI devised, needs to be subjected to scrutiny.

Will People Post Generative AI Content To The Internet

Another factor to consider is whether people are indeed going to post generative AI content to the Internet, and if so, at what magnitude.

Here’s what I mean.

People are using generative AI such as ChatGPT for a wide variety of purposes. They might use generative AI to stimulate ideas about a problem they are facing. They might use it to do research. They might use it to provide a draft of material that they intend to edit and then send it to someone via email. And so on.

The crux is that a lot of generative AI use might have nothing whatsoever to do with someone aiming to post the resultant outputted essays onto the Internet. We seem to often fall into the trap that just because someone uses generative AI, they are desirous of flooding the Internet with the outputs produced.

We don’t yet know how much of the time people will use generative AI for their own uses and ergo opt to not post the outputs to the Internet.

To clarify, I am not suggesting that people won’t be posting generative AI outputs to the Internet. They most certainly will. People who run online blogs will undoubtedly make use of generative AI. Many uses of generative AI to produce content for the Internet are assuredly going to occur. Etc.

Thus, one consideration is that we might not have as much generative AI content getting posted to the Internet as might otherwise be assumed will occur. For those pundits assuming that we are looking at a nonstop unbridled all-hands posting data apocalypse, we don’t know if that’s what is going to happen. Of course, even if only a modicum of people opt to do such postings, this could still be a tremendous amount of added content being heaped onto the Internet.

A twist is whether the generative AI outputs will potentially be automatically posted to the Internet.

This is an easy trick to pull off. You can simply make it so that any output from your generative AI app gets straightaway posted to the Internet. You can even put this into a loop. Have a series of prompts that are pre-canned. Feed those into a generative AI app. The generative AI app is programmed to immediately post the outputted essays to the Internet.

Voila, you have a perpetual motion machine for generating data content for the Internet.
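The loop just described can be sketched in a few lines. To be clear, generate_essay() and post_to_internet() below are hypothetical stand-ins; no real generative AI service or publishing API is being invoked:

```python
# Illustrative sketch of the auto-posting loop described above. Both helper
# functions are hypothetical stand-ins, not real APIs.

canned_prompts = [
    "Tell me about Abraham Lincoln",
    "Explain the history of the telegraph",
    "Describe the water cycle",
]

def generate_essay(prompt):
    # Stand-in for a call to a generative AI app.
    return f"An essay responding to: {prompt}"

def post_to_internet(essay):
    # Stand-in for a call that publishes the text to some website.
    print(f"Posted {len(essay)} characters")

# The "perpetual motion machine": every output is published immediately,
# with no human review in between. Wrap this in a scheduler and it runs forever.
for prompt in canned_prompts:
    post_to_internet(generate_essay(prompt))
```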

Where though are the postings going to go?

Any websites or other online locales that allow the posting of this type of machine gun-spewing content are potentially going to be held accountable for what they are allowing to arise. Presumably, people will avoid those sites. Or those sites will be earmarked by search engines and indexing algorithms. That generative AI content gets posted is one aspect; where the postings will land is another, equally crucial one.

Maybe Paywall Approaches Will Be Revered

A commonly voiced assertion is that we will eventually become weary of the Wild West of the Internet. People will gravitate toward trusted online sources. They will purposely avoid other sketchy or unknown areas of the Internet.

Along those lines, the thinking goes that people will be willing to pay to access trusted sources. Whereas today there is still a huge debate about the profitability of paywalled content, the flood of generative AI content is considered a boon for the paywall philosophy. The worse that things get in terms of finding trustworthy content on the Internet, the more valuable the paywalled content becomes (assuming, of course, that the paywalled content is more mindfully scrutinized).

The irony partially is that the content behind the paywall might consist mightily of generative AI-produced content. Assuming that the added value is that the paywall provider is screening the content, they are essentially doing the double-checking that I earlier mentioned. They don’t have to necessarily generate the content. They just need to ensure that the content is worthy of trust.

There are disagreements about this predicted future. Perhaps, in lieu of paywalls, you have to encounter ads or sponsor notifications, and doing so gets you to the trusted content. Many other possibilities exist.

The Multi-Modal Morass Of Generative AI Awaits

I have been focusing herein on text-related generative AI. That is the text-to-text or text-to-essay variety of generative AI, such as ChatGPT.

One of my predictions has been that we will soon find ourselves awash in multi-modal generative AI, see my explanation at the link here. We already are witnessing text-to-images, text-to-audio, text-to-video, and other variants of the types or modes of outputted results from generative AI. The next step is you will be able to get multi-modal outputs.

For example, you enter a prompt into generative AI and ask about Abraham Lincoln. The generative AI produces an essay for you. In addition, several images are generated of Lincoln, showing him in poses that heretofore had not been posted or published. An audio transcript is generated that has what seems to be a Lincoln-like voice. A video is generated that showcases the essay, including a montage of pictures and images that go along with the outputted text.

Welcome to the world of multi-modal generative AI.

Exciting, for sure.

But maybe not quite so exciting if you believe that this is further fodder as content that can be posted to the Internet.

In essence, we won’t be fretting solely about the text that might be erroneous, we also will need to do the same for all other modes of output. Audio files should be suspected as containing falsehoods, images might falsely portray matters, and videos are also going to be worrisome.

If you hadn’t already included in your calculations about the bloating of the Internet the multi-modal conflagration, you might want to ratchet up your numbers and your handwringing.

Vicious Or Virtuous Cycles Of Generative AI

I’ve got a factor for you that might cause a bit of mind-bending. Hang on.

In this saga of the flooded Internet, we assume that generative AI is the villain. Generative AI is how all this error-prone and made-up content is going to be produced. Generative AI is bad to the bone.

Suppose though that we look at this in a different light.

It could be that generative AI is able to produce the most strident and strongest valid content. Meanwhile, the content generated by the human hand is construed as much less trustworthy. Generative AI as the baddie shifts into generative AI as the hero.

Think about that.

I’ve got another fun twist for you.

Let’s assume that generative AI is being data trained via content that is on the Internet. If we make the assumption too that generative AI content is going to be posted to the Internet, either by human choice directly or via an automatic mechanism, we are going to find ourselves enmeshed in an intriguing cycle.

The content produced by generative AI becomes the source material for further data training in generative AI. A spiral occurs. More and more generative AI-produced content is posted to the Internet, which was based on data training of content already produced by generative AI.

What does this echo chamber of “generative AI feeding into generative AI” eventually do to the Internet and humankind all told?

One viewpoint is that this is a horrid race to the bottom. Errors in generative AI outputs will get magnified. Each new iteration of generative AI will consume the prior errors and repeat them, again and again. At some point, the chances of figuring out where the errors are will be daunting. Dismal. Disheartening.
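The race-to-the-bottom viewpoint can be given a toy numerical form. The rates below are purely illustrative assumptions, not measurements; the point is only that if each training generation inherits prior errors and adds its own, the error rate compounds:

```python
# Toy simulation of the "race to the bottom" viewpoint. Assume each training
# generation carries forward the errors already in its training data and
# introduces some new ones; both rates here are illustrative assumptions.

INHERITED = 1.0      # fraction of prior errors carried forward into training
NEW_ERRORS = 0.02    # new errors introduced per generation (2% of content)

error_rate = 0.02    # assumed starting error rate of the first model's output
for generation in range(1, 6):
    error_rate = min(1.0, error_rate * INHERITED + NEW_ERRORS)
    print(f"Generation {generation}: ~{error_rate:.0%} of content erroneous")
```

Under these made-up rates the error rate climbs monotonically; a "virtuous cycle" would correspond to the inherited fraction dropping below one, via the kind of scrubbing mechanism discussed next.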

Another viewpoint is that if generative AI can be devised to produce valid outputs, you might have an Internet cleaning mechanism that helps to spruce up the Internet. When the generative AI encounters something erroneous, whether produced by AI or by human hand, the generative AI will seemingly detect and overcome this falseness. With generative AI doing this over and over again, it is as though you are constantly mowing the lawn and effectively reducing the nature and prominence of the weeds.

That might sound reassuring, except for the big and looming question of what precisely constitutes errors or falsehoods. This scrubbing machine could inadvertently cause valid content to be belittled or falsely accused of being error-prone. We need to be mindful of those false positives and false negatives when considering these types of mechanisms.

Will generative AI be a vicious cycle or a virtuous cycle?

Time will tell.

Conclusion

The numerous and at times panicky exhortations about generative AI swamping the Internet ought to be carefully examined. Lots of scenarios can readily be envisioned. Doom and gloom is not the only avenue. Anybody professing to predict what is going to happen should be upfront about the assumptions that they are making.

There are mitigating factors that will determine where the future of generative AI is going to go. AI Ethics and AI Law will have a decided hand in this, along with the overall perceptions of society at large.

A final remark for now.

Marcus Aurelius famously stated: “Never let the future disturb you. You will meet it, if you have to, with the same weapons of reason which today arm you against the present.”

Let’s make sure our reasoning of today can step up to the challenges of an AI-laden future.

Source: https://www.forbes.com/sites/lanceeliot/2023/02/23/is-it-true-that-generative-ai-chatgpt-will-flood-the-internet-with-infinite-content-asks-ai-ethics-and-ai-law/