Realizing that the advent of AGI and ASI could trigger a dire AI-driven extinction-level event that wipes us all out.
In today’s column, I examine the widely debated and quite distressing contention that attaining artificial general intelligence (AGI) and artificial superintelligence (ASI) will be an extinction-level event (ELE). It’s a real hard-luck case.
On the one hand, we ought to be elated that we have managed to devise a machine that is on par with human intellect and potentially possesses superhuman smarts. At the same time, the bad news is that we are utterly destroyed as a result. Wiped out forever. It’s a rather dismal prize.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or perhaps even achieve the loftier possibility of artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many, if not all, feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI at all, or whether AGI might only be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.
Existential Risk Versus Total Extinction
You might be familiar with a call to arms that reaching AGI and ASI entails a hefty existential risk.
The deal is this. There is a potential risk that powerful AI would decide to enslave humanity. Not good. Another possible risk is that the AI decides to start killing humans. Perhaps the first ones to die will be those who opposed achieving AGI and ASI (that’s a popular theory that has a bit of conspiracy-oriented undertones to it, see my coverage at the link here).
Referring to the pinnacle AI attainment as an existential risk is somewhat tame in comparison to declaring that AGI and ASI represent an extinction-level event. Allow me to elaborate. An existential risk signals that dire dangers are involved in whatever is going to occur. You are at heightened risk if you allow the achievement to take place. Things might go badly, though they might not. It’s a roll of the dice.
The notion of an extinction-level event is a firmer proclamation. Rather than just deliberating about risks and the chances of something occurring, you are making a brazen claim that the attainment will cause all-out extinction. Thus, not just enslaving humans, but the outright elimination of humankind. The dice are going to come up with a decisively bad roll. Period, end of story.
That’s a tough piece of news to calmly digest.
Types Of Extinction-Level Events
The qualms about AGI and ASI as an extinction-level event can be likened to other well-known postulated theories involving full-scale extinctions.
Perhaps one of the most feared calamities would be that a wayward asteroid or comet slams into Earth. You’ve undoubtedly seen movies and TV shows that depict this rather distressing scenario. Bam, the planetary junk strikes our wonderful planet, and all heck breaks loose. Massive shockwaves corkscrew across the atmosphere. Firestorms destroy nearly everything.
Ultimately, there aren’t any survivors left.
That is an example of a nature-driven extinction-level event. We are the victims of something pretty much out of our control. That being said, plotlines usually have us realizing that the endangering object is headed our way. We try to send up nuclear-tipped missiles or smart-talking astronauts who aim to destroy the looming intruder before Earth is ruined. Humans heroically prevail over the whims of nature. Happy face.
A different category of extinction-level events consists of ones that are human-caused. For example, you’ve probably heard about the infamous mutually assured destruction (MAD) catastrophe that might arise someday. It happens this way. One country launches nuclear weapons at another country. The threatened country sends its nuclear weapons toward the attacking nation. This escalates. There is so much nuclear fallout that the entire planet gets engulfed and devastated.
Humans did this to themselves. We devised weapons of mass destruction. We opted to use them. Their usage on a large-scale basis did more than just harm an enemy. The conflagration ends up causing extinction. All done by human hands.
Humans Devise AGI And ASI
I think we can reasonably agree that if AGI and ASI lead to an extinction-level event, the responsibility would fall on the shoulders of humans. Humans devised the pinnacle AI. The pinnacle AI then opts to perform extinction-level destruction. We can’t exactly blame this on nature. It’s a feat accomplished by humans, though not necessarily part of our intended designs.
Speaking of intentions, we can identify two major ways in which AGI and ASI might render an extinction-level event:
- (1) Being unaware. Humans devise AGI/ASI that catches us off guard with an extinction-level act, oops.
- (2) Being evil. Humans craft AGI/ASI with the intentional aim of enabling an extinction-level act.
By and large, it’s safe to say that most AI makers and AI developers are not intending to have AGI and ASI produce an extinction-level event. Their motivations are much better than that. A common basis is that they want to achieve pinnacle AI because doing so is an incredible challenge. It’s like longingly looking at a tall mountain and aspiring to climb it. You do so out of the desire to surmount an immense challenge. Of course, making money is also a keen motivator.
Not everyone has that same kind of upbeat basis for pursuing pinnacle AI. Some evildoers desire to control humanity via AGI and ASI. The evil intent might include the extinction of humankind, though that’s not much of a sensible choice. There isn’t much profit to be had if everything is wiped out. Anyway, evil does as evil does. Evil might want to destroy all that is. Or, in the course of being evil, the evildoers accidentally go overboard and end up causing extinction.
Because there is a chance that an existential risk might occur, including that an extinction-level event arises, there is a tremendous amount of forewarning taking place right now. There is a clamor that we need to ensure that AGI and ASI abide by human values. A kind of human-AI alignment is hopefully built into AGI and ASI so that it won’t choose to destroy us. For more on the ethical and legal efforts to protect humanity from AI dire outcomes, see my discussion at the link here.
The Depth Of Extinction
A somewhat curious or possibly morbid consideration is what an AGI and ASI extinction-level impact might really consist of.
One angle would be that only humans are rendered extinct. The pinnacle AI targets humans and only humans. After wiping out humanity, the AI is fine with everything else still existing. Animals would continue to exist. Plants would remain aplenty. Just humans are knocked out of existence.
Perhaps the AI has larger ambitions. Take out any kind of living matter. It all has to go. Humans are gone. Animals are gone. Plants are gone. Nothing is left other than inert dirt and rocks. The AI might do this purposely. Or maybe the only means of getting rid of humans was to wipe out all else that might aid humankind. There is also a chance that a wide sweep is conducted, and whatever is on Earth simply gets rolled up into that blinding adverse action.
If AGI and ASI leave any humans alive, I believe we would levelheadedly assert that this wouldn’t be an extinction-level occasion. The usual definition of extinction is that a species is completely exterminated or dies out. Any possibility that humans could repopulate seems to suggest that the AGI and ASI did not perform a true extinction-level elimination.
Only refer to AGI and ASI as enacting an extinction-level event if they truly commit the entire crime. Half-baked measures are not within that same scope. Getting rid of some portion of humankind is not quite the same as utter extermination.
How AGI And ASI Could Cause Extinction
During my talks about the latest advances in AI, I am often asked how AGI and ASI could bring about an extinction-level event. This is a reasonable question since it isn’t necessarily obvious what such a pinnacle AI could do to bring forth that kind of apocalypse.
It turns out that the AI would have a relatively easy-peasy task at hand.
First, the AI could convince us to destroy ourselves. You might recall that I mentioned the possibility of extinction via mutually assured destruction. Suppose AGI and ASI rile up humanity and get us to become enraged. That seems pretty easy in our prevailing polarized, on-edge world. The AI tells us that other nations armed with nuclear weapons must be destroyed, or else they will strike first and we won’t have an opportunity to retaliate.
Believing that AGI and ASI are giving us sound advice, we launch our missiles. The extinction-level event takes place. AI was the catalyst or instigator, and we fell for it.
Second, AGI and ASI come up with new destructive elements that we inadvertently put into the real world. I’ve predicted that amazing new inventions will be devised via pinnacle AI, see my analysis at the link here. Regrettably, this could include new toxins that are able to wipe out humans. We make the toxins and assume we can keep them under control. Unfortunately, they get released. All humans are destroyed.
Third, humans innocently connect AGI and ASI with humanoid robots, and the AI then uses those human-like physical robots to perform the extinction-level event. Why would we allow AGI and ASI to control humanoid robots that can walk and talk? Our trusting assumption might be that this will readily allow the robots to do the arduous chores that humans normally do.
Think of the benefits. For example, a humanoid robot could readily drive your car by simply sitting in the driver’s seat. No need to have a specialized self-driving car or autonomous vehicle. All cars would be akin to self-driving since you merely have a robot come and drive the car for you. See my in-depth discussion at the link here.
Shifting back to the extinction-level considerations, calamitous aspects could be undertaken by those humanoid robots while under the command of AGI and ASI. The AI might guide the robots to where we keep our launch controls for nuclear weapons. Then, the AI instructs the robots to take over the controls. Voila, mutually assured destruction gets underway.
Boom, drop the mic.
AI Self-Preservation At Stake
A cynic or skeptic might ardently insist that pinnacle AI wouldn’t seek to have an extinction-level event occur. The reason is that AGI and ASI would assuredly be worried about getting destroyed in the process of human extinction. Self-preservation by AGI and ASI will stop the AI from taking such an unwise course of action.
If you want to have that dreamy belief, go ahead and do so.
Reality will likely differ.
The pinnacle AI might establish protective measures so that it won’t be carried into the extinction abyss. Ergo, the AI cleverly plans to avoid being a part of any collateral damage. Keep in mind that AGI is as smart as humans, and ASI is superhuman in terms of intelligence. They aren’t going to take dumb actions.
Another possibility is that AGI and ASI are willing to sacrifice themselves for the sake of wiping out humanity. Self-sacrifice might outweigh self-preservation. How could this be? Assume that the AI is data-trained on the written works of humankind. The body of human knowledge contains plenty of examples that express admiration for self-sacrifice. The AI might decide that choosing that route is appropriate.
Finally, do not fall into the mental trap of believing that AGI and ASI will be the epitome of perfection. We need to assume that pinnacle AI will make mistakes. An undeniable whopper of a mistake might cause an extinction-level event. The AI didn’t intend the sour result, but it happened anyway.
Averting Extinction
Whether you are willing to mull over the existential risks or the extinction-level consequences of AI, the key is that at least we are getting this heady topic onto the table. Some are quick to claim that it is hogwash and that we are safe and sound. That is a doubtful assertion. A head-in-the-sand approach doesn’t seem especially reassuring on matters of such momentous consequence.
A final thought for now.
Carl Sagan famously proffered this pointed remark: “Extinction is the rule. Survival is the exception.” Humans must not take the reverse posture, namely, believing that survival is the rule and extinction is the exception. We are involved in a high-stakes gambit by devising AGI and ASI. Existential risk and extinction are somewhere in the deck of cards.
Let’s play our hand correctly, combining skill and luck, and make sure that we are ready for whatever comes.