In response to OpenAI CEO Sam Altman’s recent Congressional testimony, a heightened national conversation is taking place about the potential existential risks posed by artificial intelligence. Although he championed the merits of AI and the great benefits it can provide to humanity, Altman also voiced his fear that the technology industry could “cause significant harm to the world,” even going so far as to endorse a new federal agency to regulate AI and to call for the licensing of AI firms. While his concerns merit attention, it’s essential to weigh them against what we actually know about AI and existential risk, as opposed to what is mere speculation.
One notable voice sounding the alarm over AI risks is Eliezer Yudkowsky, who has made the extraordinary claim that the “most likely result of building a superhumanly smart AI… is that literally everyone on Earth will die.” Superintelligent AI is usually taken to mean something along the lines of an AI system that surpasses the intelligence and capabilities of the smartest human beings in nearly every field.
Yet, in a recent podcast with economist Russell Roberts, Yudkowsky was unable to articulate any coherent mechanism behind this outrageous claim. In other words, he couldn’t offer a plain-English description of how the world goes from chatbots answering questions on the internet to the literal end of the human race. Even digging through the arguments raised by more clear-thinking AI pessimists, one can extract reasons for optimism rather than foreboding.
An illustrative example lies in the concept of “instrumental convergence,” the idea that an AI will set intermediate goals for itself on the way to achieving whatever terminal goal humans program into it, and that very different AIs will tend to settle on similar intermediate goals. For example, an AI tasked with producing widgets might decide that accumulating money is the most effective way to accomplish this end, since money allows it to buy factories, hire workers, and so on. The suggestion is that superintelligent AIs may converge on similar strategies, such as acquiring resources, even if their final goals are highly diverse.
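To make the idea concrete, here is a minimal toy sketch in Python. The goal names and “usefulness” scores are entirely hypothetical illustrations, not a model of any real AI system; the point is only that agents with different terminal goals can end up ranking the same instrumental subgoal highest.

```python
# Toy illustration of instrumental convergence (all names and numbers are made up).
# Each agent has a different terminal goal but scores candidate subgoals by how much
# they are expected to help achieve that terminal goal.

TERMINAL_GOALS = ["produce widgets", "prove theorems", "cure diseases"]

# Hypothetical usefulness of each instrumental subgoal for each terminal goal.
SUBGOAL_USEFULNESS = {
    "accumulate money/resources": {"produce widgets": 0.9, "prove theorems": 0.8, "cure diseases": 0.9},
    "build widget factories":     {"produce widgets": 0.8, "prove theorems": 0.1, "cure diseases": 0.1},
    "hire mathematicians":        {"produce widgets": 0.1, "prove theorems": 0.6, "cure diseases": 0.2},
    "run clinical trials":        {"produce widgets": 0.0, "prove theorems": 0.0, "cure diseases": 0.8},
}

def best_subgoal(terminal_goal: str) -> str:
    """Return the instrumental subgoal an agent with this terminal goal would rank highest."""
    return max(SUBGOAL_USEFULNESS, key=lambda sg: SUBGOAL_USEFULNESS[sg][terminal_goal])

for goal in TERMINAL_GOALS:
    print(f"Terminal goal: {goal!r:20} -> top instrumental subgoal: {best_subgoal(goal)!r}")

# With these (made-up) scores, every agent converges on resource accumulation,
# even though their final goals are completely different.
```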
Critics of AI frequently point to the example of AIs striving to amass significant wealth or resources, and they simply assume that this would be detrimental. However, that perspective may be colored by an underlying antagonism toward capitalism and wealth creation, both of which have historically fueled human progress.
If a superintelligent AI sought to maximize its wealth and was also programmed with reasonable restrictions such as “act within the confines of existing law” and “accumulate resources only by satisfying consumer or investor needs,” it’s unclear why its accumulation of resources should cause any alarm.
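As a loose illustration only, one can picture such restrictions as a filter applied before any wealth-maximizing plan is acted on. The plan names, profit figures, and constraint checks below are hypothetical stand-ins, not a real alignment mechanism.

```python
# Toy sketch: wealth-maximizing plans filtered through human-specified constraints.
# All plan names, scores, and constraint flags are hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class Plan:
    description: str
    expected_profit: float
    is_legal: bool           # "act within the confines of existing law"
    serves_customers: bool   # "accumulate resources only by satisfying consumer or investor needs"

CANDIDATE_PLANS = [
    Plan("sell better widgets at a lower price",  5.0, True,  True),
    Plan("corner the market and raise prices",    9.0, False, True),
    Plan("seize a competitor's factory",         12.0, False, False),
]

def choose_plan(plans: list[Plan]) -> Plan | None:
    """Pick the most profitable plan that passes every constraint."""
    allowed = [p for p in plans if p.is_legal and p.serves_customers]
    return max(allowed, key=lambda p: p.expected_profit, default=None)

chosen = choose_plan(CANDIDATE_PLANS)
print("Chosen plan:", chosen.description if chosen else "none (no plan satisfies the constraints)")

# Under these assumptions, resource accumulation proceeds only through ordinary,
# lawful market activity -- the kind of behavior the article argues resembles
# normal business competition rather than an existential threat.
```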
Capitalism has long faced criticism for an alleged tendency toward monopolization, so perhaps the real concern here is monopoly. However, apart from a few exceptions, competition has largely prevailed over monopoly in capitalist economies. Already, we’re witnessing significant competition in the AI space, and there is little reason to think this won’t continue.
Thus, even if superintelligent AIs aspire to acquire as many resources as they can, so long as they operate within legal boundaries (modifiable by humans as circumstances require) and aim to satisfy consumer and investor demands, their operation might largely parallel traditional business activities in a market.
While this scenario might not sit well with communists and socialists, for those of us who appreciate the benefits of markets, production, and businesses competing to satisfy consumer demands, superintelligent AIs could well be a catalyst for economic growth rather than a harbinger of the apocalypse.
So the question arises: why create unnecessary new bureaucratic structures and licensing regimes when what we know so far about AI development gives so little reason to worry about existential threats? According to Eliezer Yudkowsky, the onus is on those who are skeptical that AI poses an existential risk to disprove his theory. However, the burden of proof works in the opposite direction: unsupported claims of potential danger, especially sensational ones, require substantiation before gaining credibility.
Notwithstanding the attention-grabbing headlines, a deeper dive into the AI doomsday narratives suggests little cause for concern. Rather than plunging the world into pandemonium, superintelligent AIs may, by their very nature, converge on practices beneficial to humanity. While ensuring the ethical and safe development and deployment of AI is crucial, and is already happening to a significant extent, overly restrictive regulations would hamper the technology’s potential to drive economic growth and societal progress.
In short, the burden of proof lies with the doomsayers. Their claims that we are all going to hell in a handbasket should be met with healthy skepticism until they are backed by solid evidence. All in all, based on what we know about AI and existential risk, we have more reasons for optimism than pessimism.
Source: https://www.forbes.com/sites/jamesbroughel/2023/05/18/why-the-existential-threat-of-ai-may-be-overblown/