Earlier this week, OpenAI, creator of the popular internet chatbot ChatGPT, put out a blog post titled "Governance of superintelligence." The post apparently aimed to delineate how the company thinks public policy should respond to run-of-the-mill AIs, such as online chatbots and image generators, versus "superintelligent" AIs: the yet-to-be-created systems of the future that may come to exceed expert skill levels across most domains of human activity.
According to the authors of the post, one of whom is OpenAI CEO Sam Altman, AI models "below a significant capability threshold" should be allowed to develop absent "burdensome mechanisms like licenses or audits." For more powerful systems, however, the authors envision a different kind of regulatory regime, calling for "an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc."
While the authors undoubtedly mean well, their recommendations do not follow from their premises. They put the metaphorical cart before the horse, jumping to a foregone conclusion, in this case the creation of an international AI agency modeled on the International Atomic Energy Agency, without offering the evidence needed to support it.
Indeed, the "AI doomers," those who hypothesize that advanced AI systems might lead to the end of the world, come across as novices on matters of public policy. These are not stupid people, to be clear. Many of them are top-notch software engineers or experts in their respective fields. But they seem to lack a basic understanding of what a reasonable basis for public policy intervention looks like.
Consider this: A long-standing, bipartisan consensus exists regarding the evidence that should be in place before regulatory interventions are undertaken. These principles are perhaps best exemplified by Executive Order 12866, which President Bill Clinton signed in 1993 and which remains in effect. Each subsequent president, including President Biden, has reaffirmed the Order and, by extension, "the principles of regulation" contained in it.
Principle number one of the Order, and it is number one for a reason, reads: "Each agency shall identify the problem that it intends to address… as well as assess the significance of that problem." Mere armchair theorizing about how an artificial intelligence might hypothetically take over the world and destroy the human race is not evidence that a real problem exists. Yet, to date, this is the primary proof the doomers have offered.
This should not surprise us; even federal regulatory agencies routinely speculate that an externality or other market failure exists without offering any evidence to demonstrate the problem is real. Still, the absence of evidence should raise questions about whether the doomers' concerns deserve to be taken seriously.
Nor are those who are skeptical that AI poses significant existential risks akin to modern-day "climate deniers." Most people involved in the climate policy debate today accept that anthropogenic warming of the atmosphere is occurring. In other words, it is clear that a problem exists, and the debate has moved on to whether the benefits of mitigation efforts justify their costs.
Compare this with claims made about AI today, where often no evidence at all is brought to the table to demonstrate that a problem exists. Instead, what we hear are hysterical claims like "literally everyone on Earth will die." Meanwhile, the people making these claims cannot formulate a coherent argument to explain their reasoning.
It is also important to distinguish between existential risks posed by AI—those which threaten all of humanity—and more ordinary risks. It is almost certainly the case that AI threatens certain aspects of our privacy (through, for example, facial recognition technology), our security (through the use of autonomous drones and other advanced weapons systems), and the quality of our information (through “deep fakes” and other forms of disinformation).
Thus, while there are likely legitimate reasons to be concerned about certain aspects of AI, and perhaps even to regulate it in some domains, these are typically not the issue areas that most concern the AI doomers. Theirs is a more general worry about superintelligent systems producing unintended consequences that spiral out of control.
Altman, his colleagues at OpenAI, and others like them seem genuinely worried, but they do not seem to be thinking clearly. They are not applying a rational, evidence-based lens when formulating their public policy prescriptions. Rather than explain how logic would lead a reasonable person to support their extreme policy conclusions, they jump to those conclusions without detailing the road they took to get there.
Importantly, this does not mean we must wait for doomsday before regulating superintelligent AI, if regulation is indeed what's needed. There will almost certainly be warning signs between now and the "singularity" moment the doomers fear. Moreover, very few people are even studying this issue at present. A far better alternative to Luddite regulation is to get smart minds working on the topic and see what they find. This is already starting to happen at OpenAI and other Silicon Valley tech companies, as well as in the public arena through reporting and online discussion.
Currently, the AI doomers lack even anecdotal evidence for their position, let alone widespread confirmation that the existential threat they worry about is real. Until they can offer more than rampant speculation and bombastic, unsupported claims, the public will continue, for good reason, to roll its eyes at them. To reiterate, these people aren't fools; they simply lack an understanding of what a rational regulatory regime entails.
Source: https://www.forbes.com/sites/jamesbroughel/2023/05/24/the-first-rule-of-regulating-ai-is-to-demonstrate-a-problem-exists/