Topline
Tackling the “risk of extinction” posed by artificial intelligence should be a global priority on par with averting catastrophes like nuclear war, a group of tech leaders and experts warned on Tuesday, the latest high-profile call for caution as concern about the potential harms of AI grows and increasingly advanced systems are rolled out.
Key Facts
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the one-sentence statement.
The statement was published by the Center for AI Safety, a U.S.-based nonprofit that works to reduce societal-scale risks from AI by conducting safety research and advocating for safety standards.
The statement has been signed by a host of high-profile executives and experts working in the field, including Sam Altman, Demis Hassabis and Dario Amodei, the chief executives of OpenAI (creator of the ChatGPT bot), Google DeepMind and Anthropic, respectively.
Other signatories include specialists from companies such as Google and Microsoft, as well as respected computer scientists like Yoshua Bengio and Geoffrey Hinton, both considered pioneers of the field whose work made possible some of the applications being rolled out today.
Executives from Meta, a strong player in AI and the parent company of Facebook, WhatsApp and Instagram, have not signed the statement, the Center for AI Safety said.
“Mitigating the risk of extinction from AI will require global action,” said Dan Hendrycks, director of the Center for AI Safety, stressing that a level of effort and coordination comparable to that marshaled against the risk of nuclear war will be needed to properly tackle the future risks of AI.
Crucial Quote
“It’s not too early to put guardrails in place and set up institutions so that AI risks don’t catch us off guard,” Hendrycks said in a statement accompanying the 22-word warning. Growing societal concern about the potential impacts of AI is reminiscent of the early days of nuclear power, he said, and “we need to be having the conversations that nuclear scientists were having before the creation of the atomic bomb.” While the pressing issues AI systems already pose must be addressed, including their use to spread misinformation or to eliminate millions of jobs, Hendrycks said it is important that “the AI industry and governments around the world… seriously confront the risk that future AIs could pose a threat to human existence.”
News Peg
The terse statement is the latest in a string of high-profile warnings from leaders across civil society, academia and industry about the potential risks AI poses, including a letter signed by the likes of Elon Musk and Steve Wozniak calling for a six-month pause to take stock of those risks. Though many signatories, including Skype co-founder Jaan Tallinn and acclaimed British physicist Martin Rees, have spoken about the existential risk from AI for years, the widespread rollout of generative AI systems like ChatGPT has added urgency to the debate, particularly as companies race to develop, build and deploy better systems faster than their rivals.
Contra
Few people familiar with the field claim AI carries no risks, and many of those risks, including the exacerbation of existing biases and inequalities, the spread of misinformation, political disruption and upheaval in the economy and job market, are already being felt. There is far less agreement, however, on whether AI systems will one day threaten the very survival of humanity. Such threats, known as existential risks, range from the more tangible, such as nuclear war, asteroid impacts, climate change and pandemics, to the more esoteric, such as an alien attack or runaway nanotechnology. Those who favor classifying AI as an existential risk vary in how much detail they give for their reasoning, and a great deal of it is, necessarily, speculative. Others argue that such a position is irresponsible, prioritizing a speculative future problem at the expense of very real current ones. And while AI systems are growing more capable, many point to their frequent failure at even simple tasks as a reason to discount AI as an existential threat.
Further Reading
Elon Musk And Tech Leaders Call For AI ‘Pause’ Over Risks To Humanity (Forbes)
IBM Will Stop Hiring Humans For Jobs AI Can Do, Report Says (Forbes)