The campaign for binding international limits on the abuse of AI has reached the United Nations, as more than 200 leading politicians, scientists, and thought leaders, including 10 Nobel Prize winners, issued a warning about the technology's risks.
The statement, released Monday at the opening of the United Nations General Assembly’s High-Level Week, is being called the Global Call for AI Red Lines. It argues that AI’s “current trajectory presents unprecedented dangers” and demands that countries work toward an international agreement on clear, verifiable restrictions by the end of 2026.
Nobel Prize winners lead plea at the U.N.
The plea was revealed by Nobel Peace Prize laureate and journalist Maria Ressa, who used her opening address to urge governments to “prevent universally unacceptable risks” and define what AI should never be allowed to do.
Signatories of the statement include Nobel Prize recipients in chemistry, economics, peace, and physics, alongside celebrated authors such as Stephen Fry and Yuval Noah Harari. Former Irish president Mary Robinson and former Colombian president Juan Manuel Santos, who is also a Nobel Peace Prize winner, lent their names as well.
Geoffrey Hinton and Yoshua Bengio, popularly known as “godfathers of AI” and winners of the Turing Award, which is widely considered the Nobel Prize of computer science, also added their signatures to the statement.
“This is a turning point,” said Harari. “Humans must agree on clear red lines for AI before the technology reshapes society beyond our understanding and destroys the foundations of our humanity.”
Past efforts to raise the alarm about AI have often focused on voluntary commitments by companies and governments. In March 2023, more than 1,000 technology leaders, including Elon Musk, called for a pause on developing powerful AI systems. A few months later, AI executives such as OpenAI’s Sam Altman and Google DeepMind’s Demis Hassabis signed a brief statement equating the existential risks of AI to those of nuclear war and pandemics.
AI stokes fears of existential and societal risks
Just last week, AI was implicated in cases ranging from a teenager's suicide to the manipulation of public debate.
The signatories of the call argue that these immediate risks may soon be eclipsed by larger threats. Commentators have warned that advanced AI systems could lead to mass unemployment, engineered pandemics, or systematic human-rights violations if left unchecked.
The proposed red lines include banning lethal autonomous weapons, prohibiting self-replicating AI systems, and ensuring AI is never given control over nuclear weapons.
“It is in our vital common interest to prevent AI from inflicting serious and potentially irreversible damages to humanity, and we should act accordingly,” said Ahmet Üzümcü, the former director general of the Organization for the Prohibition of Chemical Weapons, which won the 2013 Nobel Peace Prize under his leadership.
More than 60 civil society organizations have signed the letter, including the UK-based think tank Demos and the Beijing Institute of AI Safety and Governance. The effort is being coordinated by three nonprofits: the Center for Human-Compatible AI at the University of California, Berkeley; The Future Society; and the French Center for AI Safety.
Despite recent safety pledges from companies like OpenAI and Anthropic, which have agreed to government testing of models before release, research suggests that firms are fulfilling only about half of their commitments.
“We cannot afford to wait,” Ressa said. “We must act before AI advances beyond our ability to control it.”
Source: https://www.cryptopolitan.com/nobel-winners-to-curtail-dangerous-ai-uses/