Will Artificial Intelligence Save Humanity—Or End It?

In brief

  • An online panel showcased a deep divide between transhumanists and technologists over AGI.
  • Author Eliezer Yudkowsky warned that current “black box” AI systems make human extinction unavoidable.
  • Max More argued that delaying AGI could cost humanity its best chance to defeat aging and prevent long-term catastrophe.

A sharp divide over the future of artificial intelligence played out this week as four prominent technologists and transhumanists debated whether building artificial general intelligence, or AGI, would save humanity or destroy it.

The panel, hosted by the nonprofit Humanity+, brought together one of the most vocal AI “Doomers,” Eliezer Yudkowsky, who has called for shutting down advanced AI development, alongside philosopher and futurist Max More, computational neuroscientist Anders Sandberg, and Humanity+ President Emeritus Natasha Vita‑More.

Their discussion revealed fundamental disagreements over whether AGI can be aligned with human survival or whether its creation would make extinction unavoidable.

The “black box” problem

Yudkowsky warned that modern AI systems are fundamentally unsafe because their internal decision-making processes cannot be fully understood or controlled.

“Anything black box is probably going to end up with remarkably similar problems to the current technology,” he said. He argued that humanity would need to move “very, very far off the current paradigms” before advanced AI could be developed safely.

Artificial general intelligence refers to a form of AI that can reason and learn across a wide range of tasks, rather than being built for a single job like text, image, or video generation. AGI is often associated with the idea of the technological singularity, because reaching that level of intelligence could enable machines to improve themselves faster than humans can keep up.

Yudkowsky pointed to the “paperclip maximizer” analogy popularized by philosopher Nick Bostrom to illustrate the risk. The thought experiment features a hypothetical AI that converts all available matter into paperclips, pursuing a single objective at the expense of humanity. Adding more objectives, Yudkowsky said, would not meaningfully improve safety.

Referring to the title of his recent book on AI, “If Anyone Builds It, Everyone Dies,” Yudkowsky said: “Our title is not like it might possibly kill you. Our title is, if anyone builds it, everyone dies.”

But More challenged the premise that extreme caution offers the safest outcome. He argued that AGI could provide humanity’s best chance to overcome aging and disease.

“Most importantly to me, is AGI could help us to prevent the extinction of every person who’s living due to aging,” More stated. “We’re all dying. We’re heading for a catastrophe, one by one.” He warned that excessive restraint could push governments toward authoritarian controls as the only way to stop AI development worldwide.

Sandberg positioned himself between the two camps, describing himself as “more sanguine” while remaining more cautious than transhumanist optimists. He recounted a personal experience in which he nearly used a large language model to assist with designing a bioweapon, an episode he described as “horrifying.”

“We’re getting to a point where amplifying malicious actors is also going to cause a huge mess,” Sandberg said. Still, he argued that partial or “approximate safety” could be achievable. He rejected the idea that safety must be perfect to be meaningful, suggesting that humans could at least converge on minimal shared values such as survival.

“So if you demand perfect safety, you’re not going to get it. And that sounds very bad from that perspective,” he said. “On the other hand, I think we can actually have approximate safety. That’s good enough.”

Skepticism of alignment

Vita-More criticized the broader alignment debate itself, arguing that the concept assumes a level of consensus that does not exist even among longtime collaborators.

“The alignment notion is a Pollyanna scheme,” she said. “It will never be aligned. I mean, even here, we’re all good people. We’ve known each other for decades, and we’re not aligned.”

She described Yudkowsky’s claim that AGI would inevitably kill everyone as “absolutist thinking” that leaves no room for other outcomes.

“I have a problem with the sweeping statement that everyone dies,” she said. “Approaching this as a futurist and a pragmatic thinker, it leaves no consequence, no alternative, no other scenario. It’s just a blunt assertion, and I wonder whether it reflects a kind of absolutist thinking.”

The discussion included a debate over whether closer integration between humans and machines could mitigate the risk posed by AGI—something Tesla CEO Elon Musk has proposed in the past. Yudkowsky dismissed the idea of merging with AI, comparing it to “trying to merge with your toaster oven.”

Sandberg and Vita-More argued that, as AI systems grow more capable, humans will need to integrate or merge more closely with them to better cope with a post-AGI world.

“This whole discussion is a reality check on who we are as human beings,” Vita-More said.


Source: https://decrypt.co/356554/will-artificial-intelligence-save-humanity-end