OpenAI Launching Team Preparing For AI’s ‘Catastrophic Risks,’ Like Biological And Nuclear Threats

Topline

OpenAI said Thursday it is building a team called Preparedness to oversee and evaluate the development of what it calls “frontier artificial intelligence models”—highly capable models with potentially dangerous abilities—and to watch for “catastrophic risks” in categories such as cybersecurity and nuclear threats.

Key Facts

The team will be responsible for monitoring the company’s AI models to keep them in line with the safety guardrails OpenAI says it is committed to.

Among the risks the company lists in a blog post are AI models’ ability to persuade human users through language and to carry out tasks autonomously.

The company also wants to guard against what some AI experts have called “extinction”-level threats, such as pandemics and nuclear war.

Preparedness will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who is on leave from the university while he works at OpenAI.

The team will also create and maintain what OpenAI calls a “Risk-Informed Development Policy,” which will outline how the company should handle the risks its AI models pose as they advance toward “artificial general intelligence,” or roughly human-level intelligence.

OpenAI is also hosting a challenge inviting people outside the company to submit ideas on how AI could be misused to cause real-world harm.

Key Background

In May, OpenAI leadership, including CEO Sam Altman, chief scientist Ilya Sutskever and chief technology officer Mira Murati, joined other AI experts in signing a letter urging that the risks of advanced AI models be addressed as a priority. That letter followed another in March, from AI experts and tech executives including Elon Musk, that voiced skepticism about the pace of AI development and called for a six-month pause on the technology’s development. In a March interview with ABC News, Altman said he was “particularly worried” that advanced AI models “could be used for large-scale disinformation,” but reiterated his belief that developing the technology is important for humanity. While ChatGPT took the world by storm after its release last November, passing exams and writing essays, it and chatbots like it have also spread misinformation and produced bizarre conversations.

Further Reading

AI Could Cause Human ‘Extinction,’ Tech Leaders Warn (Forbes)

Elon Musk And Tech Leaders Call For AI ‘Pause’ Over Risks To Humanity (Forbes)
