In today’s column, I examine a recently released high-priority report by the United Nations that emphasizes what must be done to prepare for the advent of artificial general intelligence (AGI).
Be aware that the United Nations has had an ongoing interest in how AI is advancing and what kinds of international multilateral arrangements and collaborations ought to be taking place (see my coverage at the link here). The distinctive element of this latest report is its insistence that the focus right now needs to be on the prospect of reaching AGI, a pinnacle form of AI. Many in the AI community assert that we are nearing the cusp of AGI and that, soon thereafter, we will arrive at artificial superintelligence (ASI).
For the sake of humanity and global survival, the U.N. seeks to have a say in the governance and control of AGI and ultimately ASI.
Let’s talk about it.
This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
Heading Toward AGI And ASI
First, some fundamentals are required to set the stage for this weighty discussion.
There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or perhaps even attain the more far-reaching possibility of artificial superintelligence (ASI).
AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.
We have not yet attained AGI.
In fact, it is unknown whether we will reach AGI at all, or whether AGI might be achieved decades or perhaps centuries from now. The AGI attainment dates floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale when it comes to where we are currently with conventional AI.
United Nations Is Into AI And AGI
I’ve previously explored numerous U.N. efforts regarding where AI is heading and how society should best utilize advanced AI.
For example, I extensively laid out the ways that the U.N. recommends that AI be leveraged to attain the vaunted Sustainable Development Goals (SDGs); see the link here. Another important document by the U.N. is the UNESCO-led agreement on the ethics of AI, which was the first-ever global consensus involving 193 countries on the suitable use of advanced AI (see my analysis at the link here).
The latest notable report is entitled “Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly” and was prepared and submitted to the Council of Presidents of the United Nations General Assembly (UNCPGA).
Here are some key points in that report (excerpts):
- “AI systems are rapidly advancing towards artificial general intelligence (AGI), characterized by systems capable of equaling or surpassing human intelligence in diverse cognitive tasks.”
- “Unlike traditional AI, AGI could autonomously execute harmful actions beyond human oversight, resulting in irreversible impacts, threats from advanced weapon systems, and vulnerabilities in critical infrastructures.”
- “To effectively address these global challenges, immediate and coordinated international action supported by the United Nations is essential.”
- “Without proactive global management, competition among nations and corporations will accelerate risky AGI development, undermine security protocols, and exacerbate geopolitical tensions.”
- “Coordinated international action can prevent these outcomes, promoting secure AGI development and usage, equitable distribution of benefits, and global stability.”
The bottom line is that a strong case can be made that if AGI is let loose without sufficient oversight, society will be at grave risk. A question arises as to how the nations of the world can unite to try to mitigate that risk. Aptly, the United Nations believes it is the appropriate body to take on that challenge.
U.N. Given Four Big Asks
What does the U.N. report say about urgently needed steps regarding coping with the advent of AGI?
The report urgently calls for four crucial recommendations:
- (1) Establish a global AGI Observatory.
- (2) Craft a set of international best practices and certifications for secure and trustworthy AGI.
- (3) Convene a special Framework Convention on AGI.
- (4) Potentially create a new UN agency entirely devoted to the AGI advent.
Those recommendations will be considered by the Council of Presidents of the United Nations General Assembly.
By and large, enacting one or more of those recommendations would involve some form of U.N. General Assembly resolution and would undoubtedly need to be integrated with the other AI initiatives of the United Nations. It is possible that none of the recommendations will proceed. Likewise, the recommendations might be revised or reconstituted and employed in other ways.
I’ll keep you posted as the matter progresses.
Meanwhile, let’s do a bit of unpacking on those four recommendations. I will do so, one by one, and then provide a provocative or perhaps engaging conclusion.
Global AGI Observatory
The first of the four recommendations entails establishing a global AGI Observatory that would keep track of what’s happening with AGI. Think of this as a specialized online repository that would serve as a curated source of information about AGI.
I agree that this would potentially be immensely helpful to the U.N. Member States, along with being useful for the public at large.
You see, the problem right now is that a tremendous amount of misinformation and disinformation concerning AGI is being spread around, often wildly hyping and at times undervaluing the advent of AGI and ASI. Assuming the AGI Observatory were properly devised and suitably careful in what it collects and shares, having a reliable and balanced source about AGI would be quite useful.
One potential criticism of such an AGI Observatory is that it might duplicate similar commercial or national collections about AGI. Another qualm is that if the AGI Observatory were allowed to become biased, it would misleadingly carry the aura of balance while actually being tilted in a directed way.
Best Practices And Certification For AGI
The second recommendation calls for crafting a set of AGI best practices. This would aid nations in understanding what kinds of governance structures ought to be considered for sensibly overseeing AGI in their respective countries. It could spur nations to proceed on a level playing field. Furthermore, it reduces the proverbial reinventing of the wheel, in that nations could simply adopt or adapt an already established set of AGI best practices.
No need to write such stipulations from scratch.
In a similar vein, setting up certifications for AGI would be well-aligned with the AGI best practices. AI makers and countries as a whole would hopefully prize being certified that their AGI conforms to vital standards.
A criticism on this front is that if the U.N. does not make the best practices compulsory, and likewise leaves AGI certification merely optional, few if any countries will go to the trouble of adopting them. In that sense, the whole arrangement would be mainly window dressing rather than a feet-to-the-fire commitment.
U.N. Framework Convention
In the parlance of the United Nations, it is customary to call for a Framework Convention on significant topics.
Since AGI is abundantly a significant topic, here’s a snapshot excerpt of what is proposed in the report: “A Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development.”
The usual criticism of these kinds of activities is that they can become a bureaucratic nightmare that doesn’t produce much of anything substantive. They can also stretch out into a lengthy affair, which is especially disconcerting if you believe that AGI is on the near horizon.
Formulate U.N. AGI Agency
The fourth recommendation indicates that a feasibility study be undertaken to assess whether a new U.N. agency ought to be set up. This would be a specialized U.N. agency devoted to the topic of AGI. The report stresses that this would need to be quickly explored, approved, and set in motion on an expedited basis.
An analogous type of agency or entity would be the International Atomic Energy Agency (IAEA). You probably know that the IAEA seeks to guide the world toward peaceful uses of nuclear energy. It operates under a founding statute that governs how the agency conducts its work. Overall, the IAEA reports to the U.N. General Assembly and the U.N. Security Council.
A criticism of the United Nations putting forward an AGI Agency is that it might get bogged down in international squabbling. There is also a possibility that it would inhibit the creative use of AGI rather than merely serving as a risk-reducing guide. To clarify, some argue against having too many regulating and overseeing bodies, since this might undercut innovative uses of AGI.
We might inadvertently turn AGI into something a lot less impressive and valuable than we had earlier hoped for. Sad face.
Taking Action Versus Sitting Around
Do you think that we should be taking overt governance action about AGI, such as the recommendations articulated in the U.N. AGI report?
Some would say that yes, we must act immediately. Others would suggest we take our sweet time. Better to get things right than rush them along. Still others might say there isn’t any need to do anything at all. Just wait and see.
As food for thought on that thorny conundrum, here’s a memorable quote by Albert Einstein: “The world will not be destroyed by those who do evil, but by those who watch them without doing anything.” Mull that over and then make your decision on what we should do next about AGI and global governance issues.
The fate of humanity is likely on the line.
Source: https://www.forbes.com/sites/lanceeliot/2025/07/12/united-nations-considering-these-four-crucial-actions-to-save-the-world-from-dire-agi-and-killer-ai-superintelligence/