The world is in a frantic race.
Geopolitical powers assert that the winner will take home all the bacon, as it were.
What race is being fiercely waged and strenuously pursued?
It is the AI race.
You could perhaps more aptly refer to this as the race to attain true Artificial Intelligence (AI), nowadays more fully referred to as Artificial General Intelligence (AGI). We want to somehow arrive at the seeming pinnacle of AI, known as AGI, that is comparable to human intelligence. We aren’t there yet. Indeed, despite all kinds of wild and brazen headlines, we do not know when, or if, we will achieve that high-bar mark. Today’s AI falls far short of overarching human intelligence, though there are certainly plenty of narrower ways in which AI has made impressive forays, such as being able to play top-notch world-class chess or perform other relatively constrained tasks.
The golden ring, though, is the advent of AI that exhibits human intelligence of a nature and depth akin to that of humankind. This is the holy grail of AI researchers and practitioners. From time to time, there have been specious claims of already having crossed the AI race finish line, which I’ve debunked in my column at the link here. Those that try to make finish-line crossing contentions are confounding the general public, at times doing so out of zealous innocence and at other times with seriously questionable motives in hand. All in all, this raises quite significant and vital AI Ethics considerations. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Anyway, there is no doubt that a worldwide, sprint-like AI race is avidly underway. You would be hard-pressed to claim otherwise.
Think of it this way. If we had already managed to achieve true AI or AGI, the odds are that the AI footrace would have been formally and globally declared as successfully concluded. I assure you that worldwide attention would be riveted on such a resounding and earth-shattering breakthrough. You would know of it. We all would. The AI madcap dash would ergo effectively no longer exist, though perhaps a secondary version might occur involving those that hadn’t attained true AI working feverishly to catch up. There is also the unsettling matter of how we will end up controlling or managing AGI if or when we get there.
No person or entity or nation can as yet properly claim the crown of producing true AI or AGI.
Meanwhile, a tremendous and unrelenting amount of handwringing is taking place about which nation (or nations) is at the head of the pack and which are trailing further behind. The assumption is that if you aren’t first, you will be left in the dirt. You will be eating the scraps left over from the AI winners. You are potentially going to be forever subjugated to the nation or nations that make the heralded leap into true AI or AGI.
As a quick aside, and to ease the wording of this discussion, I am going to henceforth use “AGI” whenever I want to invoke the aura of true AI. The somewhat newer phrase “AGI” is sometimes jarring to those that aren’t accustomed to seeing it used. We are all familiar with “AI” and you might be disconcerted to see the acronym “AGI” used instead. Allow me to explain why this is gradually emerging as a verbiage trend.
Part of the reason that AGI has risen in the arena of AI vernacular is that merely stating “AI” has become a regrettably watered-down phrasing. No one knows whether the AI you are mentioning is a barely-AI variant, some quasi-better making-progress AI infusion, or the someday futuristic fully human-equivalent AI. To deal with the overloading of “AI” as a catchphrase, the AGI moniker has been gaining preference among those insiders within the AI field that want to specifically and particularly allude to true AI.
So, in short, consider my mentioning of AGI to be the same as saying “true AI” of the robust caliber akin to human intelligence, thanks.
Let’s take a prudent deep breath and mindfully examine some facets of the race to attain AGI. There is even a meta-aspect that needs to be stated first. Be aware that there is a bit of cringy heartburn about using the allegory of a supposed footrace, or some other kind of racing activity, as a metaphor for the AGI race. Why so? I’ll let you in momentarily on the complications and complexities of why (some say) a footrace or its equivalent is entirely misleading and an insidiously simplistic viewpoint.
Here are the key points that I’ll go over with you in this discourse:
- If this is a race, the AGI finish line seems quite ill-defined
- The AGI race might go to a person, an entity, or a nation
- Metrics and how nations are being compared in the AGI race
- Geopolitical maneuvering and alignment for the AGI race
- International AI laws and AI Ethics as referees in the AGI race
You might want to fasten your seatbelt as I examine the urgently proceeding AGI footrace (yes, I dare to call it a footrace) that has nearly everyone moving at breakneck speed and seemingly skyrocketing ahead on this burning quest. Some might say that this is a race for the betterment of humanity, while others are forewarning that the race might spell utter doom for us all.
Time will tell.
If This Is A Race, The AGI Finish Line Seems Quite Ill-Defined
A finish line is usually a rather definitive demarcation. You either have reached the finish line or you have not. Coming up short doesn’t seem to do you much good. Imagine an Olympics 400-meter dash and whether you would especially remember or would heap accolades on the runners that didn’t finish the race at all (never having crossed the finish line). Unlikely.
Will we know when we have reached AGI such that it is reasonably all agreed that the finish line has been achieved?
There are heated disagreements about the demarcation of AGI.
For example, suppose that we devise Artificial Intelligence that seems entirely able to exhibit human intelligence, but there isn’t a semblance of sentience therein per se (see my coverage about the arguments over AI sentience at the link here). The AI is computationally able to mimic or otherwise perform as human intelligence does. There though isn’t a spark of aliveness or sentience that we associate with humans and other living creatures. Does this AGI count as reaching the goal that we thought we had?
Some would counterargue that it wouldn’t matter whether sentience per se is wrapped into this AGI. As long as the AGI can exhibit human intelligence, the incorporation of sentience is a differing matter that we might or might not wish to see arise. Sentience, in that sense of things, is an add-on.
Others vehemently argue that the only means of attaining human intelligence in AGI will be to integrally embody sentience. AGI and sentience are either considered the same, or they are a mixture of an irreducible inseparable dual embodiment. To get AGI, you must have sentience, they would contend.
Setting aside that angle of the debate, another perspective is that we could use the famous Turing Test to assess whether AGI has been achieved. I have covered the Turing Test in-depth at the link here. In brief, the notion consists of having a human pose questions to the AGI, and if the human cannot distinguish the AGI-generated replies from those of humankind, then we would declare the AGI as being able to exhibit human intelligence.
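To make that setup a bit more concrete, here is a minimal, purely illustrative sketch in Python of the imitation-game style of evaluation. Everything in it is a hypothetical placeholder: the human_respondent, candidate_ai, and judge functions are stand-ins rather than a real evaluation harness, and the random judge exists only to make the loop runnable.

```python
import random

# Hypothetical stand-ins: in a real test these would be a human respondent
# and the AI system under evaluation, each answering free-form questions.
def human_respondent(question: str) -> str:
    return f"A person's thoughtful answer to: {question}"

def candidate_ai(question: str) -> str:
    return f"The AI system's generated answer to: {question}"

def judge_guesses_machine(answer_a: str, answer_b: str) -> str:
    # Placeholder judge: a real interrogator would probe and compare replies.
    # Here we merely guess at random, purely to make the loop executable.
    return random.choice(["A", "B"])

def imitation_game(questions, trials: int = 100) -> float:
    """Return the fraction of trials in which the judge correctly spots the AI."""
    correct = 0
    for _ in range(trials):
        question = random.choice(questions)
        # Randomly assign the human and the AI to slots A and B.
        if random.random() < 0.5:
            a, b, machine_slot = human_respondent(question), candidate_ai(question), "B"
        else:
            a, b, machine_slot = candidate_ai(question), human_respondent(question), "A"
        if judge_guesses_machine(a, b) == machine_slot:
            correct += 1
    return correct / trials

if __name__ == "__main__":
    qs = ["What is courage?", "Describe a childhood memory.", "Why do jokes work?"]
    accuracy = imitation_game(qs)
    # If the judge cannot beat chance (about 0.5), the AI "passes" this toy test.
    print(f"Judge accuracy at spotting the machine: {accuracy:.2f}")
```

The point of the sketch is simply that “passing” boils down to whether the interrogator can beat chance at telling the two respondents apart, which is precisely where the shortcomings discussed next come into play.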
There are lots of troubles or shortcomings often associated with the Turing Test.
Suppose that the human making the inquiries does a lousy job and fails to ask probing questions. One concern is that many of today’s seemingly powerful Large Language Models (LLMs) can parrot back to a human the content that the LLM was trained on (i.e., text and digital media often sourced via large-scale scraping of the Internet). As such, ordinary questions that have already been answered and exist online can potentially all be readily “answered” by the LLM, but this is debatably not due to any human intelligence embodiment of an AGI caliber.
Lots of other qualms come up. Suppose the human is unable to comprehend the answers. Or suppose the human deludes themselves into believing that the answers are all exhibitory of human intelligence. I’ve even covered the threadbare idea held by some that all we need to do is ask the AI whether it is AGI or is sentient, which I make clear is not a very convincing form of AGI proof, see the link here.
Finally, as one added thought, do we need to fully reach the finish line to consider that AGI has been reached?
I mentioned earlier that we usually forget about those that don’t reach the finish line. This might not be sensibly analogous to the AGI race. I believe a compelling case can be made that if we manage to get a substantial way toward AGI, we are already going to find ourselves in a state of amazement and either great benefit or great trouble. You see, many important and highly useful outcomes could arise from an almost-there AGI. Coming up short won’t be as problematic as failing to finish a 400-meter footrace, especially since it might be a vital foundation for making our way to the fuller version of AGI (a marathon, in comparison).
The metaphor of AGI attainment being a type of footrace or its equivalent is at times an unsatisfactory and inadequate one.
The AGI Race Might Go To A Person, An Entity, Or A Nation
Some harbor the dreamy notion that AGI is going to be attained by some tinkerer working in the garage while in their pajamas and be an outcropping of crazily inventive computerization experiments that they have been toiling away on for years upon years. That is the classic high-tech trope of the lone wolf.
Sorry to report that this is an extremely low-odds proposition.
The greater odds are that an entity such as a business or a research team will be the AGI go-getters. A strong and prevailing belief is that it will take a village to arrive at AGI. The lone wolf won’t have the resources or the insights by themselves to reach AGI. They might contribute to the quest. They might provide needed pieces to the puzzle. They won’t be capable of garnering the whole kit and caboodle.
Speaking of villages, another strongly held viewpoint is that only nations will be able to attain AGI. Via a combination of the people, businesses, academics, and all other manner of entities within a nation-state, the AGI will arrive as a result of the combined work of the national totality. The unit of attention for the winner in this race is the nation-state, rather than something more scattered, free-form, or individualistic.
In short, if it takes a village, the village is going to be of a national scale, thus the nation-state will be the designated runner that crosses the finish line in the AGI race.
Metrics And How Nations Are Being Compared In The AGI Race
Take as a given that the AGI achievement is going to be based at the nation-state level.
To reiterate, we don’t know that for sure, but it seems a reasoned assumption.
Consider the ramifications of the nation-state basis. Suppose a lone wolf does manage to get to AGI first and believes that their work is beyond that of the nation-state that they are a member of. This person proclaims they are not of any nation-state in terms of the crafted AGI. Would we still give credit to that nation-state and would the AGI be within the control and purview of that nation?
Envision another alternative in which a large multinational conglomerate arrives at AGI first. Which nation can say that the AGI is their “thing” to use and deploy (will it be construed as property, or instead be granted a variant of legal personhood)? Perhaps all of the nations that the company exists within are to get equal credit. Or maybe only the nation where the formal headquarters is geographically placed. It could be a complex splitting of a veritable pot of gold.
In any case, the general popular opinion is that a nation-state is going to be the crucial determiner of attaining AGI. A nation that encourages AI research and development within its national efforts is going to presumably get to AGI sooner than other nations that don’t do likewise.
There is a predominant view that the AI race is a national one.
A head-scratching issue is how we are to ascertain whether one nation is ahead of or behind another in the AGI race.
In a conventional footrace, we could easily identify metrics that can be used to determine which runners are doing well and which ones are not. The speed of the runner can be readily calculated. This doesn’t guarantee that they will finish first, but it at least shows promise. The physical distance between the runners and the distance remaining to the finish line are obviously vital criteria that we can easily measure.
The AGI race doesn’t have those kinds of assured forms of metrics or measurements.
We are using all manner of surrogate measures since there isn’t any definitive way to calculate where the finish line is, nor how far we are from it.
Let’s take a look at the types of metrics conventionally being considered. An especially handy source of AI-related global measurements is annually collected and published by the Stanford Institute for Human-Centered AI (HAI) at Stanford University. The report is available online for free and the latest release is entitled The AI Index 2022 Annual Report (based on data collected for 2020-2021). I’ll be sharing with you in a moment some highlights of the national comparisons mentioned in their latest compilation.
Metrics being used to gauge national and international progress on AI tend to include a bit of everything, at times bordering on the inclusion of the veritable kitchen sink too.
The types of measures typically examined include the following (a toy scoring sketch follows the list):
- Number of AI research articles attributed to a particular nation
- Number of cited references to AI articles of a particular nation
- Number of AI journals based within a particular nation
- Number of AI conferences occurring within a particular nation
- Number of AI conferences sponsored by a particular nation
- Number of AI-related patents granted within a particular nation
- Number of AI company startups within a particular nation
- Number of AI jobs within a particular nation
- Number of new AI jobs or hiring in a particular nation
- Number of AI laws or legislative bills introduced in a particular nation
- Number of AI laws passed or enacted in a particular nation
- Other
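To illustrate why these surrogate measures spur so much quibbling, here is a toy scoring sketch, assuming entirely invented nation names, figures, and weights. It does not reflect the HAI AI Index methodology or any real data; it merely shows how such counts might be rolled into a single per-nation score, and how the result hinges on an arbitrary choice of weights.

```python
# Toy aggregation of surrogate AI metrics into a single score per nation.
# All figures and weights below are invented purely for illustration.
metrics = {
    "Nation A": {"papers": 9000, "patents": 1200, "startups": 300, "ai_laws": 3},
    "Nation B": {"papers": 14000, "patents": 900, "startups": 120, "ai_laws": 1},
    "Nation C": {"papers": 4000, "patents": 400, "startups": 50, "ai_laws": 5},
}

# Arbitrary weights; adjust them and the apparent "leader" can change too.
weights = {"papers": 0.4, "patents": 0.3, "startups": 0.2, "ai_laws": 0.1}

def normalize(values):
    """Scale each nation's metric to [0, 1] relative to the top nation."""
    top = max(values.values())
    return {nation: value / top for nation, value in values.items()}

def composite_scores(metrics, weights):
    # Normalize each metric across nations, then take a weighted sum per nation.
    per_metric = {
        m: normalize({nation: vals[m] for nation, vals in metrics.items()})
        for m in weights
    }
    return {
        nation: sum(weights[m] * per_metric[m][nation] for m in weights)
        for nation in metrics
    }

for nation, score in sorted(composite_scores(metrics, weights).items(),
                            key=lambda kv: kv[1], reverse=True):
    print(f"{nation}: {score:.2f}")
```

Change the weights and the ranking can flip, which is one reason the skeptics (discussed below) doubt that any straight line runs from these counts to AGI attainment.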
Consider these contemporary indications from the HAI AI Index 2022:
- AI Publication Citations: “On the citations of AI repository publications, the United States tops the list with 38.6% of overall citations in 2021, establishing a dominant lead over the European Union plus the United Kingdom (20.1%) and China (16.4%).”
- AI Journals/Conferences: “In 2021, China continued to lead the world in the number of AI journal, conference, and repository publications—63.2% higher than the United States with all three publication types combined. In the meantime, the United States held a dominant lead among major AI powers in the number of conference and repository citations.”
- AI Patents: “China is now filing over half of the world’s AI patents and being granted about 6%, about the same as the European Union plus the United Kingdom. The United States, which files almost all the patents in North America, does so at one-third the rate of China. Compared to the increasing numbers of AI patents applied and granted, China has far greater numbers of patent applications (87,343 in 2021) than those granted (1,407 in 2021).”
- Newly Funded AI Companies: “Investment data by the number of newly funded AI companies in each region. For 2021, the United States led with 299 companies, followed by China with 119, the United Kingdom with 49, and Israel with 28. The gaps between each are significant.”
- AI Hiring Pace: “New Zealand, Hong Kong, Ireland, Luxembourg, and Sweden are the countries or regions with the highest growth in AI hiring from 2016 to 2021.”
- AI Job Postings: “In 2021, California, Texas, New York, and Virginia were states with the highest number of AI job postings in the United States, with California having over 2.35 times the number of postings as Texas, the second greatest. Washington, D.C., had the greatest rate of AI job postings compared to its overall number of job postings.”
- AI Legislative Action: “An AI Index analysis of legislative records on AI in 25 countries shows that the number of bills containing ‘artificial intelligence’ that were passed into law grew from just 1 in 2016 to 18 in 2021. Spain, the United Kingdom, and the United States passed the highest number of AI-related bills in 2021, with each adopting three.”
We can abundantly admire and appreciate the hard work involved in compiling those nation-state runner statistics for the AGI race.
Skeptics, though, quibble quite a bit about using any type of metric in the AGI race.
The thorny question is whether you can draw any kind of straight line from the number of AI articles or AI conferences within a particular nation to the ultimate attainment of AGI. The same is said for the number of AI jobs, the number of AI companies, and the slew of other metrics. It could be that those counts have little to do with attaining AGI. The argument goes that those measures are more heat than light.
The counterargument is that we have to try and measure where we are and where we are going. Putting your head in the sand does not seem like much of a viable way to assess whether we are heading toward AGI or maybe further away from AGI. It is hoped and generally assumed that the more energy and attention toward making advances in AI, the closer we will get to AGI. These metrics are the best that we can do to glean how much energy and attention is being allocated and consumed in the AGI race.
Round and round that goes.
Each of the metrics can, by itself, also be batted back and forth.
For example, consider the number of legislative laws or bills about AI.
You can claim that if lawmakers are focusing on AI-related laws, this is a good sign that the nation is taking quite soberly the importance of AI and the societal ramifications of where AI is headed. A case can be made that this showcases that a lot of AI advancing effort is arising in that particular nation. Why would you go to the trouble to enact AI laws unless AI was notably burgeoning and bubbling up as a demonstrative element of your nation?
In that manner, nations promulgating new AI laws are interpreted as giving a telltale sign or signal that AGI progress is well underway in that nation.
Some critics assert that proposed new laws about AI are going to stifle AI efforts within each such nation. The lawmakers and political leaders are going to shoot themselves in the foot. Laws are going to prematurely cast a gloomy shadow over AI efforts underway in that specific nation. The AI advancement faucet is going to get clogged with legal hairballs, and the pace of AGI progress will slow to a trickle in that AI-law-proclaiming nation. In the meantime, other nations that aren’t passing those kinds of AI laws will continue unabated. It is as though you decided to put lead weights on a runner that is already on the 400-meter track. If you aimed to aid them and speed them up, you’ve done precisely the opposite.
Whoa, the retort goes, the enactment of AI laws is more akin to making sure that there aren’t any unnecessary roadblocks ahead of the runner. The laws provide guidance in the same way that the lines on the racetrack keep the runner smoothly going in the right direction. Without those painted lines, the runners might go amok. New AI laws will keep them striding in unison toward a desirable outcome. Countries that don’t do likewise in terms of new AI laws will find their runners going in every wild direction, including possibly running entirely off the track and harming innocent bystanders beyond the AGI race itself.
There are also “hidden” AI-related laws that some are counting and meanwhile some others are not counting as part of these metrics (making for a mishmash when trying to compare counts).
For example, if a nation enacts a law regarding autonomous vehicles such as self-driving cars, do you count this as an AI law? To clarify, an autonomous vehicle such as a fully autonomous self-driving car is going to have an AI driving system at the core of its driverless capabilities (see my coverage at the link here). Due to the AI involved, any laws about autonomous vehicles could be sensibly argued to be essentially AI laws. On the other hand, you might persuasively assert that the law is about the autonomous vehicle and not per se about the AI, and therefore it doesn’t count in the tally of AI-specific laws.
It is messy.
All of this consternation about metrics might cause you to shrug your shoulders amid the polar opposite views on these weighty considerations. As might be evident, the metrics are nearly always subject to disparate interpretations about what they mean and how the status of a nation regarding AGI is appropriately analyzed.
Geopolitical Maneuvering And Alignment For The AGI Race
Which nations are ahead in the AGI race?
Which nations are falling behind?
The aforementioned metrics attempt to showcase where each of the runners currently is. A basic assumption is that if the metrics do portray an apt indication of AGI-seeking positioning, these various pole positions might remain the same over time. Of course, the reality is that national interest and attention can increase or wane during the bumpy path toward AGI. You might be wisest to expect changes in positioning.
One important consideration is that nations are not really in this race on their own.
Nations are likely to be handing the baton back and forth between each other. The AGI race at times has one or more nations gladly working hand-in-hand. Sometimes this is done warily rather than with friendly glee. In other instances, nations might hold back from each other. At any instant in time, the race posturing can be quite different than it was a few steps back, plus can be quite different a few steps into the future.
Consider this point made by the HAI AI Index 2022 report about cross-country collaborations: “Despite rising geopolitical tensions, the United States and China had the greatest number of cross-country collaborations in AI publications from 2010 to 2021, increasing five times since 2010. The collaboration between the two countries produced 2.7 times more publications than between the United Kingdom and China—the second highest on the list.”
Cynics would say that perhaps the use of cross-collaborations is occasionally done as a ruse. A nation might overtly claim they are cross-collaborating, appearing surface-wise to be doing so, meanwhile deep down they are keeping their best AGI progress a hidden national secret. Maybe this is done to blunt the progress of the cross-collaboration nation. Perhaps this is being done to ensure that the secret sauce is not inadvertently handed out. All kinds of reasons are possible.
In today’s modern digital Internet online world, trying to keep AGI insights tightly under wraps can be a difficult chore. The intense desire to uncover or invent AGI is a compelling allure that can spur individual AGI developers and researchers to openly share their latest work. Nations can find that trying to put a cap on such sharing is a lot harder than it might seem, and likely a lot harder than back in the days when everything was paper-based and required physically moving documents around the globe.
The movement toward open source has certainly been a contemporary emphasis for much of the latest AI and AGI research, as mentioned in the HAI AI Index 2022 report: “Each year, thousands and thousands of AI publications are released in the open source, whether at conferences or on file-sharing websites. Researchers will openly share their findings at conferences; government agencies will fund AI research that ends up in the open-source; and developers use open software libraries, freely available to the public, to produce state-of-the-art AI applications. This openness also contributes to the globally interdependent and interconnected nature of modern AI R&D.”
All in all, nations are generally sharing and yet might only be showing part of their hand. Other nations might not be sharing or only give a pretense of doing so. Some nations struggle mightily with trying to gauge what those within their nation are giving away versus hanging onto. And so on.
I’ve characterized the nature of these national moves in these ways:
- AI Go-it-alone Nation (tries to proceed on its own)
- AI One-way-only Nation (takes in, won’t give out)
- AI Leftover Nation (gets what it can from others)
- AI Open-sharing Nation (prides itself in sharing)
- AI Hollow-sharing Nation (falseness in sharing)
- AI Alliances Nation (trying to make as many alliances as it can)
- AI Not-in-the-game Nation (seeking AGI is not a national priority)
- AI Cheating Nation (reverse engineering or sneakily stealing from other nations)
- Other
A nation can be in more than one of those buckets at a time.
A nation can be in one of those buckets for a while, move out of the bucket, and possibly later go back in.
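As a purely illustrative sketch (with hypothetical nation names), you can think of a nation’s posture as a set of tags that can overlap and shift over time, rather than a single fixed category:

```python
# Hypothetical illustration: a nation's AGI-race posture as a mutable set of tags.
postures = {
    "Nation X": {"AI Alliances", "AI Open-sharing"},
    "Nation Y": {"AI One-way-only", "AI Hollow-sharing"},
    "Nation Z": {"AI Not-in-the-game"},
}

# A nation can occupy several buckets at once...
postures["Nation X"].add("AI Go-it-alone")

# ...and can leave a bucket and later move into another (or return to it).
postures["Nation Z"].discard("AI Not-in-the-game")
postures["Nation Z"].add("AI Leftover")

for nation, tags in postures.items():
    print(f"{nation}: {', '.join(sorted(tags))}")
```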
Nation-state desires and attention pertaining to the quest for AGI are a dynamic ebb and flow that will assuredly keep shifting. Which nation is where in the race is a moving target, and a constant eye will be needed to figure out where all the players are positioned at any given moment in time.
International AI Laws And AI Ethics As Referees In The AGI Race
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here and the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Let’s consider the impact and vital nature of international AI laws and international proclamations of AI Ethics precepts on the AGI race.
Nations striving toward AGI might do so with abandon and find themselves veering toward some of the oft-popularized existential risks of AGI. The hope is that by putting in place international AI laws and international AI Ethics precepts, nations will be guided toward AI For Good and steer clear of AI For Bad.
Per our racetrack analogy, those AI laws and AI Ethics considerations are like trying to keep runners from going outside of the track. There are immense temptations to take shortcuts in the AGI race. Those shortcuts could lead a nation down a seemingly quicker path to the finish line, while simultaneously putting that nation and other nations at undue risk. A subtle but telling example consists of dual-use AI, which I’ve examined at the link here, whereby an AI advancement is readily switched, with nary much effort, from being aimed at goodness to producing cataclysmic badness.
You could assert that the international AI laws and international AI Ethics are like referees or umpires.
An assumption is that these internationally devised legal and ethical mechanisms will keep the AGI race on a more even keel. The thing is, whether particular nations opt to heed the referees or umpires is a different matter. Similarly, there is a vexing question of how those authorities can provide penalties or incentives to keep the runners on the right track. The odds are that nations will do as they wish, and other nations might need to throw their collective weight against the off-the-path rule-breaking that some nations are bound to undertake.
Conclusion
Louis Pasteur, the legendary chemist and microbiologist, famously said this: “Science knows no country, because knowledge belongs to humanity, and is the torch which illuminates the world. Science is the highest personification of the nation because that nation will remain the first which carries the furthest the works of thought and intelligence.”
Can we say that the attainment of AGI knows no country and that AGI will belong to all of humanity?
Or will the nation that first arrives at AGI be possessive of it, becoming drunk with power and going power-mad?
For those of you that like a bit of a twist on this particular conundrum, consider that AGI might in itself be the type of attainment that is the proverbial snake in the grass. The discoverer of the snake might be the first to get a snakebite. Being first has its risks.
Crossing the finish line on AGI is not necessarily going to be as celebratory and carefree as some might think. Nor will harnessing AGI necessarily be easy. Some might argue that coping with AGI could be nearly impossible since the AGI will seemingly have deviousness and ingenuity comparable to humankind’s. Nations ought to be mindful of what they are trying to attain and what the result will be, doing so beforehand rather than getting caught by surprise. They might have a hornet’s nest in their national treasure chest.
As Pasteur proffered: “Fortune favors the prepared mind.”
Source: https://www.forbes.com/sites/lanceeliot/2022/08/15/ai-ethics-and-the-geopolitical-wrestling-match-over-who-will-win-the-race-to-attain-true-ai-or-agi/