Are you a global cooperative optimist or pessimist?
I ask because there is a seemingly significant topic that might warrant an all-out global multilateral arrangement, though whether this can be successfully pulled off is in serious doubt. Indeed, some believe that trying to attain any such global cooperative stipulation is unnecessary and futile, and won’t move the needle anyway (those are the pessimists). The counterargument is that we have to try; otherwise, gloomy and apocalyptic results will confront us squarely and frighteningly (those are the optimists, ardently believing we can do something positive to stop an anticipated Armageddon from happening).
I’m referring to the looming matter of Artificial Intelligence (AI).
Yes, the shadows being cast by AI that discriminates, exhibits undue biases, and potentially can produce global calamities, including the proclaimed existential risk of humankind being wiped clean off the planet, are notable prompters for a global multilateral solution. But can the world unite to cope with the advent of AI?
Keep that vital question at the top of your mind; we’ll be unpacking it shortly herein. For those of you overall interested in AI, you might find informative my ongoing and extensive coverage of AI Ethics and AI Law at the link here and the link here, just to name a few.
You might vaguely be aware that Elon Musk, for example, has repeatedly decried the dangers of AI and warned that we are running out of time to confront unimaginably horrible outcomes. One aspect that he has clamored for is new laws and regulations to deal with the rising tide of AI that veers into monstrous territory. For my coverage of AI as an existential risk, see the link here. For my analysis of the ways in which Elon Musk seems to make predictions and carries out his endeavors, see the link here.
A recent tweet by Elon Musk to his nearly 120 million followers said this: “There is no regulatory oversight of AI, which is a *major* problem. I’ve been calling for AI safety regulation for over a decade!” (tweet dated December 1, 2022).
There are individual nations that are in fact seeking to establish new laws regarding AI. The United States has what some would categorize as a “unilateral” (one nation) focus via the proposed Algorithmic Accountability Act that continues to languish in the halls of Congress. Most don’t expect much forward progress in the near term on this draft bill given the polarization and split viewpoints within and across the legislative and executive branches.
You can also explore some bilateral AI-related regulatory efforts involving various sets of two countries seeking to settle on mutually agreeable approaches to AI. Perhaps the most notable multilateral effort is the EU’s proposed Artificial Intelligence Act (AIA), which continues to undergo reviews and feedback as to the viability of such regulation. For my analysis on this, see the link here.
I’d like to walk you through some of the challenges underlying global multilateral arrangements. In short, they face an uphill battle. They tend to be extraordinarily hard to bring to fruition. Once they get put in place, keeping them active and emboldened is an ever-present exercise.
Of course, you could certainly argue that anything worth doing is likely going to be a rocky road. There are no free rides toward achieving global cooperation. A realist recognizes that getting disparate perspectives to somehow align across the boundaries of nation-states is inherently problematic. One would seemingly be foolish to think otherwise.
Hope springs eternal.
Look for example at the International Space Station (ISS). This is a showcase of global cooperation on a multilateral basis. It is enduring (the initial launch was in 1998). Sure, various spats or bubbling issues come up from time to time, and no such arrangement is perfect or ideal, but you can look up at the sky and see that success can be had overall. We, as a society, are learning about long-term space exploration each and every day.
Another showcase could be CERN. You likely know that the European Organization for Nuclear Research has been around since the 1950s. Still humming along. Have there been hiccups? Naturally. Nonetheless, you would be hard-pressed to overstate the importance and value that this intergovernmental entity has achieved in the sciences, especially particle physics.
You might be wondering why AI should somehow get into the vaunted realm of such crucial aspects comparable to long-term space exploration or seeking to discover the true nature of atomic matter. We can’t just toss any old topic into the global multilateral conundrum. With all the angst and energy consumed by trying to craft and maintain a global cooperative, there had better be a darned good reason to pursue the arduous and heady path.
Normally, these kinds of thorny parameters enter into the picture:
- Must be of vital importance on a widespread basis beyond any single country alone
- A large-scale issue that has an anticipated massive interconnected societal impact
- Something that cannot be readily solved and seems puzzling at the get-go
- Requires a long-term commitment and is not fly-by-night or hastily dealt with
- Stokes compelling interests that go beyond national borders
- Often inspirational, extending beyond the reach of today’s humanity
- Either has humongous upsides and/or equally disastrously destructive downsides
- Can potentially bring together friends and frenemies alike on a commonality journey
- Without cooperation, the chances of success are perceived as abundantly lessened
- Costs are enormous and thus the best chance entails a cost-sharing approach
- Stokes the classic FOMO (fear of missing out) as a means of garnering international attention
- Other
Not all global multilateral initiatives necessarily meet those aforementioned criteria. You can mix and match from the list. The gist is that the focus of a highly prized and noteworthy global multilateral arrangement is nearly always something big. Something really big.
Take a moment to noodle on these three rather striking questions:
- Does AI merit the esteemed hallmark for attaining a global multilateral arrangement?
- If so, are there identifiable contours of what such an AI-pertinent arrangement might look like?
- And does this all encompass or invoke AI Ethics and AI Law?
I’m glad that you asked.
Before diving deeply into the topic, I’d like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the discussion will be contextually sensible.
The Rising Awareness Of Ethical AI And Also AI Law
The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and makes computational choices imbuing undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.
I want to make abundantly sure that we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
Be very careful of anthropomorphizing today’s AI.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing, there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
Not good.
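To make that biases-in, biases-out point concrete, here is a minimal sketch. The data, group labels, and approval threshold are entirely hypothetical and no real ML library is involved; a crude frequency tally stands in for the far more elaborate pattern matching of actual ML/DL, but the lesson is the same:

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (applicant group, approved?).
# Group "A" was approved far more often -- a bias baked into the data.
history = ([("A", True)] * 90 + [("A", False)] * 10 +
           [("B", True)] * 40 + [("B", False)] * 60)

# A crude stand-in for pattern matching: tally approval rates per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str) -> bool:
    """Approve a new applicant if the historical approval rate was >= 50%."""
    approvals, total = counts[group]
    return approvals / total >= 0.5

# The "model" simply replays the historical skew: biases-in, biases-out.
print(predict("A"))  # True
print(predict("B"))  # False
```

Nothing in that toy model “knows” anything untoward is happening; it mathematically mimics the historical data, which is precisely why buried biases can be so hard to ferret out.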
All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.
Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.
Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.
In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.
Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.
All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the emerging norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
I also recently examined the so-called AI Bill of Rights, the U.S. government document officially entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” which was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.
In the AI Bill of Rights, there are five keystone categories:
- Safe and effective systems
- Algorithmic discrimination protections
- Data privacy
- Notice and explanation
- Human alternatives, consideration, and fallback
I’ve carefully reviewed those precepts, see the link here.
Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of exploring the possibilities of global cooperative multilateral arrangements regarding AI.
Moving AI Toward Global Cooperative Multilateral Arrangements
Let’s revisit my earlier postulated questions on this topic:
- Does AI merit the esteemed hallmark for attaining a global multilateral arrangement?
- If so, are there identifiable contours of what such an AI-pertinent arrangement might look like?
- And does this all encompass or invoke AI Ethics and AI Law?
Thanks for waiting patiently to get an answer to those provocative queries.
I’ll say this: if you are looking for an easy answer, your best bet is that each of those questions gets a resounding Yes. Namely, yes, AI does merit global multilateral arrangements. Yes, there are identifiable contours for this. Yes, all of this is encompassed by crucial AI Ethics and AI Law considerations.
That being said, there are arguments in outright opposition to AI-related global multilateral arrangements (we’ll cover those in a bit). Thus, some would exhort that the answer is No rather than Yes when considering such initiatives.
There is also a lot of heated debate over what the contours of any such AI initiatives ought to look like and the deliberations can get messy and divisive. In terms of AI Ethics and AI Law, there are some that insist those topics don’t belong in any purely “technological” AI initiatives and try to shove Ethical AI and AI Law into a tiny corner of their own.
Sigh.
My point is that the “Yes” answer is fuzzy, and the reality is much more complex than any simple conclusive assertions. You could proffer that the answer is all No’s. Perhaps the middle ground answer is Maybe.
There are two studies regarding the notion of AI global multilateral possibilities that I’ll next use as a strawman mechanism for further exploring the topic overall.
Both studies are via the Brookings Institution. One is dated October 2021 and is entitled “Strengthening International Cooperation on AI” by co-authors Cameron F. Kerry, Joshua P. Meltzer, Andrea Renda, Alex C. Engler, and Rosanna Fanni. The other recently published one is entitled “Global AI Cooperation On The Ground: AI Research And Development On A Global Scale” dated October 2022 and co-authored by Cameron F. Kerry, Joshua P. Meltzer, and Andrea Renda. I’ll refer to them distinctively as Study 1 (October 2021) and Study 2 (October 2022).
These studies and similar ones are considered broadly in the arena of Responsible AI.
The general notion is that we want AI that abides by proper and desirable human values. Some refer to this as Responsible AI. Others similarly discuss Accountable AI, Trustworthy AI, and AI Alignment, all of which touch upon the same cornerstone principle. For my discussion on these important issues, see the link here and the link here, just to name a few.
How can we get AI to align with human values?
One potential approach involves doing so on a global multilateral scale. The logic is straightforward. If we can all agree on how to ensure that AI is aligned with human values, we are globally safer accordingly. If we don’t do this, the implication is that some nations will have AI that is misaligned with human values, while others will perhaps be aligned. Those nations that have misaligned AI will inevitably and inexorably leak into other nations. AI is something that cuts across borders. You can’t particularly cage AI and keep it tightly bound in only one particular nation (see my discussion on AI containment at the link here).
You can persuasively argue that it is an all-or-nothing dilemma. The weakest link in the chain, such as a nation that opts to not align its AI with human values, will assuredly end up infecting the rest of the world with the misaligned AI. It will spread, globally. The hope is to try and curtail such worldwide calamities.
A realist would say that you’ll never get all nations to abide by some form of AI alignment. Okay, they have a valid point. But for those that then conclude that you might as well not try, well, that’s the proverbial tossing of the baby out with the bathwater (an old saying, perhaps worth retiring). We can at least seek to get as much AI alignment as we can achieve. This presumably will help narrow down the misaligned AI and make things somewhat more tenable in trying to cope with the remainder rather than the bulk of AI.
I believe that provides a handy segue into the two studies that I mentioned.
First, Study 1 mentions that the Brookings Institution and the Centre for European Policy Studies got together a few years ago and formed the Forum for Cooperation on AI (FCAI): “In 2019, The Brookings Institution and the Centre for European Policy Studies (CEPS) saw a need for deeper exploration of international cooperation in AI development and policymaking and established the Forum for Cooperation on AI (FCAI), a high-level exchange among government officials and leading experts from academia, the private sector, and civil society.”
As further delineated in Study 2, the FCAI especially explores the pursuit of Responsible AI: “The Forum for Cooperation on Artificial Intelligence (FCAI) has investigated opportunities and obstacles for international cooperation to foster the development of responsible artificial intelligence (AI). It has brought together officials from seven governments (Australia, Canada, the European Union, Japan, Singapore, the United Kingdom, and the United States) with experts from industry, academia, and civil society to explore similarities and differences in national policies on AI, avenues of international cooperation, ecosystems of AI research and development (R&D), and AI standards development among other issues.”
In Study 2, the primary attention went toward joint possibilities of AI multilateral R&D (research and development) activities, especially urging that AI related to climate change and AI related to privacy-enhancing technologies be given initial top priority: “FCAI convened a roundtable on February 10, 2022, to explore specific use cases that may be candidates for joint international research and development and inform selection and design of such projects based on criteria outlined below. Potential areas considered were climate change, public health, privacy-enhancing technologies for sharing data, and improved tracking of economic growth and performance (economic measurement).”
For our interests in this discourse herein, I’ll be mainly looking at Study 1 henceforth.
Study 1 had this handy description of the global pursuit of Responsible AI:
- “The pursuit of responsible AI—AI that is ethical, trustworthy, and reliable—is increasingly central to many governments’ AI policy, a focus for AI research and development, and a concern for civil society eager to maximize the opportunities of AI while mitigating its risks. These issues transcend national boundaries” (Study 1).
You’ll note that a key element is that AI transcends national boundaries. This gets back to my earlier listed set of parameters or factors associated with topics that are deserving of global multilateral arrangements. In the case of AI, there is really no argument that AI cuts across nation-state boundaries. With the vastly interconnected electronic and digital world we live in today, AI can get to just about any corner of the planet (unless, I suppose, you live in a cave and have absolutely no Internet connection and have managed to avoid the online onslaught).
One of my pointed questions was whether AI rises to the level of warranting various global multilateral arrangements. I claimed that the answer is Yes.
Study 1 provides a helpful indication of the justification for my contention:
- “International cooperation is key to realizing the benefits of AI and addressing its risks. On one hand, no one country acting alone can make ethical AI pervasive, leverage the scale of resources needed to realize the full benefits of AI innovation, and ensure that the advances from developing AI systems can be made available to users in all countries in an open and non-discriminatory trading system. On the other hand, the opportunity cost of insufficient international cooperation is further exacerbated by the prospect of uncoordinated regulatory interventions that would limit opportunities for R&D, create costs to AI use and investment, and undermine the capacity for FCAI-participating governments to establish a system of AI governance built on democratic principles and respect for human rights” (Study 1).
Study 1 also lists seven solid reasons or justifications for AI global multilateral arrangements, providing an easy-reading overview (quoted from Study 1):
- 1) AI research and development is an increasingly complex and resource-intensive endeavor, in which scale is an important advantage.
- 2) International cooperation based on commonly agreed democratic principles for responsible AI can help focus on responsible AI development and build trust.
- 3) When it comes to regulation, divergent approaches can create barriers to innovation and diffusion.
- 4) Aligning key aspects of AI regulation can enable specialized firms in AI development to thrive.
- 5) Enhanced cooperation in trade is essential to avoid unjustified restrictions on the flow of goods and data, which would substantially reduce the prospective benefits of AI diffusion.
- 6) Enhanced cooperation is needed to tap the potential of AI solutions to address global challenges.
- 7) Cooperation among like-minded countries is important to reaffirm key principles of openness and protection of democracy, freedom of expression, and other human rights.
Recall that I had earlier mentioned that there are AI-related international efforts already underway and I cited for example the UNESCO set of AI Ethics guidelines. There are many such global activities related to AI that are taking place.
As sketched out in Study 1, included are:
- “At the same time, the work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development” (Study 1).
You might be thinking that the whole question of whether to carry out AI global multilateral arrangements has therefore been decided. It is happening, you might exhort.
Problem solved.
Move on.
Whoa, hold your horses.
We have only scratched the surface of the myriad AI issues and considerations that need to be dealt with. Furthermore, none of the existing efforts rise to the magnitude and scale of an International Space Station or a CERN. They are relatively small in size, tend to be somewhat fragmented from each other, and lack a cohesive, comprehensive, fully robust, and well-funded capacity.
I want to emphasize that they are laudable efforts that typically persist on shoestring budgets and get little if any limelight or global appreciation for what they are doing. It can be a behind-the-scenes tireless endeavor that is like pushing a mighty boulder up a mountain. I serve on various such committees and standards bodies in AI, and I can firsthand report to you that the volunteers and participants are performing a Sisyphean task.
Kudos to them all.
In Study 1, there were fifteen recommendations made to try and direct attention toward formulating large-scale comprehensive global multilateral initiatives associated with AI (quoting from Study 1):
- R1. Commit to considering international cooperation in drafting and implementing national AI policies.
- R2. Refine a common approach to responsible AI development.
- R3. Agree on a common, technology-neutral definition of AI systems.
- R4. Agree on the contours of a risk-based approach.
- R5. Establish “redlines” in developing and deploying AI.
- R6. Strengthen sectoral cooperation, starting with more developed policy domains.
- R7. Create a joint platform for regulatory learning and experiments.
- R8. Step up cooperation and exchange of practices on the use of AI in government.
- R9. Step up cooperation on accountability.
- R10. Assess the impact of AI on international data governance.
- R11. Adopt a stepwise, inclusive approach to international AI standardization.
- R12. Develop a coordinated approach to AI standards development that encourages Chinese participation consistent with an industry-led, research-driven approach.
- R13. Expand trade rules for AI standards.
- R14. Increase funding for participation in SDOs.
- R15. Develop common criteria and governance arrangements for international large-scale R&D projects.
I don’t have the space available in this particular column posting to walk you through all of the stated recommendations (they are also elucidated within the Study 1 report). I would instead point you to other column postings of mine that have covered many of those recommendations, plus ruminations on other considerations too.
For example, you might find informative my piece entitled “AI Ethics And The Geopolitical Wrestling Match Over Who Will Win The Race To Attain True AI” (Lance Eliot, Forbes, August 16, 2022), see the link here. The article discusses the frenetic race underway to attain so-called true AI, nowadays referred to as Artificial General Intelligence or AGI, and covers these points:
- If this is a race, the AGI finish line seems quite ill-defined
- The AGI race might go to a person, an entity, or a nation
- Metrics and how nations are being compared in the AGI race
- Geopolitical maneuvering and alignment for the AGI race
- International AI laws and AI Ethics as referees in the AGI race
Another page-turner that you might find engaging is my piece entitled “AI Ethics And The Looming Political Potency Of AI As A Maker Or Breaker Of Which Nations Are Geopolitical Powerhouses” (Lance Eliot, Forbes, August 22, 2022), see the link here. This article pursues the contention that AI is going to be or already has entered into the rarified air of crucial capacities that nations need to wield global clout, thus the usual seven factors deserve to be expanded to eight (adding AI):
- 1) Social and Health Issues
- 2) Domestic Politics
- 3) Economics
- 4) Environment
- 5) Science and Human Potential
- 6) Military and Security Issues
- 7) International Diplomacy
- 8) Artificial Intelligence (newest addition, proposed)
Conclusion
Not everyone believes that AI needs to be fed through a global multilateral gauntlet. We should give the skeptical or sad-faced viewpoints a moment to express their grave suspicions and troubling misgivings.
Here you go.
Some countries are dubious that such global cooperatives are worthwhile. You might as well toss your precious gold and monies into an abyss. The odds of getting anything substantive from these global bailiwicks are perceived as relatively negligible. Pour the same investment into your own country and let others foolishly expend their coinage on those international boondoggles.
Another concern often raised in the AI realm is the desire to be there first. If you get mired in these joint efforts, it might be the case that everyone shares equally in AI breakthroughs. This can be galling if you believe that your country was otherwise going to get there first. The notion is that countries can wield potentially greater power and wealth if they are able to achieve advanced AI before other nations do so.
The qualms are seemingly endless.
Some skeptics suggest that these are merely global bureaucratic arrangements. The bureaucrats come out looking good. Meanwhile, only a fraction of each dollar spent goes to actual AI research and advancement. If you want to see your national bucks produce the most bang for the buck, stay out of these international money pits.
You can also sense sometimes a flavor of conspiracy theories enmeshed in this too. For example, suppose the other nations in the global cooperative opt to steal your AI advances. All that expenditure and pride in your national AI endeavors is simply handed over to those that didn’t do the hard work to get there. They bought into the pot of gold via a pittance. Outrageous.
Leaders in a nation might also be concerned about a blowback within their country. Why in the heck did you allow the country to get embroiled in an AI effort that dilutes and gives away your nation’s own AI secrets and discoveries? Off with your head. No one wants that kind of reaction. As such, stepping into something like this has got to be weighed in terms of personal leadership tenure and support within your own nation-state.
Yikes, you might be wondering, how do any of these global multilateral arrangements ever see the light of day?
All in all, in the AI field, there is an overall sense that we should be avidly pursuing these global multilateral arrangements. It is abundantly challenging. Some nations want this, while others do not. A given nation might change its mind. Throughout the lengthy timeframes involved, a nation can drop out for any variety of reasons. Other countries might decide to jump in. Ones already seemingly on board might change their minds as to what they want to see occur.
As the old saying goes, it can be like herding cats.
I dare suggest that those steeped in AI Ethics and AI Law would vehemently support such initiatives (on the balance, all else being equal). Of course, it has to be done in the right way and with the right conditions involved. Having an AI-oriented global multilateral arrangement simply for the sake of having one will be unlikely to garner much lasting support. Plus, the odds of producing tangible, everlasting results would lamentably and undoubtedly be low.
Some final remarks for now.
At the start of today’s column, I asked you whether you were a global optimist or pessimist when it comes to multilateral arrangements. In the specific context of AI, how do you feel about it?
Oscar Wilde famously said that the basis for optimism is sheer terror. If you believe that AI is altogether an existential risk and that humankind will live or die based on where AI proceeds, you probably are leaning toward optimism that global multilateral arrangements concerning AI are a good and meritorious gambit.
There is a bit of carrot and stick involved in calculating whether to seek and participate in an AI cooperative initiative. Join because you believe that AI is going to be a tremendous advantage for humanity and reveal amazing and heretofore mystifying truths about cognition, the mind, and sentience. That’s a carrot. For the stick, join because you fear that AI will go beyond our control otherwise and we will be managed and led by AI overlords, serving at their AI whims and potentially perishing as they see fit.
Pick one or both.
And for those of you sincerely trying to get AI large-scale global multilateral arrangements established and off the ground, you might relish a saying by the great inventor Thomas Edison: “Our greatest weakness lies in giving up. The most certain way to succeed is always to try just one more time.”
Let’s do our best.
Source: https://www.forbes.com/sites/lanceeliot/2023/04/15/ai-aspiring-toward-all-in-global-geopolitical-multilateral-unity-arrangement-for-sake-of-humanity-says-ai-ethics-and-ai-law/