Nations Trading Their AI As Geopolitical Bargaining Chips Raises Angst For AI Ethics And AI Law

Share and share alike.

There’s plenty to go around for everyone.

Some believe that those longstanding pearls of wisdom apply to Artificial Intelligence (AI). You see, some nations are further along in their AI advancement than others. The worry is that this will produce AI haves versus AI have-nots. Perhaps the proper or civil thing to do is to make sure that all nations get an equal share of the AI pie, as it were.

Wait just a darned second, some angrily retort: consider the famous line that to the victor go the spoils.

If a given nation invests deeply in advancing AI, it would seem proper and eminently fair that it would benefit more than other nations that don’t make the same investment. Do you honestly believe that other nations should ride free on the coattails of the nations that make AI a top priority? Remember the classic tale of The Little Red Hen, whereby the hen made the bread all alone and kept asking the other barnyard animals to help, but they didn’t, and in the end, the scrumptious freshly baked bread went to the hen while the others missed out.

No freeloaders in this world.

As such, share and share alike doesn’t make sense if the sharing nations aren’t all equally sharing in the crafting of the deed itself. Look at it this way. If each nation did in fact do its share of AI advancement, each would have something to trade with every other nation. The result would be a trader’s paradise. I trade you my AI, you trade me your AI. It would be as though the hen and the rest of the barnyard animals each made some bread and opted at the end to give each other a smidgeon of what they each made.

This brings up a topic that not many are yet delving into.

I’m talking about the global geopolitical trading of AI across and amongst nations.

A mouthful.

You betcha, and it’s all because AI is ripe for being globally traded.

Please realize that not all AI is the same. There is AI for playing games. There is AI for healthcare and medical uses. There is AI for financial analyses and doing monetary calculations. There is AI for farming and agricultural purposes. I can go on and on. All manner of AI exists and is also being further programmed and devised. Today’s known or explored uses for AI have only scratched the surface. A sure bet is that AI is going to keep being extended and advanced. AI will inevitably and inexorably be in every corner of the planet. To say that AI will be ubiquitous is not especially an overstatement.

Nations are gearing up for the advent of widespread AI.

I’ve previously discussed that there is an ongoing and at times aggressive AI Race going on between nations as to which nation will have the best or most advanced AI over all the others – see “AI Ethics And The Geopolitical Wrestling Match Over Who Will Win The Race To Attain True AI” at the link here (Lance Eliot, Forbes, August 15, 2022).

In addition, I’ve pointed out that there is a lot of potential political power that can arise in a nation as a result of its holding onto or hoarding the latest in AI advances – see “AI Ethics And The Looming Political Potency Of AI As A Maker Or Breaker Of Which Nations Are Geopolitical Powerhouses” at the link here (Lance Eliot, Forbes, August 22, 2022).

All of these AI machinations carry very significant AI Ethics and AI Law ramifications. We want AI to abide by various Ethical AI precepts or “soft laws” as to how AI is composed and used. Meanwhile, slowly but surely, on-the-books laws and regulations about AI are being debated and put into place. AI Law is going to be a tremendous tool in trying to deal with AI and where we as a society go with AI. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

Today’s column is going to be about a related challenge and opportunity, namely the trading of AI from one nation-state to another nation-state. You can liken this to horse trading, though instead of horses we’ve got advanced AI and the rather daunting chances of either aiding the plight of humankind or potentially ruining the future of humanity (known as dual-use AI, see my analysis at the link here).

That’s one heck of a horse-trading conundrum.

AI is pretty big stuff these days. You’ve certainly read or heard that AI is a kind of existential risk, see my coverage on this at the link here. There is a chance that AI gone wild will put us into quite an untoward bind. Furthermore, there are worries that if we are somehow able to attain sentient AI, which we decidedly have not as yet, the sentient AI might be as smart or even smarter than humans. The super smart AI might figure out a way to become our overlord. We could become enslaved to AI. One supposes that AI might be able to wipe us out if it wanted to do so.

The gist is that if one nation hands over to another nation the keys to the kingdom of AI, we can’t be sure what the other nation will do with it. Maybe it sets the AI free and allows it to progress and destroy all of the earth. Maybe it tries to cage the AI and keep it from being an evildoer. The range of possibilities is endless. The outcomes range from good to bad, including horribly and devastatingly bad.

It is tempting to assert that each nation should hold AI closely to its own chest.

Perhaps each nation would be wisest to bottle up its AI. That’s somewhat the predicament underlying the crazed race toward AI by nation-states. A nation-state might believe the astute thing to do is figure out AI and harness the AI to the bidding of that nation. Imagine the kind of political power that a nation-state can garner by being ahead of everyone else on AI. This is akin to a nuclear armaments race, though the problem and difference is that AI is a lot slipperier.

Getting AI across a border and into another nation is not especially a big deal. We have the Internet as an electronic connection that generally is accessible in most nations. You can sneak your AI into another land, doing so while sitting in your bedroom wearing your pajamas. No large trucks or heavy shipping crates. Just push a button to electronically transmit the AI that you want to share with another country.

One and done.

Plus, you can’t readily change your mind and take the AI back. It is usually feasible to make a copy of the AI. In that case, when the originating nation insists that the AI be given back, the recipient can send over a copy and proclaim that the AI has been returned. Meanwhile, a zillion copies exist and can be used to the recipient’s heart’s desire.

There are ways to encrypt AI. There are ways to include passwords. You can sneakily insert into the AI some kind of backdoor so that you can perhaps later use it to disable the AI. I mention this because some of you might be exclaiming right now that the AI could be made so that it can be turned off once it has been handed over, if need be. The thing is, there are many ways to circumvent or undercut those precautions, entailing an exhausting and costly cat-and-mouse game of cybersecurity.
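To make this tangible, here is a minimal sketch in Python of the kind of remote kill-switch that a providing nation might embed in an AI system before handing it over. Everything here is a hypothetical assumption for illustration, including the authorization URL and the phone-home scheme; it is not a depiction of any real system.

```python
import urllib.request

# Hypothetical kill switch: before running, the AI system phones home to the
# providing nation's server and asks whether it is still authorized to operate.
# The URL and response protocol are illustrative assumptions, not a real API.
AUTHORIZATION_URL = "https://example.org/ai-authorization-status"


def is_still_authorized() -> bool:
    """Return True only if the provider's server says the AI may keep running."""
    try:
        with urllib.request.urlopen(AUTHORIZATION_URL, timeout=5) as response:
            return response.read().decode("utf-8").strip() == "ENABLED"
    except OSError:
        # If the provider's server is unreachable, fail closed and refuse to run.
        return False


def run_ai_system() -> None:
    if not is_still_authorized():
        raise SystemExit("AI system disabled by the providing nation.")
    # ... the actual AI workload would run here ...
```

The catch, as noted, is that a recipient who possesses the code can simply delete the check, spoof the authorization server, or run a copied version that never phones home, which is precisely why such precautions devolve into a cat-and-mouse game.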

By and large, you have to accept the notion that once the AI is given to another nation, there is a good chance you’ll never get it back. There is also a good chance that they can keep using it, despite your desire as a nation to have them stop doing so.

Of course, a nation could use all manner of other heavy-handed geopolitical pressures to get another nation to stop using an AI system. Threats can be made of a military nature or an economic nature. Negotiations and other nation-state bargaining can take place.

Here’s a twist that you might not have been thinking about.

If a nation provides AI to another preferred nation, has the cat been let out of the bag?

The concern is that a perceived allied nation might inadvertently let the AI be shared with another nation that is not on the preferred list. Besides doing so inadvertently, the allied nation might intentionally hand the AI to a non-preferred nation. Why in the heck would an allied nation do this kind of “backstabbing,” giving the precious AI to a nation that was not on the preferred list?

Bargaining chips.

AI can be a quite useful bargaining chip. A small nation that wants to seem big could trade AI for something else. Need more oil? Trade your AI. Need food and supplies? Trade your AI. Want to just get into a favored nation status with some other nation? Tempt the nation by proffering some juicy AI that they don’t already otherwise have available.

AI is coinage. AI is like gold bars. AI is a resource to be traded back and forth. The beauty is that it is electronic and can be done without anyone especially noticing. If a nation puts crates of gold onto a ship or hefty plane, someone is bound to notice. Transmitting an AI system to another nation is done in the dark and without an easy way to trace it.

Let’s get back to the haves and have-nots.

A nation that doesn’t have AI might be categorized as AI-starved. There is usually a sizable cost associated with devising AI, including labor costs, which can be pricey, along with the use of vast computer servers. Some nations have this, but many do not. Their limited resources are going toward more basic survival purposes.

Some believe that the United Nations will eventually have to weigh in on balancing out AI across the globe. The UN has already been engaged in considering AI Ethics and AI Laws, which I’ll mention more about in a moment. It could be that the United Nations becomes a kind of AI clearinghouse regarding the trading and sharing of AI.

Preposterous, some insist.

Nations can do with AI as they wish, the fervent argument goes. An intermediary is unnecessary and bothersome. Any nation that wants to trade any particular AI with any other nation ought to be freely able to do so. The open and unfettered marketplace of the world should determine where AI goes.

A counterargument is that if AI has the potential for undermining humanity, wouldn’t it just make plain decent sense to put some form of control on who gets that AI? Only responsible nations should have certain kinds of AI. By setting up a global clearinghouse run by a presumed neutral third party, perhaps we can keep AI from getting into the wrong hands.

The porous aspects of being able to move AI around from nation to nation make such arguments a bit harder to accept. It could be that an AI nation-to-nation clearinghouse would be nothing more than a bottleneck, a bureaucracy that would putatively inhibit the use of AI that is sorely needed, while not doing much about preventing or mitigating AI that’s bad.

Round and round these heady global debates about AI are bound to go.

Take a moment to noodle on these three rather striking questions:

  • What should we be doing about the global trading of AI by nations to other nations?
  • Do AI Ethics and AI Law have any value to add to this vexing matter?
  • What kinds of AI trading arrangements might be envisioned (now and in the future)?

I’m glad that you asked.

Before diving deeply into the topic, I’d like to first lay some essential foundation about AI and particularly AI Ethics and AI Law, doing so to make sure that the discussion will be contextually sensible.

The Rising Awareness Of Ethical AI And Also AI Law

The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory, making computational choices imbued with undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.

I want to make abundantly sure that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use them when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans who have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.
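As a toy illustration of that dynamic, here is a minimal sketch in Python (assuming scikit-learn is installed; the feature names, data, and decisions are entirely made up) showing how a classifier trained on biased historical decisions simply reproduces the bias on new cases.

```python
from sklearn.tree import DecisionTreeClassifier

# Made-up historical loan decisions: each row is [income_bracket, group_membership].
# In this fabricated history, the human decision-makers approved group 0 and denied
# group 1 regardless of income -- a baked-in bias hiding in the "old" data.
historical_features = [
    [3, 0], [2, 0], [1, 0],
    [3, 1], [2, 1], [1, 1],
]
historical_decisions = [1, 1, 1, 0, 0, 0]  # 1 = approved, 0 = denied

model = DecisionTreeClassifier(random_state=0)
model.fit(historical_features, historical_decisions)

# Two new applicants with identical incomes but differing group membership.
new_applicants = [[3, 0], [3, 1]]
print(model.predict(new_applicants))  # -> [1 0]: the historical bias is mimicked
```

The pattern matching is doing exactly what it was asked to do, namely mimicking the historical data, which is precisely how the submerged bias rides along.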

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing, there will still be biases embedded within the pattern-matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of the AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized previously herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

I also recently examined the AI Bill of Rights, the moniker commonly used for the official U.S. government document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” which was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.

In the AI Bill of Rights, there are five keystone categories:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback

I’ve carefully reviewed those precepts, see the link here.

Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the heady topic of exploring the global nation-state trading of AI.

Boon In Trading AI For Trading Nations

Let’s revisit my earlier postulated questions on this topic:

  • What should we be doing about the global trading of AI by nations to other nations?
  • Do AI Ethics and AI Law have any value to add to this vexing matter?
  • What kinds of AI trading arrangements might be envisioned (now and in the future)?

The first question is usually answered in a few crisp words.

The question posed: What should we be doing about the global trading of AI by nations to other nations?

The pithy answer by some: Nothing at all.

That’s right — don’t do a darned thing. It isn’t anyone’s business to be poking into the AI trading among nations. Nations do as they want to do. Let them be.

In fact, it is sometimes argued that if there is an attempt to clamp down on the nation-state trading of AI that doing so will hamper AI all told. You will be discouraging new innovations in AI. The viewpoint is that AI has to be kept free of constraints right now. We are in a stage of AI development that can only get us to true Artificial General Intelligence (AGI) if we let all hands participate.

I’m sure that you know the counterarguments. We are playing with fire. AI is the fire that could burn down all our houses. You need to put in place AI Ethics to try and keep AI from becoming a destructive maelstrom. The same goes for putting in place AI Laws. AI has a grand potential for grand harm. The horse is already kind of poking outside the barn, don’t let it get the rest of the way out.

It could be that we might establish a requirement that any nation trading AI has to showcase that they have established national AI Ethics policies. The AI Ethics policies have to be firm and shown to be followed (no hollow efforts). Likewise, any nation trading AI has to establish applicable AI Laws, plus the laws have to be enforced (else it is fakery).

When a nation wishes to trade AI with another nation, the requirements of AI Ethics and AI Laws have to be first met and approved. A neutral third party such as under the auspices of the UN might serve in this capacity. The hope is that this will prevent or at least reduce the outsized risks of AI being traded and used in destructive ways.

One notable big-time problem looms over this notion.

It seems highly unlikely that all nations would agree to such an arrangement. As such, the “rogue” nations that don’t partake will be able to willy-nilly trade AI as they see fit. All the other nations are trying to do the right thing. Unfortunately, while they are mired in doing the right thing, those outlier nations are wantonly doing as they wish.

Scofflaws, scoundrels, malcontents.

Or they might see themselves as rightful, agile, and heroic in their AI trading practices and arrangements.

Speaking of AI trading arrangements, here are ten of the cornerstone approaches that I’ve identified on this topic (a small illustrative sketch in code follows the list):

  • The AI Giveaway: Nation gives its AI to another nation for nothing in return (hard to imagine that nothing is expected, a rarity, or non-existent)
  • The AI Straight Ahead Trade: Nation gives its AI to another nation in trade for some AI that the other nation has (a straight AI-for-AI trade)
  • The Leveraged AI Trade: Nation gives its AI to another nation in trade for something else of a non-AI nature (e.g., oil, grain, food, cars, weapons, and so on)
  • The Developing Nation AI Trade: Nation gives its AI to a developing nation as a means of boosting or aiding the developing nation
  • The Over-The-Top AI Trade: Nation gives its AI to another nation but botches the trade by handing over crown jewels of AI for a pittance in return
  • The AI Trade Swindle: Nation gives its AI to another nation for some kind of traded aspect but unbeknownst to the other nation the AI is a dud (it’s a swindle)
  • The Multifaceted AI Trade: Nation gives its AI to another nation for a collective and complicated bunch of actual and expected traded items in return
  • The Multinational Dominos AI Trade: Nation gives its AI to another nation, while the receiving nation trades AI to some other nation, and that nation provides something as a return to the nation that began the sequence (a complex interconnected set or series of trades)
  • The Trojan Horse AI Trade: Nation gives its AI to another nation, though the AI contains some kind of trojan horse such that the giving nation will have some leverage or skullduggery over the receiving nation
  • Other AI Trading Practices
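To keep those arrangements straight, here is a small illustrative sketch in Python that models the taxonomy as a simple data structure. The class names and fields are my own hypothetical labels for the arrangements listed above, not any established standard.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AITradeType(Enum):
    """Enumeration of the cornerstone AI trading arrangements described above."""
    GIVEAWAY = auto()
    STRAIGHT_AHEAD = auto()
    LEVERAGED = auto()
    DEVELOPING_NATION = auto()
    OVER_THE_TOP = auto()
    SWINDLE = auto()
    MULTIFACETED = auto()
    MULTINATIONAL_DOMINOS = auto()
    TROJAN_HORSE = auto()
    OTHER = auto()


@dataclass
class AITrade:
    giving_nation: str
    receiving_nation: str
    trade_type: AITradeType
    consideration: str  # what the giver receives in return (oil, grain, another AI, nothing)


# A hypothetical leveraged trade: AI handed over in exchange for oil shipments.
example = AITrade("Nation A", "Nation B", AITradeType.LEVERAGED, "oil shipments")
print(example)
```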

Mull over those types of AI trading arrangements.

I’m sure that we’ll be seeing those AI trading practices in action, though they might be kept under wraps such that the public at large won’t know that these shenanigans are taking place.

Conclusion

I think we can nearly all agree that we ought to be doing something about nation-to-nation trading of AI.

The thorny question is what we can or will do.

Some say that we should be easygoing. Others exclaim that we need to be strongarming AI trading by nations. Yet another perspective is that maybe we should only be focused on AI of a certain kind when deliberating on trading practices. For example, the EU AI Act postulates various risk levels of AI, see my coverage at the link here. The nation-state AI trading might be said to apply to only the highest levels of AI risk.

Maybe just doing handwringing is all that we’ll do for the moment. In any case, keep my list of AI trading arrangements handy. You’ll need the list once AI trading by nations becomes a hot topic (it will, mark my words).

I hope it is obvious that AI Ethics and AI Law are integral to this entire topic. Those that are doing serious and sobering work on Ethical AI and AI Laws can substantively aid in figuring out the nation-state AI trading conundrum. The same kinds of skills for examining societal considerations when it comes to AI are indubitably applicable in this particular use case.

A final comment for now.

Horse traders have a language all their own.

If a trader tells you that a horse is notably thoughtful and doesn’t let much get past them, you might assume this means that the horse is especially attentive and keenly aware of its surroundings. Seems ideal. On the other hand, a savvier school-of-hard-knocks interpretation is that the horse is probably skittish and reacts to the littlest distraction. Imagine trying to ride such a horse as it constantly gets sidetracked by a standing cow or a hawk lazily flying overhead.

Another fast-talking quip that a trader might try is that a horse is relatively tame and rarely bucks. Of course, this could be a subtle telltale clue that the horse is known to excitedly buck, perhaps at the worst of times. You might be riding along and your steed unexpectedly bucks you off when you are in the midst of a cactus patch or trotting through a quick-moving stream.

One of the most famous of all sayings about horses is that you shouldn’t look a gift horse in the mouth.

I bring this up to entangle the saying into the realm of Artificial Intelligence. If we get an offer from another nation to trade AI with them, and if they tell us that their AI is ideal and rarely bucks, please make sure that we take a close look at the mouth of that AI beast. It could be AI that we assuredly do not want.

Alternatively, it could be AI that decides it assuredly does not want us (a dour result, if you get my drift).

Source: https://www.forbes.com/sites/lanceeliot/2022/12/09/nations-trading-their-ai-as-geopolitical-bargaining-chips-raises-angst-for-ai-ethics-and-ai-law/