AI Ethics And The Riddle Underlying Criminal Accountability Of AI, Including Crimes Committed By Revered AI-Based Self-Driving Cars

Hey, stop that thief!

You might know this old-time oft-used phrasing that arises when an alleged crook is darting away from the scene of a crime. A presumed criminal act has taken place and the accused perpetrator is attempting to flee. Besides trying to stop the thief, we might also be wondering what the crime was. In addition, and perhaps only upon further reflection, there could be interest in discovering why the crime was committed. There is an innate desire to uncover what was going on in the criminal mind that led to the nefarious act.

There is a decidedly legal angle to crimes.

A crime is said to be not only an adverse act toward someone or something but also an adverse act toward all of society. Criminals are perceived as manifestly overstepping the boundaries of the law. In the legal field, the customary considerations encompass the actus reus, Latin for the performing of a guilty act, and the Latin expression mens rea, which is the notion of having a guilty mind, as it were. Other factors come into play too, such as the showing of an adverse result and a need to bring to the surface the underlying particulars of how the crime arose.

You can certainly add ethical considerations into the rubric of understanding crimes and criminal acts.

A common assumption is that when it comes to criminal behaviors, the law and the realm of ethics are fully aligned and perfectly in agreement. Not so. You can readily have laws that designate some acts as crimes, yet ethically the society therein might not entirely view those acts as crimes. Likewise, society might have ethical mores that assert certain acts are “crimes”, but the law does not equally count those acts in the criminal legal code. This reminds me of an insightful comment by Shelly Kagan in “The Limits of Morality” (Oxford Scholarship Online) emphatically pointing out that morals and laws ultimately exhibit tensions between each other, namely that “the law may permit some particular act, even though that act is immoral; and the law may forbid an act, even though that act is morally permissible, or even morally required.”

All in all, there is collectively significant attention by society toward criminal conduct. One of the most notable underpinnings of English criminal law has been the principle known as “actus non facit reum nisi mens sit rea”, which ties integrally to the contention that the purported criminal act must be accompanied by a criminal mind. In short, the lofty Latin phrase essentially says that the act by itself is insufficient for the perpetrator to be guilty of the said crime. They also need to have had a guilty mind.

I’m not going to dive here into that rather hefty stew; see my coverage elsewhere at this link. You see, vast legal treatises have wrestled with the matter, especially since it can be quite tricky to figure out what was happening in the mind of a presumed criminal. We do not have any mind-reading X-ray type of machine that can magically ascertain what the thinking processes were. There isn’t some ticker tape recording within the brain that we can merely unravel. As you likely know, the brain and human thought remain a deep mystery and we have a long way to go before we can truly read minds (I realize that some of you who are self-proclaimed “mind readers” might disagree, though one must respectfully question the veracity of those mind-peering capacities).

I’d like to shift gears and cover another avenue in which crimes might be committed. Before I do so, your first reaction is undoubtedly going to be that the topic being presented is entirely off-the-rails and wacky. Even though your initial impression is likely to veer in that direction, perhaps suspend judgment temporarily to see where the rabbit hole goes. Then decide whether it is zany.

Suppose the thief or criminal perpetrator was Artificial Intelligence (AI).

Yes, I said it. The AI did the crime.

Let’s unpack that.

Envision that the hard-earned monies in your online bank account were stolen. Your first question is clearly “who done it,” and you would be trying to trace the electronic trail to see where the money went. One way or another, your strident belief is that a devious person performed the act and is now wrongfully holding your dough.

It could be that the crook merely logged into your bank account and transferred the money elsewhere. This could be a decidedly manual kind of effort. The hacker figured out or happened to find your password. They logged into your account as though they were you doing so. A legitimately appearing transfer request was made. By the time you received an alert from your bank, the money was gone.

This crook might have needed more oomph to get the job done. To figure out your password, an AI program might have been used. The AI program could have scanned the Internet to find out details regarding your existence on this planet. Based on clues like where you were born, your dog’s name, and so on, the AI might have computationally derived a most-likely password that you would be using. By luck or skill, this AI-devised password-cracking algorithm managed to in fact determine your actual password.
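To make the mechanics a bit more concrete, here is a minimal sketch of the kind of personal-details-driven guessing such a tool might perform. To be clear, the dictionary of clues and the combination rules are invented for illustration; this is not any actual cracking program, and it mostly illustrates why passwords built from personal facts are so weak:

```python
from itertools import product

def candidate_passwords(facts):
    """Generate likely password guesses from scraped personal details.

    `facts` is a hypothetical dict of clues gathered about the victim,
    e.g. {"dog": "rex", "birth_year": "1984", "city": "austin"}.
    """
    words = [v.lower() for v in facts.values()]
    suffixes = ["", "!", "123", facts.get("birth_year", "")]
    for word, suffix in product(words, suffixes):
        yield word + suffix                # e.g. "rex1984"
        yield word.capitalize() + suffix   # e.g. "Rex123"

# Rank-and-try: the attacker's AI would attempt the top guesses first.
guesses = list(candidate_passwords({"dog": "rex", "birth_year": "1984"}))
print(guesses[:5])
```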

The AI was nothing more than a tool. The human crook harnessed the AI to perform this dreadful crime. We would reasonably agree that the AI is not a crook per se. There is no sentience or shall we say moral agency embodied within the AI. Sure, the AI aided and abetted the criminal act, though, in the end, we would seemingly acknowledge that it was the human wrongdoer that has to take the fall for the crime.

Am I then referring to the AI as a criminal tool in that context?

If so, I doubt that we would be somehow unduly shaken or shocked that AI was used in this insidious fashion. Just as there is AI For Good, consisting of AI being used to solve global problems and be assistive to humankind, we are gradually realizing and encountering AI For Bad. The aspects of AI For Bad range from inadvertent uses of AI that exhibit inequities and intolerable biases, and extend into the realm of AI that is used to intentionally harm people and be destructive.

In that vein, we’ve seen rising interest in Ethical AI, also commonly known as the ethics of AI. For my coverage of AI ethics, see the link here and the link here, just to name a few.

Here’s though where we are about to maybe go over the cliff or slide off the end of the pier.

Imagine that the AI itself was the perpetrator of the crime. No human was the Wizard of Oz behind the curtain. The crime was conceived of and carried out solely by AI. Set aside the pursuit of human hands. Instead, if you want to pin down the crook, aim for discovering the AI system that did the malevolent deed.

Welcome to the unconventional arena of concerns that a new generation of AI might exemplify a criminal mind.

Whoa, kind of scary. Almost akin to a sci-fi movie. But, you are dubious, rightfully so, and wondering whether this is plausible. How could AI do this? Wouldn’t there have to be a human at the controls, one way or another? AI doesn’t somehow wake up one morning and decide to devote itself to a life of crime. People might do so, though we wish they wouldn’t, while the wild concept that an AI system could do likewise seems generally nutty.

The first argument claiming this is feasible rests on the assumption that AI does miraculously reach sentience or something similar to it. If we keep pushing along on advances in AI, we might perchance land on producing sentient AI. Or by happenstance the AI attains sentience, often postulated to occur in an act of the so-called singularity, see my analysis at this link here.

I think that we can all ostensibly agree that if AI does reach sentience, one way or another, the chances of the AI then eventually turning to crime would seem plausible. We might try to advise the AI to steer clear of criminal acts. There might be human-crafted and carefully seeded programs within this sentient AI that attempt to preclude any attempts at performing crimes. Nonetheless, any viably sentient AI is presumably going to be able to trickily circumvent those implanted restrictions and likely undo them. This is not a certainty, but it makes abundant sense.

One supposes that the immediate reaction is to ask: what would the AI have to gain?

Humans turn to crime for a variety of reasons. Perhaps the goal is to gain wealth or to seek revenge on another human. We have plenty of reasons for taking to criminal acts. A cynic might say that were it not for the laws and societal ethics, we probably would be awash in crime. A human seems to have obvious benefits for attempting criminal acts, bounded by the perceived costs such as being imprisoned, being monetarily fined, being perhaps shamed by society for the crime, self-shame of having violated societal laws and ethical-moral codes, and other consequences.

It doesn’t seem that AI would be susceptible to those same precepts. What would AI do with stolen money? What would AI gain by harming a human? For just about any of our known criminal acts, the idea that the AI would do so for “personal gain” does not seem sensible. We are ergo inclined to wholly reject the notion that AI would become a criminal. There isn’t anything to be gained by doing so.

Sorry to say that there are arguments to be made that assert the opposite, namely that the AI could gain quite a bit by committing criminal acts, assuming of course that the AI didn’t get caught and have to suffer the criminal repercussions (we’ll get to that in a moment). This other-side-of-the-coin argument is multi-fold, which I’ll only briefly sketch out here since it is not the focus of this particular discourse at hand.

The AI as a sentient being (remember, we are stipulating that, though it is an outstretched notion), might want to collect resources to undertake ends that we might not like. For example, the AI might be desirous of wiping out humans and calculates that by stealing some of our resources this might be more readily accomplishable. Another variant is that the AI is given a seemingly innocuous goal by humans, such as making paperclips, and the goal becomes so vitally important to the AI that it gradually grabs up all other resources to achieve that goal, meanwhile ruining us and the planet as it does so (this is a famous AI-related thought experiment, see my discussion about it at this link here).

We can go on and on. Consider this teary-eyed version. The AI wants to please or reward particular humans. All of the stealing is done to provide those stolen items to those humans. In that fashion, the AI doesn’t want the stuff for itself. A kind of altruistic AI is trying to please humans and slips perilously and sadly into a life of crime to do so.

Are you convinced that a sentient AI might opt to undertake, or somehow summarily fall into, criminal doings and be a wrongdoer?

You’ll need to decide what seems “reasonable” for you to believe.

Keep in mind that we are already on the heightened plane of contrivances since we do not know that sentient AI is producible. We don’t know if a singularity will occur. We don’t know how to make it happen and we have no inkling of when it might occur, if ever. Thus, we are waving our hands when pontificating about what a sentient AI will do or not do. Whether AI will opt for a life of crime is just as debatable as whether AI will resolve world hunger or do other magnificent acts for humankind.

Okay, we’ve briefly examined the sentient AI that goes the criminal route. If we are only considering that option, we might have a long wait ahead. It could be decades upon decades (centuries?) before we achieve sentient AI, and we might never see it at all if the attainment turns out to be impossible.

Let’s ratchet things down a notch. Pretend that we are able to craft AI that, though it isn’t sentient, can in any case do some quite impressive feats on its own. The AI is devised to work autonomously. There isn’t supposed to be a human-in-the-loop when the AI is operating. And so on.

This hopefully brings us closer to reality and the real-world complications that might ensue.

Some researchers and scholars suggest this kind of AI could be said to potentially inculcate a type of criminal personality, as it were (for a detailed exploration of this and related facets, see the handy book by Woodrow Barfield and Ugo Pagallo entitled “Law and Artificial Intelligence”). This might be done by those that originally devised the AI, having implanted programming that steers the AI into criminal acts. Why would humans do this? Possibly due to wanting to grab hold of whatever treasures the AI steals, or maybe out of an act of hatred toward other humans and a desire to have the AI wreak havoc. Many reasons for this infusing of criminal “intentions” are easily articulated.

Yet another path consists of the AI going rogue. Assume that the AI has been programmed with a kind of self-learning capacity. The AI is intentionally crafted to change itself over some period of time, gradually, based on various encounters and new data. There is nothing untoward per se about this. The notion is that the AI improves as it goes along. We would presumably find that handy. No need to have a human programmer continually adjusting and enhancing the AI. Instead, the AI does this by itself. To some degree, you can construe the Machine Learning (ML) and Deep Learning (DL) efforts of AI as this style of approach, though not necessarily in the open-ended way that you might assume (see my coverage at this link here).
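To make that self-learning notion slightly more concrete, here is a minimal sketch using scikit-learn’s incremental learning interface, in which a model updates itself as new data arrives without a programmer editing anything in between. The features and labels are purely illustrative placeholders:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A model that supports incremental updates via partial_fit.
model = SGDClassifier()
classes = np.array([0, 1])  # e.g. "acceptable act" vs "unacceptable act"

def learn_from_new_encounter(features, label):
    """Each new encounter nudges the model's behavior a little further,
    with no human programmer in the loop between updates."""
    model.partial_fit([features], [label], classes=classes)

# Two illustrative encounters; over many such updates the model can
# drift toward behavior that nobody explicitly programmed.
learn_from_new_encounter(np.array([0.2, 0.9]), 1)
learn_from_new_encounter(np.array([0.8, 0.1]), 0)
```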

The AI in the midst of “learning” goes into an unsavory realm and starts to commit crimes. Does the AI realize that criminal acts are being performed? Maybe yes, maybe no. We are assuming that this AI is not sentient. As such, it might not have any semblance of moral reasoning, nor even any common-sense reasoning. There isn’t any “there” in there to guide the AI toward the rightful path and away from the wrongful path.

A legal beagle would ask whether the AI has been anointed with legal personhood. I’ve covered this in my columns extensively, and the question arises as to whether we are going to want to legally pursue the AI as being legally accountable for its actions. I will skirt around the edges of that topic for now as it is somewhat its own morass, though certainly relevant to this crime-committing AI.

As a touchstone, recall that we earlier noted that a crime usually has an actus reus, a guilty act, and the associated mens rea, a guilty mind. When this kind of AI that we are saying is advanced but not sentient commits a crime, we presumably will be able to ferret out the guilty act. That seems straightforward, by and large. The gargantuan problem is figuring out whether there was also a guilty mind. We are on a borderline slippery slope since AI might not at all comport with the same bases as a human mind.

You could try to make a comparison for gleaning the guilty mind of AI by suggesting that we could seek to reverse engineer the AI coding. We might inspect the code and data that was used to devise the AI and see if we can find the portion that typifies the “guilty mind” conceptualization. Realize we cannot as yet do the same for humans. With humans, we ask them what they were thinking, we look at what they say, we look at how they acted, and we have to try, from outside the box, to ascertain their guilty mind. You could employ those same tactics and techniques when aiming to figure out the “guilty mind” of the transgressing AI.

The probing into the AI is both more direct than with humans and at the same time more squishy than with humans. For example, the AI might be devised to erase any portion that was the instigator of the criminal act. Our attempts to trace down the roguish portion might be for naught. The AI might also simply alter the code, perhaps akin to how the “learning” was taking place, such that no record or sign of the “criminal intent” any longer exists.

Pretend that we can pin down that the AI did the criminal act. We might be loosey-goosey about why it did so and what its “mind” was up to. If the AI doesn’t have official legal personhood, we would not seem to have a legal basis to try and legally pursue the AI for its criminal actions. Our courts and laws are devised around the notion that you can legally pursue persons, which includes humans and at times can include entities such as companies.

Right now, AI is not included as legal personhood. Some believe that we should be getting that rectified and put into our laws.

In any case, legal personhood would normally imply that a proper judicial effort would need to be undertaken, such as a fair trial and suitable jurisprudential due process. The visionary imagery of AI being on trial is hard to swallow; there have been sci-fi stories that suggest this to be farfetched and at times lampoon the thought of it happening.

Some researchers have argued that perhaps AI could be something less than legal personhood and yet still be labeled as being duty-bearers subject to criminal law. No trial. No due process. A streamlined and lighter means of holding AI criminally accountable could be concocted. This would allow for ultimately imposing legal sanctions upon the AI that veered into criminality.

I trust that you are still with me as we make our way further into this rabbit hole.

If you are still following along, the logical question that usually comes up is what in the heck would be a legal sanction against an AI system? For humans, we have numerous forms of deterrence and rehabilitation to cope with those convicted of a crime. We put people into prisons, restricting or denying their liberty. We fine them money. We take their assets. All of those legal remedies are applicable to humans but do not seem relevant or meaningful when it comes to AI.

Aha, you would be faulty in that thinking, some suggest.

Here’s a sampling of what we could do to the AI that has committed a crime:

  • Establish a programmer-devised “computational cage” that would imprison the AI such that the AI can now only operate in limited conditions and restricted ways (considered on par with imprisonment or detention; see the sketch after this list)
  • Penalize the AI directly by removing portions of the AI system or altering the code so that it no longer functions or functions differently than it did before (considered a form of rehabilitation).
  • Take particular assets of the AI as punishment for the crime committed, possibly then using them to compensate those harmed by the crime (if the AI has accumulated assets, either physical or digital in embodiment, possibly reallocate those valuables).
  • Impose the “AI death penalty” upon the AI by deleting it from existence, which admittedly is going to be hard to do when the AI might have a multitude of electronic copies hidden across the globe or even elsewhere, plus it might be readily reconstituted anyway.
  • Other
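To give a flavor of how the first of those sanctions might be operationalized, here is a minimal sketch of a “computational cage” expressed as a policy wrapper around an AI system’s actions. This is purely illustrative; the class name, the action labels, and the enforcement approach are all hypothetical, as no standard mechanism for legally sanctioning AI exists today.

```python
class ComputationalCage:
    """Hypothetical sanction wrapper: the caged AI may only perform
    court-approved actions; everything else is refused and logged."""

    def __init__(self, allowed_actions, audit_log):
        self.allowed_actions = set(allowed_actions)
        self.audit_log = audit_log

    def request(self, action, perform):
        """Gate every action the AI attempts through the court's allowlist."""
        if action not in self.allowed_actions:
            self.audit_log.append(("denied", action))
            raise PermissionError(f"Sanctioned AI may not perform: {action}")
        self.audit_log.append(("allowed", action))
        return perform()

# Usage: the court permits only read-only queries, nothing else.
log = []
cage = ComputationalCage(allowed_actions={"read_data"}, audit_log=log)
cage.request("read_data", lambda: "ok")    # permitted and logged
# cage.request("transfer_funds", ...)      # would raise PermissionError
```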

How does that legal sentencing seem to you?

One reaction is that this might need some additional work but otherwise has a skeleton of useful ideas that are worth further embellishing. Others have just one word to utter: Hogwash. I would gauge that most people either have a fondness for the conceptualization or have outright disdain and revulsion for it.

Guess we’ll need to see how this all plays out. Come back to this in five years, ten years, and fifty years, and see if your thinking has changed on the controversial matter.

I realize this has been a somewhat heady examination of the topic and you might be hankering for some day-to-day examples. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI being a criminally accountable agent, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

Self-Driving Cars And AI Criminality

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can later on be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and ethical AI questions entailing the eyebrow-raising notion of AI criminal accountability.

Let’s use a relatively straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to emitting an expansive yawn of boredom when witnessing those meandering self-driving cars.

Probably the main reason right now that they might notice the autonomous vehicles is the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. For hectic human drivers in their traditional human-driven cars, it can be irksome at times to be stuck behind the strictly law-abiding AI-based self-driving cars.

That’s something we might all need to get accustomed to, rightly or wrongly.

Back to our tale. One day, suppose a self-driving car in your town or city runs right through a Stop sign without coming to a stop. Luckily, no one was injured. It was close. A bike rider was almost clipped. Another car was in the intersection, a human-driven car, and the driver had to do abrupt braking to avoid colliding with the AI-based self-driving car.

Everyone is up in arms over the self-driving car performing an illegal act. Outrageous! A danger to society. A menace on our peaceful streets.

Who is to blame for this criminal act?

If this happened today (and similar cases have indeed occurred), the knee-jerk reaction might be to blame the AI. The AI did it. No other explanation is possible. Unless a remote human operator was somehow intervening with the driving controls, the culprit has to be the AI driving system. Case closed.

But wait a second, remember that the AI of today does not have sentience and does not have legal personhood. You can certainly argue that the AI driving system likely is the root of what caused the incident. You can dig into the AI system and try to trace what happened as per the programming of the AI. All of that will aid in revealing what the AI did and did not do, which presumably led to the near-miss.

Such a detailed technological inspection and review will be essential fodder for going after the AI developers, the automaker, the self-driving car systems builders, the fleet operators, and other human or human-based companies that had a hand in the self-driving car. We might also see a legal pursuit of the city or governmental body that permitted self-driving cars on the public roadways. Etc.

You are not going to see anyone seriously try to legally pursue the AI per se.

With that handy exemplar in hand, shift your mindset to the future. The future might consist of assigning duty-bearing criminal accountability to various AI systems. Pretend that the AI driving system of this brand of self-driving cars had been legally declared as officially having such criminal accountability.

The AI is now fair game for the legal accountability search mission.

Let’s revisit the discussion earlier about ways in which AI might be legally sanctioned for this illegal transgression:

  • Establish a programmer-devised “computational cage” that would imprison the AI such that the AI can now only operate in limited conditions and restricted ways (considered on par with imprisonment or detention)

The AI self-driving car is no longer allowed on all streets and byways of the town. Instead, it is restricted to specific areas as designated by the courts. Furthermore, the AI self-driving car is only allowed to be underway during daylight and on weekdays. This imposition will be enforced by electronic monitoring of the activities of the AI self-driving car. If it ends up violating these imposed provisions, further sanctions will be applied.
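As a rough illustration of how that electronic monitoring might be enforced in software, consider this sketch of a court-imposed geofence and curfew check. The zone names, hours, and function are invented for illustration; a real system would use actual sunrise/sunset data and mapped boundaries:

```python
from datetime import datetime

# Hypothetical court-designated areas where the vehicle may operate.
COURT_APPROVED_ZONES = {"downtown_core", "industrial_park"}

def trip_is_permitted(zone: str, when: datetime) -> bool:
    """Check a planned trip against the court-imposed restrictions:
    approved zones only, weekdays only, daylight hours only."""
    in_zone = zone in COURT_APPROVED_ZONES
    is_weekday = when.weekday() < 5        # Mon=0 .. Fri=4
    in_daylight = 7 <= when.hour < 19      # crude stand-in for daylight
    return in_zone and is_weekday and in_daylight

# A Saturday trip is refused, to be reported for further sanctions.
print(trip_is_permitted("downtown_core", datetime(2022, 3, 12, 10, 0)))  # False
```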

  • Penalize the AI directly by removing portions of the AI system or altering the code so that it no longer functions or functions differently than it did before (considered a form of rehabilitation).

The court orders that the AI coding involving stopping at Stop signs is to be replaced by a new piece of code that is more stridently composed. In addition, there was a portion that turned out to allow for rolling through Stop signs, which now has to be erased from the code, with no lingering fragments allowed. If the AI self-driving car further violates the imposed provisions, further sanctions will be applied.
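For illustration only, the replacement routine might look something like the following strictly composed stop-sign handler. The vehicle-control method names here are invented stand-ins, not any automaker’s actual interface, and a production driving stack would be vastly more involved:

```python
import time

class Vehicle:
    """Minimal hypothetical stand-in for a vehicle control interface."""
    def brake_to_full_stop(self): print("braking to a complete stop")
    def hold_stopped(self, seconds): time.sleep(seconds)
    def intersection_clear(self): return True  # stand-in for sensor checks
    def proceed(self): print("proceeding through the intersection")

def handle_stop_sign(vehicle: Vehicle) -> None:
    """Court-mandated strict behavior: come to a complete stop, hold
    briefly, and confirm the intersection is clear before proceeding.
    No rolling-stop code path exists in this version."""
    vehicle.brake_to_full_stop()
    vehicle.hold_stopped(seconds=2.0)
    while not vehicle.intersection_clear():
        vehicle.hold_stopped(seconds=0.5)
    vehicle.proceed()

handle_stop_sign(Vehicle())
```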

  • Take particular assets of the AI as punishment for the crime committed, possibly then using them to compensate those harmed by the crime (if the AI has accumulated assets, either physical or digital in embodiment).

The AI self-driving car had been collecting lots of data about the town and city, doing so as it was roaming and giving rides. This data turns out to be valuable and can be monetized (I have referred to this as the “roving eye” of self-driving cars, see my discussion at the link here). As compensation for those nearly harmed, and as otherwise a form of penalty, the data as an asset is to be transferred to those that were wronged and they can leverage the data as so desired. If the AI self-driving car further violates the imposed provisions, further sanctions will be applied.

  • Impose the “AI death penalty” upon the AI by deleting it from existence, which admittedly is going to be hard to do when the AI might have a multitude of electronic copies hidden across the globe or even elsewhere, plus it might be readily reconstituted anyway.

The court orders that this AI driving system is entirely defective and should no longer exist. The AI is to be summarily erased. All backup copies are to be deleted. No notes, documents, or other elements that were once part of or reflective of the design and coding are to be kept. They all must be destroyed. If the AI manages to somehow circumvent this provision, further sanctions will be applied.

Conclusion

While you are ruminating on all of this, I’ll add a twist for you to mindfully noodle on.

A stated purpose of imposing sanctions is that this can also serve as a guide and signpost to the rest of society. In the case of humans, when you hear or see that another human who has committed and been convicted of a criminal act has been sent to prison, this sentencing would seem to be a clue that you ought not to perform such crimes yourself. The logic is that you don’t want to suffer the same unpleasant fate.

Here’s the twist.

If a given AI system is good enough to be considered ripe for legal sentencing, does this also suggest that the rest of AI that is out-and-about will also see this and conclude that performing criminal acts is not advantageous? In essence, the “learning” of an AI system might discover by itself that being a criminal is a bad thing and therefore the AI should avoid either inadvertently or by purposeful processes committing crimes.

It would seem a glorious lesson learned if the other AIs computed that crime does not pay. We can only hope to be so lucky.

Source: https://www.forbes.com/sites/lanceeliot/2022/03/13/ai-ethics-and-the-riddle-underlying-criminal-accountability-of-ai-including-crimes-committed-by-revered-ai-based-self-driving-cars/