Pushing The Boundaries Of AI Ethics Into The Topsy-Turvy World Of Radical Ethical AI, Potentially Exemplified Via The Use Case Of Vaunted AI-Based Self-Driving Cars

Has the prevailing tenor and attention of today’s widely emerging semblance of AI Ethics gotten into a veritable rut?

Some seem to decidedly think so.

Let’s unpack this.

You might generally be aware that there has been a rising tide of interest in the ethical ramifications of AI. This is often referred to as either AI Ethics or Ethical AI, two monikers that we’ll treat herein as predominantly equivalent and interchangeable (I suppose some might quibble about that assumption, but I’d like to suggest that we not get distracted by the potential differences, if any, for the purposes of this discussion).

Part of the reason that ethical and moral considerations have risen about AI is due to the gradually mounting spate of so-called AI For Bad. You see, the latest wave of AI was originally perceived as principally proffering AI For Good. This was the idea that AI could aid in solving many of the otherwise intractable problems that computing had heretofore been unable to tackle. We might finally find ourselves leveraging AI to take on many of the world’s toughest issues.

Along that somewhat dreamy journey, the realization has hit home that there is the other side of the coin. That’s the AI For Bad. For example, you might know about the brouhaha over AI-based facial recognition that has at times embodied racial biases and gender biases (see my analysis at this link here). Not good. There are now plenty of clear-cut instances of AI systems that have a plethora of untoward inequities built into their algorithmic decision-making (ADM) functionality.

Some of the dour issues within AI are ethically borderline, while other issues are pretty much beyond any reasonable ethical boundaries. On top of that, you can count on the AI at times acting ostensibly illegally, or even serving as an outright no-holds-barred illegal performer. The two-step of getting hammered by the law and by ethical mores aims to slow down the AI For Bad and prevent an ongoing onslaught of AI that by default might be fully riddled with egregiously immoral and unlawful elements.

I’ve extensively covered the AI Ethics topic in my columns, such as the coverage at this link and also at this link here, just to name a few. One relatively common set of Ethical AI factors consists of paying attention to these characteristics of AI:

  • Fairness
  • Transparency
  • Explainability
  • Privacy
  • Trustworthiness
  • Dignity
  • Beneficence
  • Etc.

Those are the oft-cited social constructs that we ought to be mulling over when it comes to the crafting and fielding of today’s AI. The odds are that if you perchance take a look at an article or blog post that covers AI Ethics, you will find that the theme of the piece is centered on one of those aforementioned factors or characteristics.
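To make one of those factors a bit less abstract, consider how fairness might be rendered measurable. What follows is a minimal sketch, assuming a simple demographic-parity style check; the function, data, and group labels are hypothetical illustrations rather than any standardized Ethical AI yardstick.

```python
# A minimal sketch of quantifying the "fairness" factor via a
# demographic-parity style check. The function, data, and group
# labels are hypothetical illustrations, not a standardized metric.

def demographic_parity_gap(decisions, groups):
    """Return the widest gap in positive-decision rates across groups.

    decisions: 0/1 outcomes emitted by an ADM system
    groups: group labels aligned one-to-one with decisions
    """
    tallies = {}
    for decision, group in zip(decisions, groups):
        positives, total = tallies.get(group, (0, 0))
        tallies[group] = (positives + decision, total + 1)
    rates = [positives / total for positives, total in tallies.values()]
    return max(rates) - min(rates)

# Hypothetical audit of a loan-approval ADM across two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.75 vs 0.25 -> 0.5
```

Naturally, a single scalar gap barely scratches the surface of what fairness entails, which is partly why the Ethical AI debates run so deep.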

I believe that by and large those that are vocally calling for Ethical AI would agree that those factors are worthy of attention. Indeed, the real problem seems to be that those making AI and those promulgating AI don’t appear to be getting the message. This has prodded those amid the AI Ethics movement to further expound on why these characteristics are so darned important. Regrettably, a lot of AI developers and firms leveraging AI either think this is a nice-to-have, ergo merely optional, or they see Ethical AI as more of an abstract academic exercise than something of everyday practical import.

Per my predictions, a lot of those that have their head in the sand about Ethical AI will one day wake up and rudely discover that they are under the strident gaze of society and also of the law. People will be quite upset upon discovering that the pervasive AI embedded into all manner of goods and services is peppered with inherent biases and inequities. Lawmakers are already pressing ahead with new laws to try and ensure that those promoting such AI are going to come under legally unambiguous rules, thus making it much harder for the AI builders and those that field AI to shrug off any adverse civil and prosecutorial criminal potentialities from their endeavors.

Without seeming to be flippant, the wheels of justice will inexorably grind their way toward those that are emitting and fostering untoward AI.

To clarify, the AI For Bad is not necessarily attributable solely to villainous actors. Turns out that the bulk of AI developers and those serving up the AI to the world at large are frequently unaware of what they’ve got on their hands. The alarm being sounded about AI Ethics has apparently not yet reached their ears. You can have sympathy for those that are blissfully unaware, though this does not outright excuse them from their responsibility and ultimate duty of care.

I’d like to briefly mention that you should not fall for the insidious mental trap that the blame for the untoward AI lies at the feet of the AI itself. This is nearly laughable, though it keeps being used as an escape hatch and incredibly seems to work from time to time on those that aren’t familiar with the AI of today. You see, the AI of today is not sentient. Not even close. We don’t know if AI sentience is possible. We don’t know when sentient AI might be attained.

The bottom line is that the AI blame game of distracting everyone by pointing a finger at the AI as the responsible party is disingenuous if not altogether abject trickery, see my elicitation about this at the link here. Existing AI is programmed by human developers. Existing AI is released and made available by humans. The companies that build AI and the companies that license or buy the AI for purposes of using the AI within their goods and services are entirely based on humankind. Humans are the responsibility bearers, not AI (perhaps, in the future, we’ll have AI of a different and more advanced semblance, and at that time we’ll need to wrestle more closely with the distinction of where blame resides).

As a quick recap, we ought to be thankful that the Ethical AI vocalizers are stridently trying to get society clued in about the ethical implications of AI. Right now, it is almost akin to the Sisyphean act of rolling a boulder up a steep hill. Many that should be listening are not doing so. Those that listen are not always prompted into corrective action. Just as Sisyphus had the unenviable task of pushing that boulder up a hill for eternity, you can pretty much assume that the AI Ethics cacophony will by necessity need to be an eternal ruckus too.

That seems to settle the matter about AI Ethics and we can presumably move on to another topic.

But, wait, there’s more.

You might be surprised or even shocked to learn that some assert that the existent body of Ethical AI considerations is remiss. The primary keystone of AI Ethics has gotten mired in a groupthink bailiwick, they exhort vociferously.

Here’s the deal.

My earlier indicated list of AI Ethics factors is said to have become a stifling preoccupation by the Ethical AI crews. Everybody mindlessly accepts those factors and keeps pounding away at those same factors, over and over again. Similar to an echo chamber, the preponderance of AI ethicists is repeating the same song to themselves, relishing the sound of it. Few are willing to break out of the pack. A herd mentality has overtaken Ethical AI (so it is claimed).

How did this conundrum arise? You can apparently explain this undesirable mainstreaming of Ethical AI as arising either by happenstance or by design.

In the happenstance scenario, the key factors are exceedingly easy to focus on. You don’t have to go outside the box, as it were. Furthermore, those that are within the Ethical AI camp are likely to be more interested in similar work or ideas, rather than searching out notions beyond the norm. Birds of a feather flock together. Inch by inch, the AI Ethics arena is said to have huddled into a cramped cluster and is generally fine with the familiarity and comfort to which it has become inured.

I’m sure that will get the ire of many AI Ethics participants (keep in mind that I am just mentioning what others are contending and not affirming those allegations).

If you want to ratchet up the ire, consider the suggestion that the phenomenon has arisen by design. In a somewhat perhaps conspiracy-theorist realm, it is said that the AI technologists and the tech companies are either directly or subliminally contributing to the narrowing of Ethical AI considerations.

Why would they do so?

We’ll ponder two possibilities.

One notion is that the AI technologists favor the existing set of Ethical AI considerations since those particular factors are somewhat amenable to being fixed via technology. For example, if you want explainability, AI technologists and AI researchers are working night and day to craft explainable AI (XAI) capabilities. The beauty then of the prevailing list of Ethical AI factors is that they fit within the purview of AI technology resolutions. Those that subscribe to that theory are apt to suggest that any other factors that aren’t readily AI techno-solved aren’t able to gain traction on the factors list.
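To give a flavor of why explainability strikes many as techno-solvable, here is a minimal sketch of one commonly discussed XAI technique, permutation importance, applied to a toy model via scikit-learn; this is merely illustrative and not a claim about how any particular firm implements XAI.

```python
# A minimal sketch of one oft-cited XAI technique: permutation
# importance, which gauges a feature's influence by shuffling it and
# measuring the resulting drop in model accuracy. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance drop = {score:.3f}")
```

Notice how neatly this kind of after-the-fact explanation fits into a techno-solvable framing, which is precisely the point that the theory’s adherents are making.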

You then have the other, more nefarious variant. It is suggested that there are high-tech companies and others that want to pay essentially lip service to AI Ethics matters. In that sense, they relish a shorter list of technologically solvable concerns. Any attempt to add other concerns will make life harder for those tech companies. They want to minimize the investment needed to placate the Ethical AI proclamations.

I suppose that kind of insinuation is bound to spur fisticuffs and go far beyond simple ire. Those that are tirelessly trying to abide by and contribute to Ethical AI are potentially being smeared by those claims, and you can readily imagine why they would be outraged at the seemingly unabashed and altogether brazen accusations.

Let’s not get a brawl started and instead take a sober and reflective moment to see what else the Ethical AI camp is supposedly missing out on. The other camp that is pleading to extend and expand the viewpoint of contemporary Ethical AI is at times being referred to as Radical Ethical AI.

You might wonder whether the moniker of Radical Ethical AI is suitable or unwelcome.

The connotation of something that is “radical” can be good or bad. One interpretation of Radical Ethical AI is that it is an attempt to shake up the status quo of AI ethics. The use of “radical” showcases that we need to make a concerted dramatic turn and not just slightly adjust or weakly render a modicum of a swivel. You could argue that “radical” brings a shock to the existing approaches, akin to using a defibrillator, delivering a jolt that usual convention would otherwise never attain.

Others would say that by attaching the word “radical” you are creating quite a mess. Some would interpret radicalism as outlandish, slipping toward the eccentric or oddball. Centrists might immediately reject the very notion that there is groupthink, put off by the naming alone as an offending contrivance. Tone down the rhetoric, some would say, and you might get more serious attention and interest.

Notwithstanding the naming issue, you might be innately curious as to what it is that this other camp believes that the rest of us are missing. Set aside any visceral reaction to the whatchamacallit.

You might outwardly ask, where’s the beef?

I’ll try to represent some of the alluded-to missing ingredients.

First, one claimed oversight is that the prevailing view of traditional (old-fashioned?) Ethical AI rests on a constraining hidden assumption that AI is inevitable. Thus, rather than examining whether AI should even be in the cards for particular kinds of uses and societal circumstances, the predominant position is that AI is coming and we need to get it into the best shape possible. You might smarmily say that this is akin to putting lipstick on a pig (i.e., it is still a pig, no matter what you do to make it seemingly look better).

If you take that to the extreme, the qualms about fairness, privacy, and the rest are perhaps said to be like rearranging the deck chairs on the Titanic. The ship is still going to sink, meaning that we assume AI is going to happen, no matter what. But maybe we ought to be reconsidering whether sinking is the only option afoot.

I’ll be getting back to this consideration in a moment.

Here quickly are some other claimed Ethical AI matters that are either not being given their due or that are squeezed out by the abundance of attention to other prevailing topics:

  • Besides using AI to manage humans, such as the increasing use of AI to make hiring and firing decisions in companies, there is likewise a qualm about the use of AI to manage animals. There are efforts underway to utilize AI on farms, in zoos, and generally any place that animals tend to be kept. How will AI impact animal welfare?
  • There are times at which AI is seemingly beta tested in especially vulnerable communities as a first try, and then the AI is later spread more widely. Keen ethical AI attention ought to be aimed at those practices.
  • A lot of discussion appears to be taking place about replacing workers with AI, though less attention is paid to the impacts of AI as a monitoring tool used to oversee low-wage, low-status workers.
  • Insufficient attention would seem to be paid to the ecological and environmental impacts of building and keeping AI systems running (as an aside, you might find of interest my discussion about the carbon footprint associated with AI, see the link here).

  • And so on.

There are some specialists in Ethical AI that will likely right away point out that those are in fact topics that can be found in the literature and discussions about AI ethics. They are not unheard of, nor are they breathtakingly new.

I believe that the perspective of the Radical Ethical AI purists is that they aren’t claiming to have discovered completely untrodden ground. Instead, they are asserting that the existing focus of Ethical AI is choking off the breathable air that would allow other “outlier” topics to get their due.

Perhaps a more palatable way to phrase this is that the prevailing AI Ethics attention has given short shrift to other vital topics. You could suggest that there are blind spots and it would be handy to consider whether those can be brought further into the fold (this is likewise discussed in the AI and Ethics journal in a recent paper entitled “Blind Spots in AI Ethics” by Thilo Hagendorff).

Now that we’ve covered the saga of whether Ethical AI is in a bit of a rut, which some would say is a preposterous contention and meanwhile others would insist that the rut is real and getting worse, perhaps we can take a moment to explore a use case to take a closer look at the matter.

In my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase the alignment dilemma so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here then is a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars present any outside-the-box Ethical AI considerations that are generally going unchallenged or being insufficiently surfaced?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
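For those that like such taxonomies spelled out programmatically, here is a minimal sketch of the levels as a simple data structure; the field names and labels are my own shorthand for the distinctions just described, not an official SAE artifact.

```python
# A minimal sketch of the driving-automation level taxonomy as a data
# structure. Field names and labels are my own shorthand for the
# distinctions described above, not an official SAE artifact.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationLevel:
    level: int
    label: str
    human_driver_required: bool  # must a human co-share the driving task?

LEVELS = [
    AutomationLevel(2, "Semi-autonomous (ADAS add-ons)", True),
    AutomationLevel(3, "Semi-autonomous (conditional automation)", True),
    AutomationLevel(4, "True self-driving within an ODD", False),
    AutomationLevel(5, "True self-driving anywhere a human could drive", False),
]

for lvl in LEVELS:
    status = "human co-shares" if lvl.human_driver_required else "true self-driving"
    print(f"Level {lvl.level}: {lvl.label} -> {status}")
```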

There is not yet a true self-driving car at Level 5, and we don’t yet even know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

Self-Driving Cars And The Outliers Or Outcasts Of Ethical AI Considerations

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and whether any outside-the-box Radical Ethical AI considerations are supposedly not getting their sufficient due.

Due to space constraints, let’s focus on one particular matter that we can then dig into. Specifically, there is the overall charge that prevailing AI Ethics purportedly has a hidden assumption that AI is inevitable.

If we apply that keystone presumption to AI-based self-driving cars, it implies that the Ethical AI community is by and large assuming that self-driving cars will inevitably be devised and fielded. They are taking it at face value that self-driving cars will emerge, whereas the counterclaim would be that they should not be making that hidden assumption and instead be chewing on whether perhaps we ought not to have AI self-driving cars (or some variant, such as only having them in special circumstances rather than on a widespread basis).

It’s a great question.

To clarify and restate, the assertion then being that the predominant Ethical AI considerations about AI self-driving cars are missing the boat by apparently assuming that AI self-driving cars are inevitable. In that way of thinking, the expectation would be exhibited twofold:

1. Few if any examinations of whether we should even have AI self-driving cars would exist, or they would be shunted to the side, given no breath of air nor any shining light of attention, and

2. The likely focus on AI self-driving cars would instead be on the details surrounding the ethical AI ramifications of the traditional list of factors, such as explainability, fairness, privacy, etc., altogether doing so under the general guise or silent presumption that we will decidedly have those vaunted autonomous vehicles (with no other option being widely entertained).

An interesting proposition.

Even a casual inspection of the general attention to such issues relating to AI self-driving cars would showcase that there is an assumption that AI self-driving cars are somewhat inevitable, though let’s make sure that a huge caveat and double star footnoting cautionary indicator goes along with that sweeping statement.

Here’s why.

We don’t know as yet whether AI-based true self-driving cars will really be entirely possible.

Sure, there are narrowly devised public-roadway tryouts underway. These are absolutely not at Level 5. Nobody can reasonably argue to the contrary on that undeniable point. You can say that some of the proper tryouts are kind of veering into Level 4, but wobbly so and with relatively brittle ODDs (ODDs or Operational Design Domains are the stipulations of where a self-driving car maker states that their autonomous vehicle can safely operate, see my discussion at this link here).
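To make the ODD notion a bit more concrete, here is a minimal sketch of how such stipulations might be encoded in a machine-checkable form; the fields and limits shown are hypothetical illustrations, not any automaker’s actual ODD.

```python
# A minimal sketch of how an ODD's stipulations might be encoded as a
# machine-checkable structure. The fields and limits are hypothetical
# illustrations, not any automaker's actual ODD.
from dataclasses import dataclass

@dataclass
class OperationalDesignDomain:
    allowed_regions: set       # geofenced areas approved for operation
    max_speed_mph: float       # speed ceiling for autonomous driving
    daylight_only: bool        # whether nighttime driving is excluded
    allowed_weather: set       # e.g., {"clear", "light_rain"}

    def permits(self, region, speed_mph, is_daylight, weather):
        """True only if the current conditions fall inside the ODD."""
        return (region in self.allowed_regions
                and speed_mph <= self.max_speed_mph
                and (is_daylight or not self.daylight_only)
                and weather in self.allowed_weather)

# A narrow, brittle ODD of the sort early Level 4 tryouts tend to use.
odd = OperationalDesignDomain(
    allowed_regions={"downtown_pilot_zone"},
    max_speed_mph=35.0,
    daylight_only=True,
    allowed_weather={"clear"},
)
print(odd.permits("downtown_pilot_zone", 28.0, True, "clear"))   # True
print(odd.permits("downtown_pilot_zone", 28.0, False, "clear"))  # False (night)
```

The brittleness being complained about amounts to those sets and thresholds being exceedingly narrow, such that the autonomous vehicle must hand back control (or not operate at all) the moment conditions stray outside them.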

That being said, the self-driving car arena is much further along than it was just a few years ago. There is reasonable cause for being upbeat about what will happen next. The biggest sticking point is the timetable of when this will play out. Previous predictions by the pundits have readily come and gone. You can expect the existing predictions to assuredly suffer the same fate.

In short, it would seem that you could informally say that there is a sufficient rationale for assuming that AI-based true self-driving cars are going to eventually arise. It would seem that we are heading toward Level 4 amidst very constrained ODDs. The next logical expansion would seem to involve having more ODDs, ones of greater variety and depth. At some juncture, once we’ve conquered enough overlapping and somewhat exhaustive ODD spheres, this would seem to culminate in venturing gingerly into Level 5. That’s at least the stepwise philosophy of anticipated progress (which not everyone embraces).

I mention those seemingly complex contortions because there is much confusion about what it means to proclaim that AI-based true self-driving cars exist or that we have arrived at the grand moment of their existence. What is your definition of AI-based true self-driving cars? If you say it is the moment you’ve edged teensy-weensy into Level 4, well, I guess you can start popping those champagne bottles. If you say it is once we’ve turned the corner on Level 4, you are going to need to put those champagne bottles in storage. If you say it is when we’ve mastered Level 5, those bottles are going into the back of the storage shelves and you should expect them to get dusty.

All in all, if society is in fact making an assumption that AI self-driving cars are inevitable, you could reasonably argue that this is an aboveboard assumption. Not everyone would agree to that contention, so for those of you that are upset at the alleged possibility, your vehement disagreement is so noted.

Are we potentially getting ourselves into trouble or dire straits by generally going along with that assumption of inevitability for AI self-driving cars?

There are intriguing twists and turns that this enigma brings to the surface.

For example, one already oft-discussed concern is that we might have AI-based self-driving cars that are exclusively available in some areas but not in others. The areas that have self-driving cars are the “haves,” while the areas that don’t are the “have-nots” that are essentially precluded from leveraging the benefits of self-driving cars.

This is the oft-proclaimed elitism or inequity worry that many have expressed. Indeed, I’ve covered this many times, such as at this link here and in this larger analysis at the link here.

The idea is that AI self-driving cars will likely be operated in fleets. The fleet operators will choose where to deploy their self-driving cars. The presumed money-making place to use the self-driving cars is in the wealthier parts of a town or city. Those living in the poor or impoverished parts of a town or city won’t have ready access to self-driving cars.

The same logic is extrapolated to contend that countries will similarly find themselves in the same predicament. Wealthier nations will experience the use of self-driving cars, while poorer nations will not. The haves will have more access to and use of autonomous vehicles; the have-nots will not.

You might be wondering what the haves are getting that the have-nots are not. A frequent indication is the expectation that AI driverless cars will have many fewer car crashes and car collisions. The belief is that the number of consequent human injuries and fatalities due to car crashes will diminish dramatically. As such, the wealthier parts of a town or city, or the wealthier nations, will be able to curtail their car-related injuries and fatalities, while the human-driven regions that lack the option of replacement via AI-based self-driving cars will not see any such commensurate reduction.

Please know that there are other touted benefits of AI self-driving cars that would similarly accrue to the haves and not to the have-nots (see more at this link here).

The gist here is that you can stretch the “are self-driving cars inevitable” question by pointing out that Ethical AI is already handwringing about self-driving cars being only selectively available. I know it might seem like a pretzel of logic. What this is saying is that we are already notably concerned about AI not being available in some societal segments. In that stretch of thinking, this instantiation of AI is not inevitable with respect to those contexts per se (if you get the drift).

Conclusion

You are undoubtedly familiar with the cleverly devised sage advice that what you don’t know can harm you, while what you don’t know that you don’t know can really wipe you out.

One viewpoint about the Radical Ethical AI posture is that despite any naming-related consternation, we should at least hear what it is believed that we are missing out on. Maybe there is a groupthink taking place and so-called conventional Ethical AI or old-fashioned AI Ethics has managed to become stagnant and stuck on particular considerations. If so, a kick in the posterior might aid in getting the engines running again and moving toward a widened scope.

Or, it could be that Ethical AI and AI Ethics are doing fine, thank you very much, and this offshoot is just trying to be a fly in the ointment. The downside doesn’t seem especially disruptive or disconcerting. When or if things get somehow out of hand, a devout and concerted reassessment of the realm might be of insightful merit.

Continuous improvement is a laudable aspiration. And, you can nearly inarguably state that when it comes to the advent of AI, we don’t want to belatedly discover that we didn’t know what we didn’t know, and that we should have known to be on top of the aspects that we didn’t know (or, that we knew them, somewhat, and inadvertently suppressed them).

That doesn’t seem overly radical, does it?

Source: https://www.forbes.com/sites/lanceeliot/2022/02/20/pushing-the-boundaries-of-ai-ethics-into-the-topsy-turvy-world-of-radical-ethical-ai-potentially-exemplified-via-the-use-case-of-vaunted-ai-based-self-driving-cars/