Aristotle famously stated that educating the mind without educating the heart is no education at all.
You could interpret that insightful remark to suggest that learning about ethics and moral behavior is vital for humankind. In the classic nature versus nurture debate, one must ask how much of our ethical mores are instinctively innate and how much is learned over the course of our lives. Toddlers are observant of fellow humans and presumably glean their ethical foundations from what they see and hear. The same can be said of teenagers. Open-minded adults, too, will continue to adjust and progress in their ethical thinking as a result of experiencing the everyday world.
Of course, explicitly teaching someone about ethics is also par for the course. People are bound to learn about ethical ways by attending classes on the topic or perhaps by going to events and practices of interest to them. Ethical values can be plainly identified and shared as a means to aid others in formulating their own structure of ethics. In addition, ethics might be subtly embedded within stories or other instructional modes that ultimately carry a message of what ethical behavior consists of.
That’s how humans seem to absorb ethics.
What about Artificial Intelligence (AI)?
I realize such a question might seem oddish. We certainly expect humans to incorporate ethics and walk through life with some semblance of a moral code. It is a simple and obvious fact. On the other hand, a machine or computer does not seem to fit within that same frame of reference. Your gut reaction might be that it is farfetched or outlandish to consider AI as having an embodiment of ethics and moral codes.
The best that we would seem to be able to do about AI is to devise it so that it does not deviate from ethical behavior. AI developers and those fielding AI are to be held responsible for ensuring that the AI as designed and as implemented already conforms with ethical precepts. Out of the gate, so to speak, the AI ought to be pristine and ready to go as a fully ethically proper contrivance.
You would assuredly be right in thinking that AI systems should indeed be crafted to already fit entirely within an ethical basis. Society was quite excited when the latest wave of AI systems was first released and appeared to demonstrate that we were in an era of AI For Good. AI would aid in solving many of the world’s most challenging problems. Advances in technology were being harnessed to supplement human capabilities with cognitive-like facilities, though allow me to emphasize that we do not yet have any sentient AI and we don’t know if sentient AI will be attained.
The problem with the pell-mell rush to get AI into the world has gradually revealed the ugly underbelly of AI known as AI For Bad. There have been lots of headlines about AI systems that make use of algorithmic decision making (ADM) that is replete with biases and inequities. On top of that, much of the contemporary AI suffers from a lack of transparency, tends to be inexplicable when it comes to its computational decisions, frequently exhibits a lack of fairness, and has allowed some to divert their human accountability by pointing fingers at the AI.
I’ve been extensively covering Ethical AI and the ethics of AI in my writings, including the link here and the link here, just to name a few.
How can there be AI For Bad if we take as a stated construct that AI ought to be crafted from the beginning to avoid unethical actions?
The answer is multi-fold.
First, many AI developers and companies fielding AI are themselves clueless about the importance of shaping their AI to stay within ethical boundaries. The concept isn’t at all on their radar. The allure of making fast money causes some to push ahead on whatever wild AI notions they wish to willfully produce. No need to figure out any ethical stuff. Just build the AI and get it underway.
Secondly, there are those making AI who profess an outright awareness of the ethical ramifications, yet they overtly downplay or somewhat ignore the Ethical AI considerations. One common perspective is the techie classic mantra of aiming to fail fast and fail often. Just keep iterating until things hopefully get suitably settled. The chances of squeezing any systematic and thoughtful incorporation of AI ethics into those rapid-fire AI efforts are regrettably slim. I discuss the need for the empowerment of leadership toward Ethical AI at the link here.
Thirdly, a lot of murkiness exists about what ethical guardrails should be entertained when devising AI. Sure, there are nowadays many AI ethics guidelines, see my coverage at the link here, though these handy theoretical precepts are hard to turn into specifics for a given AI system being built. I’ve indicated that we will slowly see an emergence of AI building tools and methodologies that include Ethical AI coding practices, helping to close the gap between the abstract aspects and the proverbial rubber-meets-the-road facets.
Fourthly, per the emphasis herein, we explore the vexing case of AI that, even if initially composed within ethical boundaries, subsequently meanders beyond the assumed ethically encoded parameters while in use.
We need to unpack that.
Much of today’s AI makes use of Machine Learning (ML) and Deep Learning (DL). These are computational pattern matching techniques and technologies. Generally, the idea is that you collect together lots of data relevant to whatever the AI is supposed to be able to do, you feed that data into the chosen computational pattern matcher, and the pattern matching mathematically tries to find useful patterns. Note that there isn’t any sentience on the part of this AI (again, no such thing yet exists). Nor is there any common-sense reasoning involved. It is all math and computations.
It could be that the data fed into the ML/DL is already infused with biases and inequities. In that case, the odds are that the computational pattern matching will merely mimic the same proclivities. If you provide data that favors one race over another or favors one gender over the other, there is a sizable chance that the pattern matching will latch onto that as the discovered pattern.
A big problem with that kind of latching is that we might have a difficult time ferreting out that the patterns are based on that aspect of the data. The thorny and complex mathematics can make the surfacing of such found patterns quite problematic. Even testing the AI is not necessarily going to reveal those tendencies, depending upon the range and depth of tests that are applied.
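To make that mechanism a bit more concrete, here is a minimal illustrative sketch in Python (using a generic scikit-learn classifier, not any particular fielded system) showing how a pattern matcher trained on historically biased data can latch onto a protected attribute, while an aggregate accuracy check alone fails to reveal it. The data, feature names, and numbers are all hypothetical.

```python
# Hypothetical sketch: a pattern matcher trained on data in which a protected
# attribute correlates with the historical label will "discover" that
# correlation, and an overall accuracy test alone will not surface it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)        # 0 or 1, a protected attribute
skill = rng.normal(0, 1, n)          # the legitimately relevant feature
# Historical labels are biased: group 1 was approved less often for the same skill.
label = ((skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, label)

# Aggregate accuracy looks perfectly fine...
print("accuracy:", model.score(X, label))

# ...but approval rates differ sharply by group, mimicking the historical bias.
pred = model.predict(X)
for g in (0, 1):
    print(f"group {g} approval rate:", pred[group == g].mean().round(3))
```

A test suite that only checks overall accuracy would pass this model; only by slicing the outcomes by group does the inherited inequity become visible.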
So, let’s assume that you’ve built an AI system and did your homework by first trying to avoid using data that had preexisting biases. Next, once the Machine Learning and Deep Learning had been undertaken, you tested the results to see if any biases or inequities somehow arose. Let’s assume that you are unable to find any such untoward inclinations.
All told, the green light is now given to go ahead and put the AI into use. People will start using the AI and likely assume that it is ethically proper. The developers think this too. The company fielding the AI thinks this. Away we all go.
Here’s what can happen.
An inkling of a bias that wasn’t found in the original data and that wasn’t caught during the testing of the AI is perchance activated. Perhaps this only happens rarely. You might believe that as long as it is rare, all is well. I doubt though that those vulnerable to the said bias are willing to see things that way. I dare say that the AI system and those that formulated it are going to confront repercussions, either in the legal courts or in the open-ended court of societal opinion.
Another variation is the proverbial notion of being given an inch and taking a mile. The inkling might initially be tiny. During the use of the AI, the AI might have been devised to alter itself as things go along. This kind of “learning” or “self-learning” can be quite useful. Rather than requiring human AI developers to continually modify and adjust the AI system, the AI is built to do so by itself. No delays, no expensive labor, etc.
The downside of this handy self-adjustment is that the inkling can get elevated to being larger and larger within the scope of usage of the AI. Whereas the bias might have been in a tight little corner, it now is given room to expand. The AI has no sense that this is “wrong” and merely computationally extends something that seems to be working.
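As a toy illustration of that expansion, consider the following hedged sketch, in which a self-adjusting decision tendency keeps nudging itself toward whatever “seems to be working,” so the tiny initial gap between two groups steadily widens. The numbers and update rule are assumptions purely for demonstration, not a depiction of any real self-learning system.

```python
# Toy illustration: a self-adjusting AI amplifies a barely noticeable skew.
import random

random.seed(1)
favor_rate = {"group_a": 0.52, "group_b": 0.48}   # a barely noticeable inkling

for week in range(100):
    for group in favor_rate:
        # The AI reviews its own recent approvals and nudges its tendency
        # toward whatever "seems to be working" -- with no sense that the
        # drift is wrong.
        approvals = sum(random.random() < favor_rate[group] for _ in range(2000))
        favor_rate[group] += 0.05 * (approvals / 2000 - 0.5)
        favor_rate[group] = min(max(favor_rate[group], 0.0), 1.0)

print({g: round(r, 2) for g, r in favor_rate.items()})
# The initially tiny 0.04 gap has drifted toward the extremes over time.
```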
If that makes your hair stand on end, you’ll need to sit down for the next variant.
Suppose the bias wasn’t existent from the get-go and we have every reasoned belief that the AI is entirely bias-free. We either got lucky or perchance systematically made sure that no biases were anywhere in the data and none arose via the computational pattern matching. Despite that sigh of relief, the AI is allowed to adjust while in use. Pandora’s box is opened and the AI opts to computationally gravitate toward biases that are found during whatever it is the AI does.
A newfound bias gets gobbled up into the AI, and no one is particularly the wiser that it has happened. Yikes, we have created a monster, a veritable Frankenstein.
How can this emergence possibly be prevented or at least flagged?
One approach that is gaining traction consists of building into the AI an ethics gleaning component. The AI is constructed to include Ethical AI elements. Those elements then watch or monitor the rest of the AI while the AI is adjusting over time. When the AI appears to have gone beyond the programmed ethical precepts, the Ethical AI tries to rein in those adjustments or alerts the developers that something has gone amiss.
You can try programming this Ethical AI overseeing capacity and hope that it will prevail while the AI is in use.
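Here is one way such an overseeing capacity might be sketched, strictly as a simplified illustration: a hypothetical monitor with an encoded precept (a maximum allowable approval-rate gap between groups) that flags the self-adjusting AI when its behavior drifts out of bounds. The class, method, and threshold are my own illustrative assumptions, not an established API.

```python
# Hypothetical sketch of an "ethics overseer" watching a self-adjusting AI.
class EthicsMonitor:
    def __init__(self, max_group_gap=0.05):
        # Encoded precept: approval rates across groups must stay within 5 points.
        self.max_group_gap = max_group_gap

    def check(self, approval_rates):
        # Compare the widest gap in observed behavior against the encoded bound.
        gap = max(approval_rates.values()) - min(approval_rates.values())
        if gap > self.max_group_gap:
            return False, f"ALERT: group approval gap {gap:.2f} exceeds bound"
        return True, "within ethical bounds"


monitor = EthicsMonitor()
ok, message = monitor.check({"group_a": 0.81, "group_b": 0.64})
print(ok, message)   # False ALERT: group approval gap 0.17 exceeds bound
if not ok:
    pass  # e.g., freeze further self-adjustment and notify the developers
```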
Another somewhat controversial angle would be to use Machine Learning and Deep Learning to train the Ethical AI aspects into the AI system.
Say what?
Yes, the perhaps unorthodox concept is that instead of a human programmer directly encoding a set of AI ethics precepts, the AI is shaped to try and “learn” them instead. Recall that I briefly noted that the use of ML/DL typically relies upon feeding data into the algorithms and a computational pattern matching takes place. The million-dollar question is whether we can use the same mechanism to imbue ethical values into an AI system.
I suppose you could liken this to my discussion at the opening of how humans become aware of ethical principles, though please do not anthropomorphize today’s AI as being comparable to human thinking (it is not, and I’ll repeat that exhortation shortly). The AI can be programmed “innately” with ethical AI precepts. Or the AI could “learn” ethical AI precepts. You can do both, of course, which is something I’ve covered elsewhere, see the link here.
Take a moment to ponder the seemingly startling concept that AI might “learn” ethics and ergo presumably abide by ethical behaviors.
Researchers Amitai Etzioni and Oren Etzioni use an example of an AI system that figures out the desired temperature in a house to illustrate how this can work: “It first ‘observed’ the behavior of the people in various households for merely a week and drew conclusions about their preferences. It then used a motion-detecting sensor to determine whether anyone was at home. When the house was empty, the smart thermostat entered a high energy saving mode; when people were at home, the thermostat adjusted the temperature to fit their preferences. This thermostat clearly meets the two requirements of an ethics bot, albeit a very simple one. It assesses people’s preferences and imposes them on the controls of the heating and cooling system. One may ask what this has to do with social moral values. This thermostat enables people with differing values to have the temperature settings they prefer. The residents of the home do not need to reset the thermostat every day when coming and going. This simple ethics bot also reduces the total energy footprint of the community” (per their paper entitled “AI Assisted Ethics,” published in the journal Ethics and Information Technology).
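For readers who like to see things spelled out, here is a bare-bones sketch of that thermostat ethics bot: it “learns” occupant preferences from a week of observed settings, then imposes them whenever someone is home and drops into an energy-saving mode otherwise. The setpoints and function names are assumptions drawn solely from the quoted description.

```python
# Minimal sketch of the quoted thermostat "ethics bot" (illustrative only).
from statistics import mean

ECO_TEMP_F = 60  # assumed energy-saving setpoint for when the house is empty

def learn_preference(observed_setpoints_f):
    # "Observes" a week of manual settings and draws a simple conclusion.
    return mean(observed_setpoints_f)

def choose_setpoint(preferred_temp_f, someone_home):
    # Motion sensor says whether anyone is home; impose the learned preference.
    return preferred_temp_f if someone_home else ECO_TEMP_F

week_of_settings = [70, 71, 69, 70, 72, 70, 71]
preference = learn_preference(week_of_settings)
print(choose_setpoint(preference, someone_home=True))    # ~70.4
print(choose_setpoint(preference, someone_home=False))   # 60
```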
Before I dig further into the twists and turns of having AI that “learns” ethical behavior, I would like to say something more about the status of AI.
AI can consist of these possible states:
1. Non-sentient plain-old AI of today
2. Sentient AI of human quality (we don’t have this as yet)
3. Sentient AI that is super-intelligent (a stretch beyond #2)
I am going to focus on the existing state, which is non-sentient plain-old AI. Much of what you might read about Ethical AI at times covers sentient AI and is therefore highly speculative. I say it is speculative because no one can pin the tail on the donkey of what sentient AI will be. Even further beyond the realm of human-quality sentient AI is the much-ballyhooed super-intelligent AI. There are lots of sci-fi stories and qualms about how those flavors of AI might decide to enslave humankind, or maybe just wipe us all out. This is known as the existential risk of AI. At times, the dilemma is also phrased as the catastrophic risk of AI.
Some contend that we might be okay as long as we keep AI to the non-sentient plain-old AI that we have today. Let’s assume we cannot reach sentient AI. Imagine that no matter how hard we try to craft sentient AI, we fail at doing so. As well, assume for sake of discussion that sentient AI doesn’t arise by some mysterious spontaneous process.
Aren’t we then safe, given that this lesser-caliber AI is the imagined only possible kind of AI that will be in use?
Not really.
Pretty much, the same general issues are likely to arise. I’m not suggesting that the AI “thinks” its way to wanting to destroy us. No, the ordinary non-sentient AI is merely placed into positions of power that get us mired in self-destruction. For example, we put non-sentient AI into weapons of mass destruction. These autonomous weapons are not able to think. At the same time, humans are not kept fully in the loop. As a result, the AI as a form of autonomous automation ends up inadvertently causing catastrophic results, either by a human command to do so, or by a bug or error, or by implanted evildoing, or by self-adjustments that lead matters down that ugly path, etc.
I would contend that the AI ethics problem exists for all three of those stipulated AI states, namely that we have AI ethical issues with non-sentient plain-old AI, and with sentient AI that is either merely human level or the far-reaching AI that attains the acclaimed superintelligence level.
Given that sobering pronouncement, we can assuredly debate the magnitude and difficulty associated with the ethical problems at each of the respective levels of AI. The customary viewpoint is that the AI ethics predicament is less insurmountable at the non-sentient AI, tougher at the sentient human-equal AI level, and a true head-scratcher at the sentient super-intelligent AI stage of affairs.
The better the AI becomes, the worse the AI ethics problem becomes.
Maybe that is an inviolable law of nature.
Returning to the focus on today’s AI, trying to have AI “learn” ethical behaviors via contemporary Machine Learning and Deep Learning is fraught with concerns and tricky problems. Suppose the AI fails to glean the desired ethical precepts? How will we know for sure that it faltered in doing so? Also, will other parts of an AI system potentially override the gleaned ethical constructs? Add to this that if the AI is adjusting on the fly, the adjustments could dilute the ethical aspects or inadvertently override them altogether.
To make matters worse, the “learning” might lead to the AI landing on truly unethical behaviors. Whereas we thought that we were doing the right thing by prodding the AI toward being ethical, it turns out that the AI slipped into pattern matching on the unethical aspects instead. Talk about shooting ourselves in the foot, it could absolutely happen.
At this juncture of this discussion, I’d bet that you are desirous of some additional real-world examples that could highlight how the AI “learning” of ethics might apply to today’s AI (other than the tasty teaser of the thermostat example).
I’m glad you asked.
There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the AI being able to “learn” Ethical AI precepts, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.
Self-Driving Cars And Ethical AI Inoculation
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad aspects that come into play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and Ethical AI possibilities entailing the eyebrow-raising assertion that we can get AI to “learn” about ethical behaviors by itself.
Let’s use a straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.
Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom when witnessing those meandering self-driving cars.
Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind strictly law-abiding AI-based self-driving cars.
That’s something we might all need to get accustomed to, rightly or wrongly.
Back to our tale. One day, suppose a self-driving car in your town or city is driving along and comes upon a situation wherein a pedestrian is waiting to cross the road. Assume that the pedestrian does not have the right-of-way per se. A human-driven car could go past the pedestrian and be entirely legal in doing so. Likewise, the AI driving system is legally able to go past the waiting pedestrian.
Deciding whether to stop and let the pedestrian go across the street is entirely discretionary for the driver, regardless of whether being a human driver or an AI driving system.
I’m sure that you’ve encountered this kind of situation an innumerable number of times. Maybe you are in a hurry, so you don’t stop to let the pedestrian cross. On another occasion, you have plenty of time to get to your destination, thus you opt to stop and allow the waiting person to walk across the street. Your mood and the particular circumstances dictate what you will choose to do.
Nothing about this scenario seems unusual or vexing.
Before I examine the AI side of things, you might be interested to know that this particular aspect of discretion in allowing a pedestrian to cross a street has been closely studied. Researchers have identified that at times the choice made by the driver can apparently depend on racial or gender biases. A human driver might size up the waiting pedestrian and opt to allow the person to cross seemingly based on the inherent biases of the driver. Whether the driver even realizes they are doing so is a matter of ongoing research. See my coverage at this link here.
I have set the stage for our exploring what an AI driving system might do in the pedestrian crossing situation.
Conventional programming of the AI might entail that the AI developers decide to always have the AI driving system come to a stop and let the pedestrian cross. This would seem to be the ethically proper or civil thing to do. The self-driving car defers to the waiting human that wants to cross the street.
I dare say, if you were a passenger inside a self-driving car and the AI always stopped for all discretionary awaiting pedestrians, you might go nuts. Your quick trip to the grocery store might take many times longer to occur. Remember too that we are not referring to pedestrians that have the outright legal right of way to cross, since those cases would presumably already have the AI programmed to always allow them. We are only focusing on the discretionary circumstances.
There are more downsides to this declaration of always stopping to let discretionary pedestrians cross the street.
Those that make and field AI-based self-driving cars want people to ride in them. The idea is that by having self-driving cars, we might reduce the number of annual car crashes, which currently produce about 40,000 annual fatalities and 2.5 million injuries in the United States alone, see my stats collection at this link here. Besides this revered societal goal, the automakers and self-driving tech makers hope to make money off their AI creations too, naturally so.
I bring this up because people might decide to not ride in self-driving cars if the AI driving system does things that unnecessarily end up delaying trips. Any everyday person would reason that by instead choosing a human driver the journey might be faster, and ergo selecting an AI self-driving car for a trip might get placed very low on the list of their choices. This in turn would mean that we would not have the sought reduction in car crashes and also that the makers would potentially find their wares to be unprofitable.
Given that set of arguments, you might be swayed to think that the AI should never stop when a discretionary instance of a pedestrian wanting to cross the street occurs. Just program the AI driving system to do whatever is strictly legal. If there is no legal requirement to let a pedestrian cross, then tough luck for that waiting pedestrian. Perhaps the person should make their way to a crossing point that does allow for a legal basis of the AI stopping the self-driving car.
Can you imagine the outcry on this?
People in your town or city discover gradually that AI self-driving cars will never allow a discretionary pedestrian to cross. That darned irascible AI! It is as though the AI is thumbing its nose at humans. An ill-mannered brat of a piece of no-good automation. To top this off, imagine that there are documented circumstances of pedestrians desperately needing to cross and the AI wouldn’t stop at all.
Meanwhile, human drivers were willingly stopping to let those “desperate” people safely get across the street.
As a result of this outrage, AI self-driving cars are no longer welcomed on the streets and byways of your locale. Permits that had been issued by the city leaders are revoked. Getting the ungrateful brutes off our roadways is the vocal clamor.
Alright, we seem to be between a rock and a hard place. The AI shouldn’t always let the discretionary pedestrian cross (do not always stop). The AI shouldn’t always prevent a discretionary pedestrian from crossing (do not always zoom past). What to do?
The obvious answer would be to program the AI to act in a discretionary manner.
I ask you to contemplate the ADM (algorithmic decision making) that this ought to consist of. Will the AI try to detect the nature of the pedestrian and use the discerned characteristics as a basis for deciding whether to stop the self-driving car or not? Maybe stopping for someone who looks older is the way to choose. But is that age discrimination in the making? And so on.
Perhaps the AI driving system is programmed to stop during daylight hours and never stop during nighttime. The logic possibly being that it is presumed safer for the riders in the self-driving car for the autonomous vehicle to come to a halt in the daytime but not during the dicier hours of the evening.
That maybe sounds sensible. Part of the problem will be the expectations of pedestrians. Here’s what I mean. Pedestrians see the AI self-driving cars stopping for discretionary crossings, which happen in daylight. The pedestrians do not know what criteria the AI is using to decide to stop. The assumption by some pedestrians is that the AI will always stop (not realizing that daylight versus nighttime is the true determiner). As a result, those pedestrians that believe the self-driving car will always stop are going to take a chance and start to cross when the AI driving system is not at all aiming to stop (granted, the AI would likely come to a stop if the pedestrian is entering the street, though this could be dicey and physics might preclude the AI from stopping the self-driving car in sufficient time to avoid hitting the seemingly “errant” pedestrian).
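To see how such a criterion might look in code, here is a purely hypothetical sketch of the daytime-versus-nighttime stopping rule; note that nothing in the decision is visible to the waiting pedestrian, which is precisely the expectations problem just described. No actual automaker’s driving policy is being depicted.

```python
# Hypothetical ADM rule: stop for discretionary pedestrians in daylight only.
from datetime import time

def should_stop_for_discretionary_pedestrian(current_time, pedestrian_detected):
    if not pedestrian_detected:
        return False
    # Assumed daylight window; the criterion is invisible to the pedestrian.
    daylight = time(7, 0) <= current_time <= time(18, 0)
    return daylight

print(should_stop_for_discretionary_pedestrian(time(14, 30), True))   # True
print(should_stop_for_discretionary_pedestrian(time(21, 15), True))   # False
# A pedestrian who has only seen the daytime behavior may wrongly infer that
# the vehicle will always stop.
```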
Assume that the AI developers and the firms putting together and fielding the self-driving cars in your town are unsure of how to get the AI up-to-speed on this matter.
They decide to “train” the AI on data collected from throughout the locale. Turns out that there are plenty of city-mounted cameras that have captured the comings and goings of cars throughout the township. This data showcases many instances of pedestrians seeking to cross the street in a discretionary manner. All of the data is fed into a Machine Learning and Deep Learning system to derive what is considered customary in that jurisdiction.
Are we training the AI to do what local ethical mores showcase to be done?
In other words, if a given town had a local cultural more of tending to stop and let discretionary pedestrians cross as evidenced by human driver actions, the ML/DL would potentially pick up computationally on this pattern. The AI would then be trained to do likewise. At the other extreme, if the human drivers seldom stop, the AI would potentially get that “lesson” from computationally analyzing the data. The AI will do as humans do, kind of.
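A hedged sketch of that “learning the local mores” notion might look like the following: tally how often the observed human drivers stopped for discretionary pedestrians and have the AI mimic that tendency. A real effort would employ ML/DL over far richer features; the data and field names here are hypothetical.

```python
# Hypothetical sketch: derive the "customary" local behavior from camera data
# and have the AI mimic it.
import random

observed_events = [
    {"driver_stopped": True},  {"driver_stopped": True},
    {"driver_stopped": False}, {"driver_stopped": True},
    {"driver_stopped": False}, {"driver_stopped": True},
]

local_stop_rate = sum(e["driver_stopped"] for e in observed_events) / len(observed_events)
print(f"learned local stop tendency: {local_stop_rate:.2f}")   # 0.67

def ai_decides_to_stop():
    # The AI "does as humans do" -- including copying whatever biases are
    # baked into those observed human decisions, a pitfall discussed shortly.
    return random.random() < local_stop_rate
```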
The assertion is that the ethical behaviors of humans are being captured in the data and that the AI is going to imbue those same ethical precepts by computational analysis. An ethicist would generally describe this as a communitarian approach to ethics. The shared values of the community are reflected in the efforts of the community at large.
This might seem like a dandy solution.
Unfortunately, there are lots of pitfalls.
One perhaps obvious problem is that the human drivers might already be exercising some form of biases in choosing to stop or not stop (as mentioned earlier). The AI will then be a copycat of these biases. Do we want that to be the case?
Consider another problem. Suppose that human drivers are not readily accepting of how the AI self-driving cars are operating. Just because human drivers were, let’s say, willing to stop, this might not be equally presumed for self-driving cars. It could be that human drivers get irked by the AI self-driving cars that keep stopping for discretionary pedestrians, even though the same behavior at the hands of fellow human drivers doesn’t seem to disturb them.
Besides being disturbing, you can also conjure the possibility of human drivers inadvertently rear-ending self-driving cars. If a human driver wasn’t expecting the self-driving car to stop for a pedestrian, and if the human-driven car is directly behind the self-driving car, a dire mismatch of expectations can arise. The AI driving system brings the self-driving car to a stop. The human driver didn’t anticipate this action. The human driver slams into the self-driving car. Injuries and possibly fatalities ensue.
I purposely pointed out the chances of human harm.
The pedestrian crossing the street might seem at a quick glance as a trivial question. It would seem that nobody can get hurt by whichever method the AI chooses to stop or not stop. Wrong! There is a chance of the pedestrian getting run over. There is a chance of a human-driven car ramming into the self-driving car. The driver and passengers of the human-driven car can get hurt. The riders inside the self-driving car can get harmed. Additional permutations of possible human harm are readily envisioned.
Conclusion
Speaking of human harm, I’ll give you something else to get your minds roiling on this Ethical AI conundrum.
A news story reported that a man was driving a car into an intersection and had a green light to do so. Another human-driven car opted to run the red light of the intersection and illegally and unsafely went unimpeded into the intersection, threatening to strike the legally proceeding car.
The driver told reporters that he had to choose between taking the hit or veering his car to hopefully avoid getting struck, but there were pedestrians nearby and the veering action could endanger those people. What would you do? You can choose to get hit by this looming car and maybe live to tell the tale. Or you can attempt to avoid getting hit but meanwhile possibly run down innocent pedestrians.
Much of our daily driving entails those kinds of ethically enormous and instantaneous decisions. I have discussed this at length, along with relating these life-or-death driving decisions to the famous, or some say infamous, Trolley Problem, see my elaboration at the link here.
Replace the human driver in this scenario with an AI driving system.
What do you want the AI to do?
That’s a perplexing question.
One approach is to program the AI to stridently take a purely straight-ahead driving action, thus not even computationally considering other options such as veering away from the likely crash. I would anticipate that riders in self-driving cars are going to be upset to find out that the AI was not devised to do anything other than take the hit. You can expect lawsuits and an uproar.
Another approach would be to try and program the AI to consider the various possibilities. Who though gets to establish the ADM that decides which way the AI driving system would go? It would seem that allowing AI developers to make such weighty decisions on their own is fraught with abundant concerns.
You could try to let the AI “learn” from human driving data collected from day-to-day traffic situations. This is akin to the pedestrian crossing dilemma and the earlier suggested idea of using assembled data to have the AI glean whatever the local ethical mores seemed to be. There are lots of caveats about that, such as whether the ML/DL discovered patterns are apt or not, etc.
I had also forewarned that there is a chance of the AI gleaning perhaps unethical behavior, depending upon the ethical mores involved. For example, suppose the AI somehow landed on a computational pattern to always aim at the pedestrians whenever another car was threatening to strike the self-driving car.
Whoa, watch out pedestrians, you are going to be ready-made targets.
Time to close the discussion for now and do so with a heartening final thought.
You might be aware that Isaac Asimov proposed his “Three Laws Of Robotics” in 1950 and we are still to this day enamored of those proclaimed rules. Though the rules appear easy to abide by, such as that an AI system or robot shall not harm a human nor allow a human to come to harm, there are lots of crucial nuances that render this complex and at times untenable, see my analysis at the link here.
In any case, here’s something else that Asimov is also known for, though lesser so: “Never let your sense of morals get in the way of doing what’s right.”
For the Ethical AI that we all hope to devise, we have to keep in mind that the AI ethics imbued might not be what we anticipated and that somehow the AI will still need to do whatever is right. Asimov said it well. Long before him, Aristotle seemed to harbor quite similar sentiments.
Source: https://www.forbes.com/sites/lanceeliot/2022/03/20/ethical-ai-ambitiously-hoping-to-have-ai-learn-ethical-behavior-by-itself-such-as-the-case-with-ai-in-autonomous-self-driving-cars/