Do you perchance know someone that seems to be particularly accident-prone?
Seems like we all do.
Perhaps the person is the type that is apt to slip on a banana peel or knock over a precious vase in your living room. They are a human magnet for accidents of one kind or another. It could be so bad that you are even loath to be near them at times. You might get the ominous overflow or suffer the inglorious aftermath of one of their unsavory accidents.
But maybe we are being overly harsh when suggesting that anyone is more predisposed to accidents than others are. An argument could be made that accidents can happen to any of us. We are all subject to the possibility of committing or getting embroiled in an accident. The best of us included.
Let’s pile on top of this discussion an additionally vexing question.
Are accidents inevitable?
In other words, no matter how hard we might try to prevent accidents, it could be that there nonetheless remains a chance, and ultimately a certainty, that accidents will occur. You can seek to protect the world from accidents. That seems indubitably prudent. But one way or another, accidents will still rear their ugly head.
As they say, accidents are waiting to happen.
It might be helpful to clarify what is meant by referring to some event as being appropriately labeled as an accident. The usual dictionary definition is that an accident is a type of incident that occurs unexpectedly and unintentionally, for which there is an unfortunate result that consists of a semblance of damage or injury.
Mindfully unpack that definition.
The said incident is unexpected. This implies that we did not seem to realize that the accident itself would arise.
The said incident is unintentional. This suggests that we should rule out the circumstances wherein somebody intentionally sought to have the incident occur. If a prankster places a banana peel on the floor where they know an unlucky and unsuspecting innocent will step, you would be hard-pressed to assert that the person tripping over it had suffered an accident. They were instead tricked and insidiously led into a trap.
The definition also includes the criterion that the result is unfortunate. An accident in this light must lead to a sour outcome. The person that accidentally knocked over a vase has cracked and possibly damaged the relished item beyond repair. The owner of the vase is harmed by the loss of value. The person that ran into the vase might now owe the owner for the loss. Heaven forbid that anyone might have gotten cut or scraped by the breaking of the vase.
For the sake of balance, we might want to note that there are also so-called “good” accidents. A person could find themselves in great fortune or accrue some other vital benefit due to the result of an accident. One of the most oft-cited examples consists of Sir Alexander Fleming and his acclaimed discovery of penicillin. The story goes that he was a bit careless in his laboratory and upon returning from a two-week vacation he found a mold on one of his culture plates. Reportedly, he said this about the matter: “One sometimes finds what one is not looking for. When I woke up just after dawn on Sept. 28, 1928, I certainly didn’t plan to revolutionize all medicine by discovering the world’s first antibiotic, or bacteria killer. But I guess that was exactly what I did.”
We shall set aside the favorable accidents and focus herein on the dismal accidents. The frown face version of accidents is where those adverse outcomes can be especially life-threatening or have onerous results. As much as possible, we want to minimize the downside accidents (and, of course, maximize the upside accidents, if that is feasible to do, though I’ll cover that smiley face variant in a later column).
I’d like to slightly reword the earlier question about the inevitability of accidents. So far, we have kept our attention on accidents that occur in the particular instance of a singular person. There is no doubt that accidents can also impact a multitude of people all at once. This can be particularly encountered when people are immersed in a complex system of one sort or another.
Get yourself ready for a variant of the previously floated question.
Are system accidents inevitable?
We should mull this over.
Suppose a factory floor is laid out to make parts for cars. Those that designed the factory are, let’s say, extremely concerned about worker accidents that might occur. Factory workers are required to wear helmets at all times. Signs in the factory exhort workers to watch out for accidents and be mindful in their work. All manner of precautions are taken to avert accidents from happening.
In this system, we might hope that nobody will ever incur an accident. Do you believe that there is zero chance of an accident happening? I would dare suggest that no reasonable thinker would bet that the chance of an accident is zero in this case. The odds of an accident happening might be really low, yet we know and assume that despite all the precautions there remains a modicum of risk that an accident will take place.
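To see why “really low odds” and “zero odds” are worlds apart, here is a minimal back-of-the-envelope sketch, using purely hypothetical numbers of my own, showing how a tiny per-day chance of an accident compounds over a long stretch of time:

```python
# Back-of-the-envelope sketch with purely hypothetical numbers: even a tiny
# per-day chance of an accident compounds toward near-certainty over time.
daily_accident_probability = 0.001   # assumed 0.1% chance on any given day
days = 5 * 365                       # a five-year operating horizon

p_no_accident = (1 - daily_accident_probability) ** days
p_at_least_one = 1 - p_no_accident

print(f"Chance of at least one accident over {days} days: {p_at_least_one:.0%}")
# Prints roughly 84% -- low per-day odds do not stay low over the long run.
```

The specific figures are invented, but the compounding effect is the crux of the argument that follows.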
All of this points to the idea that in a system of sufficient complexity we are bound to believe that accidents will still occur, regardless of how hard we try to prevent them. We are reluctantly backing ourselves into the stipulation that system accidents are indeed inevitable. A grandiose statement of this caliber might have a caveat that the system would have to be of some threshold of complexity such that it is essentially impossible to cover all the bases to prevent accidents totally.
You have now been neatly step-by-step introduced to a broadly outlined theory about accidents that can be labeled as Normal Accidents or the Normal Accident Theory (NAT). Here is a handy description by researchers that have examined this notion: “At a large enough scale, any system will produce ‘normal accidents’. These are unavoidable accidents caused by a combination of complexity, coupling between components, and potential harm. A normal accident is different from more common component failure accidents in that the events and interactions leading to normal accident are not comprehensible to the operators of the system” (as stated in “Understanding and Avoiding AI Failures: A Practical Guide” by Robert Williams and Roman Yampolskiy, Philosophies journal).
The reason I’ve brought you to the land of so-called normal accidents is that we might need to carefully apply this theory to something that is gradually and inevitably becoming ubiquitous in our society, namely the advent of Artificial Intelligence (AI).
Let’s dig into this.
Some people falsely assume that AI is going to be perfect. AI systems will not make mistakes and will not get us into trouble, the speculation goes. All you need to do is make sure that those AI developers do a good enough job, and voila, AI will never do anything that could be construed as accidental or begetting an accident.
Not so fast on that hypothetical belief. If you are willing to buy into the theory of normal accidents, any AI of any substantive complexity is going to inevitably bring about an accident. Regardless of how much late-night tinkering those AI developers do to prevent an accident, the AI will surely at some point in time be embroiled in an accident. That is the way the cookie crumbles. And there’s no respite in crying over spilled milk about it.
Ponder the mashup of AI and the conceptual tenets of normal accidents.
Envision that we have an AI system that controls nuclear weapons. The AI has been carefully crafted. Every conceivable check and balance has been coded into the AI system. Are we safe from an AI-based accident that might occur? Those that support the normal accidents viewpoint would say that we are not quite as safe as might be assumed. Given that the AI is likely to be especially complex, a normal accident is silently waiting in there to someday emerge, perhaps at the worst possible moment.
The gist of those thorny questions and qualms is that we must stay on our toes, since AI is bound to be accident-laden, and humankind has to do something sensible and proactive about the dangers that can ensue. As you will see in a moment or two, this is a looming consideration when it comes to using AI, and the field of AI Ethics and Ethical AI is wrestling quite a bit with what to do. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Before we go down a rabbit hole, let’s make sure we are on the same page about the nature of AI. There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human giving you advice. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here). All told these scenarios would ratchet up the assessment of the source.
Let’s keep things more down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.
On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
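As a purely illustrative sketch of that monitoring notion (the function names, the wrapper pattern, and the toy audit check below are my own hypothetical stand-ins, not any established AI Ethics library), such an overseer might conceptually sit between another AI component and the outside world:

```python
# Hypothetical sketch of an "AI Ethics monitor" wrapping another AI component.
# All names and checks here are illustrative stand-ins, not a real API.
from typing import Any, Callable, Dict

def ethics_monitor(decision_fn: Callable[[Dict], Any],
                   audit_fn: Callable[[Dict, Any], bool]) -> Callable[[Dict], Any]:
    """Wrap a decision-making function with an auditing check on its outputs."""
    def monitored(inputs: Dict) -> Any:
        decision = decision_fn(inputs)
        if not audit_fn(inputs, decision):
            # Do not act on a decision the monitor flags; defer to a human instead.
            return {"action": "defer_to_human", "flagged_decision": decision}
        return decision
    return monitored

# Toy usage: a stand-in loan model plus a crude proxy-variable audit.
loan_model = lambda applicant: {"approve": applicant.get("income", 0) > 50_000}
audit = lambda applicant, decision: "zip_code" not in applicant  # toy check only
safe_loan_model = ethics_monitor(loan_model, audit)
print(safe_loan_model({"income": 80_000, "zip_code": "90210"}))  # -> deferred
```

The design choice being illustrated is simply separation of duties: the monitored component decides, and a distinct component vets, so a single flawed model cannot both act and approve its own actions.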
All told, a general hope is that by establishing a sense of AI Ethics precepts we will at least be able to increase societal awareness of what AI can both beneficially do and can adversely produce too. I’ve extensively discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the emerging norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
We might need to add to the vaunted AI Ethics listings that we need to explicitly be considering and taking overt action to prevent or at least stridently mitigate AI-based normal accidents that might occur. Those that develop AI need to do their best on that front. Those that deploy AI need to do likewise. Those that use or are somehow subject to AI should be wary and on their guard for the possibility of AI accidents that seemingly are going to arise.
You might be tempted to think that enough precautions can be built into the AI that the chances of an accident should drop to zero. For those that are techies, the usual hubris is that if a piece of tech can generate a problem, another piece of tech can surely solve the problem. Just keep tossing more and more tech until the problem goes away.
Well, those that have studied systems-oriented accidents would tend to disagree and politely retort to the presumed techie pomposity by proffering a standpoint known as the Swiss Cheese Model (SCM): “In the SCM, the layers of safety are modeled as slices of cheese with holes in them representing weak points in each layer of safety. Over time, holes change shape and move around. Eventually, no matter how many slices of cheese (layers of safety) there are, the holes will align allowing a straight shot through all of the slices of cheese (an accident occurs)” (per the earlier cited paper by Robert Williams and Roman Yampolskiy).
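A tiny Monte Carlo sketch, of my own construction rather than anything drawn from the cited paper, conveys the intuition: even with several independent safety layers, each having only a small chance of a “hole” on any given day, enough days will eventually let the holes line up.

```python
import random

# Illustrative Swiss Cheese Model simulation (hypothetical numbers): an
# accident requires every safety layer to have a hole on the same day.
random.seed(42)

layers = 4               # slices of cheese (independent safety layers)
hole_probability = 0.05  # chance a given layer fails on a given day
days = 10_000
trials = 500

runs_with_accident = 0
for _ in range(trials):
    for _ in range(days):
        if all(random.random() < hole_probability for _ in range(layers)):
            runs_with_accident += 1
            break

print(f"Runs with at least one all-layer alignment: {runs_with_accident / trials:.0%}")
# Four layers at 5% each give only about a 1-in-160,000 chance per day,
# yet roughly 6% of the ten-thousand-day runs still end in an accident.
```

Adding more layers shrinks the per-day alignment odds but never drives them to zero, which is exactly the retort to the "just add more tech" mindset.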
I don’t want to get bogged down in this side tangent about whether or not there is a guaranteed way to program AI to entirely and always avert any chance of an accident. All manner of mathematical and computational proving approaches are being tried. I think it is reasonable and fair to declare that we do not today have a scalable working method nor technologies that can ironclad guarantee such a zero chance, plus we are definitely saddled with tons upon tons of AI that is being pell-mell produced and that we know for sure has not sought to abide by such practices. That latter point is crucial since even if we could concoct something in an AI lab, scaling that to the zillions of wild and carefree AI efforts underway, and that will continue to emerge, is a knotty issue and not likely solved even if a computational proving machine silver bullet existed.
Another point that I believe merits a brief mention consists of AI that is turned toward doing untoward acts as a result of malicious human actors. I am not going to place those instances into the realm of AI accidents. Remember that the opening discussion suggested that the dictionary definition of an accident was an incident of an unintentional nature. If a human cybercrook manages to get an AI system to do bad things, I don’t classify that AI as experiencing an accident. I trust that you will go along with that presumption.
An interesting question comes up as to how much of various AI untoward actions can be attributed to purely an AI accident versus a cybercriminal’s devious act. According to some of the existing AI incidents reporting databases, it appears that AI accidents happen more often than the maliciously spurred incidents, though you have to take that notion with a hefty grain of salt. I say this because there is a great temptation to not report when an AI system was attacked, and perhaps a somewhat greater willingness to report when an AI accident occurs.
There is an extremely important caveat we need to discuss concerning AI accidents.
Using the catchphrase “AI accidents” is generally undesirable and going to create quite a mess for us all, meaning for all of society. When a human perchance has an accident, we often shrug our shoulders and are sympathetic to the person that had the accident. We seem to treat the word “accident” as though it means that no one is responsible for what happened.
Let’s take the example of getting into a car accident. One car swings wide at a right turn and accidentally rams into another car that was going straight ahead. Shucks, it was just an accident and accidentally happened. Those that weren’t involved in the incident are perhaps going to let the matter slide if the event is couched in terms of merely being an accident that happened.
I have a feeling though that if you were in the car that got struck, you would not be so sympathetic to the driver that made the overly wide turn. Your opinion would certainly be that the other driver was a lousy driver and that either an ostensibly illegal or unwise driving act led to the car crash. By labeling the incident as an “accident,” the driver that was struck is now at somewhat of a disadvantage since the appearance is that it all occurred by happenstance alone, rather than via the hands of the driver that messed up.
In fact, the word “accident” is so filled with varying connotations that by and large, the government statistics on car crashes refer to the matter as car collisions or car crashes, rather than making use of the phrase car accidents. A car collision or a car crash does not seem to have any implications about how the incident came to be. Meanwhile, the phrasing of a “car accident” almost leads us to think that it was a quirk of fate or somehow outside the hands of humankind.
You can abundantly see how this connotational consideration comes to play when referring to AI accidents. We don’t want AI developers to hide behind the connotational shield that the AI just accidentally caused someone harm. The same goes for those that deploy AI. You could argue that the phrasing of “AI accidents” is almost an anthropomorphizing of AI that will mislead society into allowing the humans that were behind the scenes of the AI to escape accountability. For my discussion about the rising importance of holding humans responsible for their AI, see my discussion at this link here and this link here.
I am going to henceforth herein use the catchphrase of AI accidents, but I do so reluctantly and only because it is the conventional way of referring to this phenomenon. Attempts to word this differently tend to be regrettably more bloated and not as easily read. Please make sure to interpret the catchphrase in a manner that does not cause you to look the other way and fail to realize that the humans underlying the AI are culpable when AI goes awry.
To help illustrate the likely confusion or misleading aspects of referring to AI as incurring accidents we can return to my remarks about knocking over a vase. Consider this example of AI doing so: “This problem has to do with things that are done by accident or indifference by the AI. A cleaning robot knocking over a vase is one example of this. Complex environments have so many kinds of ‘vases’ that we are unlikely to be able to program in a penalty for all side effects” (per the paper by Robert Williams and Roman Yampolskiy).
An AI system that is put into use in a household and then “accidentally” knocks over a vase would seem to suggest that nobody ought to be blamed for this adverse action of the AI. It was just an accident, one might lamentably decry. On the other hand, we should rightfully ask why the AI system wasn’t programmed to handle the vase circumstance in general. Even if the AI developers didn’t anticipate a vase per se as being within the scope of the objects that could be encountered, we can certainly question why there wasn’t some overarching object avoidance that would have kept the AI system from knocking over the vase (thus, the AI might admittedly not classify the vase as a vase, but still could have avoided it as a detectable object to be avoided).
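To make that concrete, here is a minimal, purely hypothetical sketch of the kind of overarching, class-agnostic object avoidance being described (the names, the clearance value, and the geometry are my own illustrative choices, not any vendor's actual system), in which the system steers clear of any detected object even when it cannot label that object as a vase:

```python
# Hypothetical sketch of class-agnostic obstacle avoidance: any detected
# object within a clearance radius is routed around, labeled or not.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class DetectedObject:
    x: float                # position in meters, robot-centric frame
    y: float
    label: str = "unknown"  # classification may simply fail for novel objects

CLEARANCE_METERS = 0.5      # illustrative safety margin, not a real spec

def path_is_clear(waypoint: Tuple[float, float],
                  detections: List[DetectedObject]) -> bool:
    """Reject a waypoint that passes too close to ANY detected object."""
    wx, wy = waypoint
    for obj in detections:
        if ((obj.x - wx) ** 2 + (obj.y - wy) ** 2) ** 0.5 < CLEARANCE_METERS:
            return False    # too close, regardless of what the object is
    return True

# Toy usage: an unclassified object blocks the waypoint even without a label.
print(path_is_clear((1.0, 0.0), [DetectedObject(x=1.2, y=0.1)]))  # -> False
```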
I have predicted and continue to predict that we are gradually heading toward a humongous legal battle over the emergence of AI systems that get themselves into “AI accidents” and cause some form of harm. Up until now, society has not reacted in any sizable way to legally push back at AI that is shoveled out into the marketplace and produces adverse consequences, either by intent or unintentionally. Today’s AI bandwagon has become a gold rush, and the makers of half-baked AI along with those that hurriedly enact the AI deployments are getting lucky right now, remaining relatively untouched by civil lawsuits and criminal prosecutions.
The AI-stoked legal backlash is going to come, sooner or later.
Moving on, how are we to try and cope with the inevitability of so-called AI accidents?
One thing we can do right away is try to anticipate how AI accidents might occur. By anticipating the AI accidents, we can at least seek to devise means to curtail them or minimize their likelihood of occurring. Furthermore, we can attempt to put in place guardrails so that when an AI accident does happen, the chances of demonstrable harm are lessened.
A helpful set of factors described in the earlier cited research article, “Understanding and Avoiding AI Failures: A Practical Guide,” includes these properties (as quoted from the research paper; a small illustrative sketch follows the list):
- The system which is affected by the outputs of the AI.
- Time delay between AI outputs and the larger system, system observability, level of human attention, and ability of operators to correct for malfunctioning of the AI.
- The maximum damage possible by malicious use of the systems the AI controls.
- Coupling of the components in proximity to the AI and complexity of interactions.
- Knowledge gap of AI and other technologies used and the energy level of the system.
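One way to make such factors operational, shown here purely as a hypothetical sketch of my own (the 1-to-5 scales, the parameter names, and the equal weighting are invented, not drawn from the cited paper), is to treat them as inputs to a rough pre-deployment risk indicator:

```python
# Purely hypothetical risk-scoring sketch: the factor names echo the list
# above, but the 1-to-5 scales and equal weighting are invented for
# illustration and are not part of the cited research.
def ai_accident_risk_score(affected_system_criticality: int,  # 1 (benign) to 5 (safety-critical)
                           correction_difficulty: int,        # delays, poor observability, low attention
                           max_malicious_damage: int,
                           coupling_and_complexity: int,
                           knowledge_gap_and_energy: int) -> float:
    """Combine the factors into a crude 0-to-1 risk indicator."""
    factors = [affected_system_criticality, correction_difficulty,
               max_malicious_damage, coupling_and_complexity,
               knowledge_gap_and_energy]
    return sum(factors) / (5 * len(factors))

# Toy example: a highly coupled AI controlling a safety-critical system
# with slow human oversight scores near the top of the scale.
print(ai_accident_risk_score(5, 4, 5, 4, 3))  # -> 0.84
```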
At this juncture of this hefty discussion, I’d bet that you are desirous of some illustrative examples that might further elucidate the AI accidents topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the advent of so-called AI accidents and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And AI Incidents
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the advent of so-called AI accidents.
Envision that an AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.
Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.
Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.
That’s something we might all need to get accustomed to, rightly or wrongly.
Back to our tale.
One day, the self-driving car gets into an accident.
While making a right turn, the AI driving system swung the autonomous vehicle widely and a human-driven car was struck. The human-driven car was proceeding straight ahead in the proper lane of traffic. There was no particular opportunity for the human driver to swerve or avoid getting hit. In addition, there was no warning or indication by the self-driving car that it was going to make the right turn so widely.
Is this an accident?
We can certainly say that it is encompassed within the rubric of being an AI accident. The basis for such an assertion is that there was an AI driving system at the wheel of the self-driving car. Somehow, for whatever reasons, the AI opted to make a wide swing when taking a right turn. The result led to the self-driving car hitting a human-driven car.
Recall the earlier discussion about the connotations associated with the word “accident” and see how such undertones come to play in this scenario. Also, remember that we discussed the case of a human driver making a wide right turn and running into a fellow human-driven car. We realized that the notion of this act being an “accident” is misleading and confounding. The human driver that made the wide swing could hide behind the idea that an accident merely occurred that was seemingly by happenstance or the vagaries of fate.
Instead of labeling the scenario as an “AI accident” in the case of the AI-based self-driving car going wide and hitting the human-driven car, perhaps we should say that it was a car crash or car collision involving a self-driving car and a human-driven car. We can then dispense with the vacuous confusion of it being an accident of unknowable means.
What do you think the public reaction to the incident would be?
Well, if the automaker or self-driving tech firm can stick with the labeling of the matter as an accident, they might be able to skirt around the potential backlash from the community at large. A sense of sympathy about accidents all told would possibly flow over onto the particular circumstance. For more about how cities, counties, and state leaders will potentially react to AI autonomous vehicle incidents, see the discussion of a Harvard study that I co-led and as described at the link here.
If the situation is plainly described as a car crash or car collision, perhaps that might then allow for the realization that someone or something is perhaps to blame for the incident. A knee-jerk reaction might be that the AI is to be held responsible. The thing is, until or if we ever decide to anoint AI as having a semblance of legal personhood, you are not going to be able to pin the responsibility on the AI per se (see my discussion on AI and legal personhood at the link here).
We can inspect the AI driving system to try and figure out what led to the seeming inappropriate driving and the subsequent car crash. That doesn’t though imply that the AI is going to be held accountable. The responsible parties include the AI developers, the fleet operator of the self-driving car, and others. I include others too since there is a possibility that the city might be held partially responsible for the design of the corner where the turn took place. In addition, suppose a pedestrian was darting off the corner and the AI driving system opted to avoid the person and yet then got enmeshed into the car crash.
And so on.
Conclusion
We would want to know what the AI was computationally calculating and what it had been programmed to do. Did the AI do as was coded? Maybe the AI encountered a bug or error in the programming, which doesn’t excuse the actions but provides more of a clue to how the crash came about.
What kind of AI guardrails were programmed into the AI driving system? If there were guardrails, we would want to figure out why they seemed to not prevent the car crash. Maybe the AI driving system could have come to a halt rather than making the turn. We would want to know what alternatives the AI computationally assessed during the course of the incident.
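As a purely hypothetical sketch of what that kind of forensic visibility could look like (the maneuver names, scores, and log format below are invented for illustration, not any automaker's actual practice), a driving system might persist every candidate maneuver it weighed so investigators can later see why one was chosen:

```python
# Hypothetical decision-logging sketch: record the maneuvers considered and
# the one chosen, so a post-incident review can see the alternatives assessed.
import json
import time
from typing import Dict

def choose_and_log(candidates: Dict[str, float],
                   log_path: str = "decision_log.jsonl") -> str:
    """Pick the highest-scoring maneuver and persist the full comparison."""
    chosen = max(candidates, key=candidates.get)
    record = {"timestamp": time.time(), "candidates": candidates, "chosen": chosen}
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return chosen

# Toy usage: invented scores; a real system would derive these from its planner.
maneuver = choose_and_log({"wide_right_turn": 0.62, "halt": 0.58, "tight_right_turn": 0.40})
print(maneuver)  # -> "wide_right_turn"
```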
Besides getting to the bottom of the particular incident, another rightful qualm is whether the AI driving system has a flaw or other embedded aspect that will do similar kinds of adverse acts. In essence, this incident might be a telltale indicator of more to come. How were computer-based simulations of driving situations used to try and anticipate this type of AI driving system behavior? Were there sufficient roadway driving tests to have ferreted out the AI issues that might have led to the car crash?
This scenario highlights a contentious conundrum that is facing the emergence of AI-based self-driving cars.
It goes like this.
On the one hand, there is a societal desire to adopt self-driving cars expeditiously due to the hope that AI driving systems will be as safe or possibly safer than human drivers. In the United States alone, we currently have nearly 40,000 human fatalities annually due to car crashes and about 2.5 million human injuries. Analyses suggest that a sizable portion of those car crashes are attributable to human error such as driving while intoxicated, driving while distracted, etc. (see my assessment of such stats at the link here).
AI driving systems won’t drink and drive. They won’t need rest and they won’t get worn out while at the wheel. The assumption is that by establishing self-driving cars as a viable mode of transportation, we can reduce the number of human drivers. This in turn should mean that we’ll commensurately reduce the number of annual human fatalities and injuries from car crashes.
Some pundits have said that we will end up with zero fatalities and zero injuries, and those self-driving cars will supposedly be uncrashable, but this is an altogether absurd and utterly false set of expectations. I’ve explained why this is so disingenuous at the link here.
In any case, assume that we are going to have some amount of car crashes that self-driving cars get involved in. Assume too that those car crashes will have some amount of fatalities and injuries. The question that is being agonized over is whether we as a society are willing to tolerate any such instances at all. Some say that if even one fatality or one injury happens as a result of true self-driving cars, the whole kit and kaboodle should be closed down.
The countervailing viewpoint is that if self-driving cars are reducing the annual counts of lives lost, we should continue to encourage their advent and not react in such an illogical manner. We will need to accept the premise that some amount of fatalities and injuries will still exist, even with self-driving cars, and yet realize that if the annual count is decreasing, it suggests that we are on the right path.
Of course, some argue that we should not have self-driving cars on our public roadways until they are cleared for such use as a result of extensive and exhaustive computer-based simulations or via private closed-track testing. The counterargument is that the only viable and fastest way to get self-driving cars going is by using public roadways and that any delays in adopting self-driving cars are otherwise going to allow the horrendous counts of human-driven car crashes to continue. I’ve covered this debate at greater length in my columns and urge readers to look at those discussions to get the full sense of the perspectives on this controversial matter.
Let’s wrap things up for now.
AI accidents are going to happen. We must resist the impulse to construe an AI accident as seemingly accidental and ergo falsely let the makers and those that deploy AI be categorically off the hook.
There is an added twist that I leave you with as a final intriguing thought for your day.
The comedian Dane Cook reportedly told this joke about car accidents: “A couple of days back, I got into a car accident. Not my fault. Even if it’s not your fault, the other person gets out of their car and looks at you like it’s your fault: Why did you stop at a red light and let me hit you doing 80!”
Where the twist comes to the fore is the possibility that when an AI accident occurs involving a particular AI, the AI might opt to insist that the incident was the fault of the human and assuredly not the fault of the AI. By the way, this could be abundantly true, and the human might be trying to scapegoat the AI by claiming that it was the fault of the AI.
Or maybe the AI is trying to scapegoat the human.
You see, the AI that we devise might be tricky that way, either by accident or not.
Source: https://www.forbes.com/sites/lanceeliot/2022/04/28/ai-ethics-wrestling-with-the-inevitably-of-ai-accidents-which-looms-over-autonomous-self-driving-cars-too/