AI Ethics Welcomes The Prospects Of A Standardized AI Risk Management Framework, Which Could Bolster Autonomous Self-Driving Car Efforts Too

We seem to be told repeatedly that taking risks is important in life.

If you look at any everyday listing of quotable quotes, there is a preponderance of life hacks telling you to embrace risk. Take a chance and climb out on a limb. Set aside your constraining worries and fly free with the wonderment of risk. Risk is your friend. Risk makes the world go around. Simply stated, no risk, no gain.

Though these glowing encouragements about being a risk-taker seem sensible, somehow the counterbalancing and sobering thoughts about the downsides of risk are left unsaid. Risk can put you in jeopardy. Dire harms can occur from risky actions. Simply stated, risk is not risk-free.

General George Patton famously asserted that we should take calculated risks, which he characterized as being quite different from being rash. Thus, think beforehand about the risks that you are willing to absorb. Be aware of known risks and the potential for unknown risks. A person has got to know their limitations when it comes to risk and risk-taking.

I am bringing up the nature and scope of “risk” as a type of analytically describable phenomenon in order to highlight that when it comes to AI there is an increasing need to determine how risky our expanding adoption of AI is. AI is not risk-free. On the contrary, AI presents many sizable and scarily massive risks that require us all to take a deep breath and start seriously calculating what those risks are. We must have our eyes wide open and know about AI risks as we plunge headfirst into the pell-mell rush toward embracing AI.

Please realize that all of today’s stakeholders are faced with AI risks.

For example, a firm that is crafting an AI system is taking a risk that the AI eventually could cause some form of substantive harm to those that use it. The harm might be financial, psychological, or possibly physical, injuring or even killing someone. Executives of the firm are likely to be held legally accountable for having devised and released the AI. The AI developers that built it are bound to be held accountable too. There are lots of hands that go into making and promulgating AI, and they all can be considered jointly responsible and culpable for whatever adverse outcomes they allowed to occur.

Think of AI risk as something that floats along and attaches to all that have a touchpoint associated with the AI. Users of an AI system are taking on some amount of risk. They might be harmed by AI. Those that devised the AI are taking on some amount of risk associated with the harmful outcomes that their AI might produce. Risk is pervasive in the realm of AI and yet oftentimes seems completely neglected and generally woefully understated.

The bad news then is that not enough attention is going toward AI risks.

The good news is that a burgeoning appreciation of the vital importance of understanding and measuring AI risk is gaining speed. As a healthy sign of this awareness, we can take a look at the AI Risk Management Framework (RMF) being formulated by the National Institute of Standards and Technology (NIST). I’ll be quoting herein from the draft document dated March 17, 2022. There are various meetings underway to further refine and expand the document. A semi-finalized version known as AI RMF 1.0 is targeted to be issued in January 2023.

Before I jump into the existing draft of the AI RMF, I would like to emphasize that anyone genuinely interested in AI Ethics ought to especially keep track of what the AI RMF consists of. Besides staying on top of the draft, you might also consider getting involved with the drafting effort and aiding in the formulation of the AI Risk Management Framework all told (note that NIST is holding public workshops that welcome such input). You indeed can help make history.

Let’s briefly explore an important mash-up that exists between AI Ethics and AI risks. AI risks are integral to the practice of Ethical AI. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few. You could readily claim that AI risk is immersed throughout all AI Ethics principles or precepts. A convenient mental model would be to envision a spreadsheet of sorts in which the principles of AI Ethics form one dimension (say, the columns) and AI risk forms the other dimension (the rows), weaving in and throughout each of the principles.

Speaking of AI risks brings up the varying nuance of what manner of AI one is alluding to. Despite those blaring headlines about the proclaimed human-like wonders of AI, there isn’t any AI today that is sentient. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on herein consists of the non-sentient AI that we have today.

If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
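
To make that pattern-matching process concrete, here is a minimal sketch of the idea in Python. The dataset, feature names, and model choice are purely hypothetical illustrations of the general ML approach described above, not a depiction of any actual system.

```python
# Minimal sketch of computational pattern matching: a model is fit to
# historical decisions and then applied to new cases. The data and
# feature names below are invented solely for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [years_of_experience, prior_defaults] -> loan approved (1) or not (0)
historical_features = [[5, 0], [1, 2], [8, 0], [2, 1], [10, 0], [0, 3]]
historical_decisions = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(historical_features, historical_decisions)

# The patterns found in the "old" data are now applied to render a decision on new data.
new_applicant = [[3, 1]]
print(model.predict(new_applicant))  # mimics whatever patterns (and biases) the history holds
```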

I think you can guess where this is heading. If the humans who have been making the decisions being patterned upon have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that, even with relatively extensive testing, there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

Let’s cover briefly some of the overall Ethical AI precepts that I’ve previously discussed in my columns to herein illustrate what ought to be a vital consideration for anyone and everyone that is crafting, fielding, or using AI. We’ll then dive into the topic of AI risks.

As stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable.
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop.
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency.
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity.
  • Reliability: AI systems must be able to work reliably.
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possesses an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

I mentioned earlier herein that AI risk is a matter that intersects across all of the AI Ethics precepts. To help make that intersection vivid, consider a rewording of the key AI Ethics principles that illuminates the AI risk matter (a small illustrative sketch follows the list):

  • Transparency and associated AI Risks
  • Justice & Fairness and associated AI Risks
  • Non-Maleficence and associated AI Risks
  • Responsibility and associated AI Risks
  • Privacy and associated AI Risks
  • Beneficence and associated AI Risks
  • Freedom & Autonomy and associated AI Risks
  • Trust and associated AI Risks
  • Sustainability and associated AI Risks
  • Dignity and associated AI Risks
  • Solidarity and associated AI Risks
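
The spreadsheet-style mental model mentioned earlier, with principles along one dimension and risks along the other, can be pictured as a simple data structure. The sketch below is purely hypothetical: the risk descriptions and severity ratings are invented placeholders, used only to show how a per-principle risk register might be organized.

```python
# Hypothetical sketch of the "principles x risks" mental model: each AI Ethics
# principle carries its own register of identified risks. The entries and the
# 1 (low) to 5 (high) severity ratings are invented for illustration.
ai_risk_register = {
    "Transparency":       [("Opaque model decisions cannot be explained to users", 4)],
    "Justice & Fairness": [("Training data encodes historical bias", 5)],
    "Privacy":            [("Personal data is retained longer than needed", 3)],
    "Non-Maleficence":    [("Model degrades silently as input data drifts", 4)],
}

# Walk the register the way one might scan the spreadsheet, row by row.
for principle, risks in ai_risk_register.items():
    for description, severity in risks:
        print(f"{principle}: {description} (severity {severity}/5)")
```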

Before we unpack this, let’s consider what the word “risk” means.

I say this because risk as a catchword has different meanings depending upon whom you are speaking with. In exploring this facet, I will also bring up another NIST document that you should consider studying if you are going to get into the AI RMF, namely the overall NIST Risk Management Framework (RMF) that broadly covers Information Technology systems and risk management and has been in place for a while. The AI RMF is essentially an instantiation of the broader RMF (you might cheekily say that the AI RMF is the son or daughter of the all-told RMF).

Per the overall NIST RMF, here is a definition of risk: “Risk is a measure of the extent to which an entity is threatened by a potential circumstance or event. Risk is also a function of the adverse impacts that arise if the circumstance or event occurs, and the likelihood of occurrence. Types of risk include program risk; compliance/regulatory risk; financial risk; legal risk; mission/business risk; political risk; security and privacy risk (including supply chain risk); project risk; reputational risk; safety risk; strategic planning risk.”

The NIST AI RMF draft defines risk this way: “Risk is a measure of the extent to which an entity is negatively influenced by a potential circumstance or event. Typically, risk is a function of 1) the adverse impacts that could arise if the circumstance or event occurs; and 2) the likelihood of occurrence. Entities can be individuals, groups, or communities as well as systems, processes, or organizations.”
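
Both definitions frame risk as a function of an adverse impact and its likelihood of occurrence. One common way to operationalize that framing is an expected-impact calculation, sketched below. To be clear, the simple product here is an illustrative convention with invented numbers, not a formula prescribed by NIST.

```python
# Minimal sketch of the NIST-style framing: risk as a function of the adverse
# impact of an event and its likelihood. The product used here is one common
# operationalization (an expected-impact reading), not the NIST formula itself.
def risk_score(impact: float, likelihood: float) -> float:
    """impact: estimated harm on some agreed scale; likelihood: probability in [0, 1]."""
    return impact * likelihood

# Hypothetical example: a harm rated 8 out of 10 with a 5% chance of occurring.
print(risk_score(impact=8.0, likelihood=0.05))  # -> 0.4
```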

Digging deeper and perhaps muddying the waters, the Stanford Encyclopedia of Philosophy handily points out that risk is often couched in five different connotations:

1) Risk is an unwanted event that may or may not occur

2) Risk is the cause of an unwanted event that may or may not occur

3) Risk is the probability of an unwanted event that may or may not occur

4) Risk is the statistical expectation value of an unwanted event that may or may not occur

5) Risk is the fact that a decision is made under conditions of known probabilities

For now, let’s collegially agree that, within this discussion, we are going to treat the notion of what risk is in a generalized manner per the aforementioned NIST RMF and NIST AI RMF definitions and not get stuck on the tortuous variations. I trust then that you are comfortable with my above foundation of having settled, for the time being, the contextual meaning of AI and the meaning of risk.

An AI risk management framework is a means of ferreting out the risks of AI, along with hopefully managing those risks.

According to the AI RMF, here is the formal purpose or aspiration of the AI Risk Management Framework being formulated: “An AI Risk Management Framework (AI RMF, or Framework) can address challenges unique to AI systems. This AI RMF is an initial attempt to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully. This voluntary framework provides a flexible, structured, and measurable process to address AI risks throughout the AI lifecycle, offering guidance for the development and use of trustworthy and responsible AI.”

The NIST also realizes that an AI RMF as a proposed standard has to be readily useable, be updated as technology advances, and embody other core criteria: “A risk management framework should provide a structured, yet flexible, approach for managing enterprise and societal risk resulting from the incorporation of AI systems into products, processes, organizations, systems, and societies. Organizations managing an enterprise’s AI risk also should be mindful of larger societal AI considerations and risks. If a risk management framework can help to effectively address and manage AI risk and adverse impacts, it can lead to more trustworthy AI systems.”

Some of you that are a bit skeptical might be questioning why we need an AI RMF versus just relying on the generalized RMF that is already readily available. Aren’t we simply reinventing the wheel? The answer is no, we are not reinventing the wheel. A wheel is customizable to a particular need. A reasonable person would likely acknowledge that there are wheels of all different kinds of shapes and sizes. The wheel on an airplane is undoubtedly quite different from the wheel that is on a child’s tricycle. Sure, they are both wheels, but they are devised differently and ergo have different characteristics and can rightfully be examined distinctly too.

The AI RMF document expresses a similar sentiment: “Risks to any software or information-based system apply to AI; that includes important concerns related to cybersecurity, privacy, safety, and infrastructure. This framework aims to fill the gaps related specifically to AI.”

In the existing version of the AI RMF draft, they define four stakeholder groups:

  • AI System Stakeholders
  • Operators and Evaluators
  • External Stakeholders
  • General Public

The bulk of the attention about AI risk usually goes toward the AI system stakeholders. That makes sense. These are the stakeholders involved in conceiving, designing, building, and fielding the AI. In addition, we can include those that acquire or license AI for use. We tend to view those stakeholders as the highly visible parties that did the heavy lifting in shepherding the AI system into existence and fostered its deployment.

You might not have equally thought of or considered instrumental the AI operators and evaluators. As stated in the AI RMF, the AI operators and evaluators do this: “Operators and evaluators provide monitoring and formal/informal test, evaluation, validation, and verification (TEVV) of system performance, relative to both technical and socio-technical requirements.” They are crucial to AI and also within the band of AI risks.

External stakeholders would encompass a wide array of entities including trade groups, advocacy groups, civil society organizations, and others. The general public consists of consumers and others that experience the risk associated with untoward AI.

You might be wondering how much risk is tolerable when it comes to AI.

Sorry to say that there is no particular number or assigned value that we can give to the amount of tolerable or acceptable risk that we might find worthwhile or societally permissible. For those of you that want a standardized designated numeric indication, you’ll need to temper that desire by this notable point in the AI RMF draft: “The AI RMF does not prescribe risk thresholds or values. Risk tolerance – the level of risk or degree of uncertainty that is acceptable to organizations or society – is the context and use case-specific.”

A recommended methodology by the AI RMF for examining and governing AI risk is depicted as consisting of base steps labeled as Map, Measure, and Manage. The Map function frames the risks of an AI system. The Measure function encompasses the tracking and analysis of AI risk. The Manage function makes use of the Map and Measure functions to then try to minimize adverse impacts while maximizing the benefits of the AI. According to the draft AI RMF, later versions of the standard will include a Practice Guide as a companion to showcase examples and practices of using the AI RMF.
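
To give a feel for how the Map, Measure, and Manage functions fit together, here is a small hypothetical sketch. The AI RMF prescribes activities and outcomes, not code; the class, function names, placeholder scoring, and mitigations below are all invented for illustration.

```python
# Hypothetical sketch of a Map -> Measure -> Manage flow inspired by the draft AI RMF.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class AIRisk:
    description: str
    severity: float = 0.0   # filled in by the Measure step
    mitigation: str = ""    # filled in by the Manage step

def map_risks(system_description: str) -> list[AIRisk]:
    # Map: frame the risks of the AI system in its context of use (illustrative entries).
    return [AIRisk("Biased outcomes for underrepresented groups"),
            AIRisk("Opaque decisions that cannot be explained")]

def measure_risks(risks: list[AIRisk]) -> list[AIRisk]:
    # Measure: track and analyze each identified risk (placeholder scoring).
    for risk in risks:
        risk.severity = 3.5
    return risks

def manage_risks(risks: list[AIRisk]) -> list[AIRisk]:
    # Manage: use the mapped and measured risks to minimize adverse impacts.
    for risk in risks:
        risk.mitigation = "Add monitoring and human review" if risk.severity > 3 else "Accept"
    return risks

for r in manage_risks(measure_risks(map_risks("hypothetical loan-approval AI"))):
    print(r)
```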

In the broader NIST RMF standard, there is an expanded set of seven steps that coincide with doing an overarching IT and systems risk management effort. I’ve found those seven steps handy to keep in mind, including when building and deploying AI systems.

The seven steps are (quoting from the NIST RMF standard):

1. Prepare to execute the RMF from an organization- and a system-level perspective by establishing a context and priorities for managing security and privacy risk.

2. Categorize the system and the information processed, stored, and transmitted by the system based on an analysis of the impact of loss.

3. Select an initial set of controls for the system and tailor the controls as needed to reduce risk to an acceptable level based on an assessment of risk.

4. Implement the controls and describe how the controls are employed within the system and its environment of operation.

5. Assess the controls to determine if the controls are implemented correctly, operating as intended, and producing the desired outcomes with respect to satisfying the security and privacy requirements.

6. Authorize the system or common controls based on a determination that the risk to organizational operations and assets, individuals, other organizations, and the Nation is acceptable.

7. Monitor the system and the associated controls on an ongoing basis to include assessing control effectiveness, documenting changes to the system and environment of operation, and conducting risk assessment.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase the nature of AI risks. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI risks, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take away their attention from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Risks

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the nature of AI risks.

As a human driver, you are a finely tuned risk calculator.

That’s right, when you are driving a car you are in real-time tasked with figuring out the risks that a pedestrian might suddenly dart into the street, or that a car ahead of you will unexpectedly slam on its brakes. There is a decidedly murky haziness and fuzziness in the driving situations that we face.

You try to make the best evaluation you can of the risks involved at every moment of driving, and you then have to bear the consequences of your assessments. Lamentably, there are about 40,000 car crash fatalities each year in the United States alone and about 2.5 million related injuries (see my collection of such stats at the link here). Sometimes you cut things pretty close and escape a bad situation by the skin of your teeth. Other times you misjudge and bump against something or collide with someone.

You are ordinarily mentally updating the risk aspects as the driving effort is underway. Imagine the simple case of a bunch of kids aiming to jaywalk. At first, you might rate the risk of their jaywalking and getting struck as quite high. But then you notice that a nearby adult is prodding them not to jaywalk, and thus the risk of the kids intruding out into the street and getting run down is lessened. Note though that the risk did not drop to zero, since they can still opt to enter the roadway.

There is a well-known risk-related standard in the automotive realm known as the Automotive Safety Integrity Level (ASIL) risk classification scheme, based on an official document referred to as ISO 26262. I’ve covered various AI self-driving car driving-oriented risk-related considerations at the link here and also the link here.

When determining risk while driving, here’s an equation that provides a means to get your arms around risk aspects:

  • Risk = Severity x (Exposure x Controllability)

Let’s explore the formula and its components.

Severity is important to consider when ascertaining risk while driving since you might be heading toward a brick wall that will end up causing you and your passengers to be injured or killed (a notably high severity outcome) while hitting some discarded soda cans on the freeway might be relatively low in severity. Formally per the ISO standard, severity is a measure of the potential harm that can arise and is categorized into (S0) No injuries, (S1) Light and moderate injuries, (S2) Severe injuries, (S3) Life-threatening and fatal injuries.

Exposure concerns whether the chances of the incident occurring are substantial versus unlikely, in terms of your being exposed to the matter (i.e., the state of being in an operational situation of a hazardous nature). Per the ISO standard, exposure can be divided into (E0) negligible, (E1) very low, (E2) low, (E3) medium, and (E4) high.

Controllability refers to the capability of being able to maneuver the car to avoid a pending calamity. This can range from avoiding the situation entirely or merely skirting it, or that no matter what you do there is insufficient means to steer, brake, or accelerate and avert the moment. The ISO standard indicates controllability can be divided into (C0) generally controllable, (C1) simply controllable, (C2) normally controllable, and (C3) difficult or uncontrollable.

By combining the three factors of severity, exposure, and controllability, you can arrive at an indication of the risk assessment for a given driving situation. Presumably, we do this in our heads, cognitively, though how we actually do so and whether we even use this kind of explicit logic is debatable since no one really knows how our minds work in this capacity.
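
Here is a minimal sketch of the formula above using the S, E, and C categories as numeric indices. Be aware that ISO 26262 actually assigns ASIL levels (QM, A through D) via a lookup table rather than literal multiplication; the scoring and the interpretation comments below are invented purely to illustrate how the three factors combine.

```python
# Minimal sketch combining Severity (S0-S3), Exposure (E0-E4), and
# Controllability (C0-C3) per the article's formula. The real ISO 26262
# ASIL determination uses a lookup table, not multiplication; numbers here
# are illustrative only.
def driving_risk(severity: int, exposure: int, controllability: int) -> int:
    assert 0 <= severity <= 3 and 0 <= exposure <= 4 and 0 <= controllability <= 3
    return severity * (exposure * controllability)

# Hypothetical scenarios:
# Children possibly darting into a busy street with little room to brake.
print(driving_risk(severity=3, exposure=3, controllability=3))  # 27 -> treat as high risk
# Running over discarded soda cans on the freeway.
print(driving_risk(severity=0, exposure=2, controllability=1))  # 0 -> negligible
```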

Do not be misled by the seemingly mathematical formula and construe that the matter of deriving risk while driving is somehow entirely clear-cut. There is a tremendous amount of judgment that goes into how you as a human classify the exposure, the severity, and the controllability.

This deriving of driving risk is hard for humans. Trying to craft AI to do likewise is also extraordinarily difficult. Notably, a core capability of an AI driving system entails having to perform algorithmic decision-making (ADM) about driving risks. You might be surprised to know that many of the AI driving systems today do not undertake the calculating and assessing of driving risk robustly. Generally, very crude and highly simplified approaches are used. Whether this will scale up to widespread adoption of self-driving cars is an open question. For more about this dilemma, tied with a famous thought experiment known as the Trolley Problem, see my analysis at this link here.

Another concern is that the AI driving systems are often programmed in a byzantine way and the portions that deal with driving risk aspects are buried deep within a morass of code. There is little transparency about how a particular automaker or self-driving tech firm has opted to program the driving risk capacities of their AI system. There is a likelihood that we will see regulatory and public scrutiny come to bear once self-driving cars become more prevalent.

Recall that the AI Risk Management Framework defined four stakeholder groups, for which self-driving cars can be readily viewed:

  • AI System Stakeholders – Automakers and self-driving tech firms
  • Operators and Evaluators – Fleet operators
  • External Stakeholders – City leaders, regulators, etc.
  • General Public – Pedestrians, bicyclists, etc.

The automakers and self-driving tech firms should be examining the risks associated with the AI that they are developing and fielding in self-driving cars. A mainstay of AI risk would be in the AI driving system elements, though there are other uses of AI in autonomous vehicles and driverless cars.

The expectation is that there will be fleet operators that will be in charge of running large sets of self-driving cars for use by the public. Those fleet operators are typically supposed to keep the autonomous vehicles in proper drivable shape and make sure that the self-driving cars are safe for use. Their focus is mainly aimed at the hardware and less so on dealing with the onboard software. In any case, they too should be considering the AI risks associated with self-driving cars and their operational uses.

Wherever self-driving cars are approved for public use, the odds are that various city, state, and at times federal levels of approval and possibly oversight will be undertaken. There are also various existing laws and newly enacted laws that govern how self-driving cars can be deployed onto public roadways, see my coverage at the link here. These public-minded stakeholders should also be examining the AI risks associated with self-driving cars.

As long as self-driving cars are placed on public highways and byways, the general public also should be thinking about the AI risks involved. Pedestrians are at risk of a self-driving car ramming into them. The same for bicyclists. All other roadway users are potentially vulnerable to AI risks entwined within the use of autonomous vehicles in any given locale.

Conclusion

We need to devote more attention to AI risks. Having a standardized AI risk management framework will provide a handy tool for ascertaining AI risks. The odds are too that the expanding use of AI Ethics guidelines will carry along the need for determining AI risks, doing so as part and parcel of abiding by the precepts of Ethical AI.

I began this discussion by pointing out that General Patton said we should be explicitly calculating risk. He also famously exhorted that people should always do more than what is required of them.

I implore you to consider that even if you are not being required to examine AI risks, you should earnestly go beyond the norm and strive to do so. Let’s all face up to AI risks and make sure that we don’t climb out on a precarious societal perch that we cannot get safely back from.

Source: https://www.forbes.com/sites/lanceeliot/2022/04/11/ai-ethics-welcomes-the-prospects-of-a-standardized-ai-risk-management-framework-which-could-bolster-autonomous-self-driving-car-efforts-too/