AI Ethics And AI Law Are Moving Toward Standards That Explicitly Identify And Manage AI Biases

Have you ever played fifty-two card pick-up?

It is not a game that you would normally willingly undertake. Here’s why. Someone assures you that it is an allegedly fun pastime, and if you take the sweet bait, they toss an entire deck of playing cards into the air and summarily onto the floor. The person then gives you a cheeky smile and tells you to go ahead and pick up the cards. That’s the entire game.

Prankster!

I do have a somewhat thoughtful question to ask you about this.

Suppose that one of the cards slipped underneath a nearby sofa. When you had finished picking up all the cards, you would know that one was missing because there would only be fifty-one in your hand.

The question is, could you determine which card was missing?

I’m sure that you would immediately say that you could easily figure out which card was not in your hands. All you would have to do is put the deck into order. You know that a standard deck consists of four suits and that within each suit the cards run from Ace through ten and then the Jack, Queen, and King.

You know this because a standard deck of playing cards is based on a standard.

Whoa, that statement might seem like one of those totally obvious assertions. Well, yes, of course, a standard playing deck is based on a standard. We all know that. My point is that by having a standard we can rely upon the standard when so needed. Besides being able to deduce what card is missing from a deck, you can also readily play zillions of well-known card games with other people. Once someone is told the rules of a game, they are directly able to play because they already fully know what the deck consists of. You don’t need to explain to them that the deck has four suits and variously numbered cards. They already know that to be the case.

Where am I going with this?

I am trying to take you down a path that is a vital means of making progress in the field of AI and especially the realm of AI Ethics and Ethical AI. You see, we need to try to come up with widespread and broadly agreed-upon standards about AI Ethics. If we can do so, it will ease the adoption of Ethical AI and demonstrably improve the AI systems that keep getting tossed pell-mell into the marketplace (like an unnumbered and unordered deck of wild cards). For my ongoing and extensive coverage of AI Ethics, Ethical AI, and AI Law, see the link here and the link here, just to name a few.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

I bring this up to provide a foundation for my discussion herein, which will focus on a particular segment or portion of the broader realm of AI Ethics, namely, as mentioned earlier, the specific element of AI biases. The reason too that I share this topic with you is that a document released by the National Institute of Standards and Technology (NIST) is trying to get us to inch our way toward a standard characterizing AI biases. The document is entitled Towards A Standard For Identifying And Managing Bias In Artificial Intelligence by authors Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall, and was published by the U.S. Department of Commerce as NIST Special Publication 1270 in March 2022.

We will be unpacking this handy and encouraging effort toward establishing what we mean by AI biases. The old saying is that you cannot manage that which you cannot measure. By having a standard that lays out the variety of AI biases, you can begin to measure and manage the AI biases scourge.
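
To make the measurement side of that saying a bit more concrete, here is a minimal sketch, in Python, of one common way to put a number on a bias, namely the gap in selection rates between two groups (a demographic-parity style check). The metric and the sample numbers are generic illustrations of my own and are not drawn from the NIST document.

```python
# Minimal sketch: quantify one kind of bias as a gap in selection rates.
# The groups and outcomes below are hypothetical, purely for illustration.

def selection_rate(decisions):
    """Fraction of favorable outcomes (1 = favorable, 0 = unfavorable)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in selection rates between two groups."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical decisions rendered for members of two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% favorable
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% favorable

print(f"Demographic parity gap: {demographic_parity_gap(group_a, group_b):.2f}")
```

Once a gap like this can be computed, it can be tracked, compared against a threshold, and managed over time, which is precisely the point of having agreed-upon characterizations to measure against.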

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Underlying many of those key AI Ethics precepts is the insidious nature of AI biases.

Just like a deck of cards, it sure would be nifty if we could somehow group together the AI biases into a set of “suits” or categories. Indeed, the NIST document proffers a suggested grouping.

Three major categories are being proposed:

1) Systemic Biases

2) Statistical and Computational Biases

3) Human Biases

Whether all AI biases fit neatly within one of those three categories is certainly something to be considered. You can assuredly argue that some AI biases fall into one, two, or all three categories at the same time. Furthermore, you might claim that more categories deserve to be mentioned, such as a fourth, fifth, sixth, or further series of groupings.

I hope that’s what you are thinking because we need to get everyone involved in helping to shape these standards. If you are riled up with the way that these standards are first shaping up, I urge you to turn that energy into aiding the rest of us in making those budding standards as robust and complete as they can be.

For now, we can take a closer look at the proposed three categories and see what kind of a hand we’ve been dealt so far (yes, I am going to continue to use an analogy to a deck of playing cards, doing so throughout the entirety of this written piece, and you can bet your bottom dollar on that not-so-hidden ace of a theme).

What is meant by referring to systemic biases?

Here’s what the NIST document says: “Systemic biases result from procedures and practices of particular institutions that operate in ways which result in certain social groups being advantaged or favored and others being disadvantaged or devalued. This need not be the result of any conscious prejudice or discrimination but rather of the majority following existing rules or norms. Institutional racism and sexism are the most common examples” (note that this is merely a short excerpt and readers are encouraged to see the fuller explanation).

AI comes into the mix of systemic biases by providing a means of conveying and applying those biases in AI-based apps. Whenever you use an AI-infused piece of software, for all you know it might contain a slew of biases that are already baked into the system via the companies and industry practices that led to the making of the AI. As per the NIST study: “These biases are present in the datasets used in AI, and the institutional norms, practices, and processes across the AI lifecycle and in broader culture and society.”

Next, consider the set of biases that are labeled as being statistical and computational biases.

The NIST document states this: “Statistical and computational biases stem from errors that result when the sample is not representative of the population. These biases arise from systematic as opposed to random error and can occur in the absence of prejudice, partiality, or discriminatory intent. In AI systems, these biases are present in the datasets and algorithmic processes used in the development of AI applications, and often arise when algorithms are trained on one type of data and cannot extrapolate beyond those data.”
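
As a small illustration of that excerpt, here is a minimal sketch (synthetic data of my own devising, not from the NIST document) showing how a model fit on a sample dominated by one subpopulation can fail to extrapolate to an underrepresented group:

```python
# Minimal sketch of sampling/selection bias: a model fit on a
# non-representative sample performs poorly on the underrepresented group.
import numpy as np

rng = np.random.default_rng(1)

def make_group(slope, size):
    x = rng.uniform(0, 10, size)
    y = slope * x + rng.normal(0, 1, size)
    return x, y

# Two subpopulations with different underlying relationships.
x_a, y_a = make_group(slope=2.0, size=900)   # heavily sampled group
x_b, y_b = make_group(slope=0.5, size=100)   # underrepresented group

# Fit a single linear model on the pooled (skewed) sample.
x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])
slope, intercept = np.polyfit(x_all, y_all, 1)

def rmse(x, y):
    pred = slope * x + intercept
    return np.sqrt(np.mean((pred - y) ** 2))

print(f"Error on the well-sampled group A: {rmse(x_a, y_a):.2f}")
print(f"Error on the underrepresented group B: {rmse(x_b, y_b):.2f}")
```

The single fitted line hews to the dominant group, so the error lands disproportionately on the group that the sample shortchanged, no prejudice or discriminatory intent required.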

This type of statistical and computational bias is often cooked into an AI system that uses Machine Learning (ML) and Deep Learning (DL). Bringing up the hefty matter of contemporary ML/DL necessitates a related side tangent about what AI is and what ML/DL is.

Let’s make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning and Deep Learning, which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will still be biases embedded within the pattern-matching models of the ML/DL.
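
To see how that inheritance of bias can play out, consider this minimal sketch. The dataset, feature names, and decision rule are entirely hypothetical and simplified; the point is only that a model trained on skewed historical decisions will tend to reproduce the skew, and that a simple audit can surface it:

```python
# Minimal sketch: a model fit on biased historical decisions mimics the bias.
# All data and feature names here are hypothetical, purely for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000

# Hypothetical applicant data: income is the legitimate signal,
# zip_group is a proxy feature correlated with a protected attribute.
income = rng.normal(50, 15, n)
zip_group = rng.integers(0, 2, n)          # 0 or 1

# Simulated historical human decisions: approvals depend on income,
# but group 1 was (unfairly) approved less often at the same income.
approved = (income + rng.normal(0, 5, n) - 8 * zip_group) > 45

model = DecisionTreeClassifier(max_depth=3).fit(
    np.column_stack([income, zip_group]), approved)

# Audit: predicted approval rates for identical incomes, differing only by group.
probe_income = np.full(1000, 50.0)
for g in (0, 1):
    rate = model.predict(np.column_stack([probe_income, np.full(1000, g)])).mean()
    print(f"Group {g}: predicted approval rate {rate:.2f}")
```

Nobody told the model to treat the two groups differently; it simply found the pattern sitting in the historical data and carried it forward.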

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

This brings us squarely to the third category of the NIST set of three groupings, specifically the role of human biases in the emergence of AI biases. Here’s what the NIST document indicated: “Human biases reflect systematic errors in human thought based on a limited number of heuristic principles and predicting values to simpler judgmental operations. These biases are often implicit and tend to relate to how an individual or group perceives information (such as automated AI output) to make a decision or fill in missing or unknown information. These biases are omnipresent in the institutional, group, and individual decision-making processes across the AI lifecycle, and in the use of AI applications once deployed.”

You’ve now gotten a rapid-fire introduction to the three categories.

I’d like to share with you some additional food for thought as expressed in the NIST document. A chart in their narrative provides a useful summary of the key questions and considerations that underlie each of the three sets of AI biases. I list them here for your convenience of reference and edification.

#1: Systemic Biases

  • Who is counted and who is not counted?

— Issues with latent variables

— Underrepresentation of marginalized groups

— Automation of inequalities

— Underrepresentation in determining utility function

— Processes that favor the majority/minority

— Cultural bias in the objective function (best for individuals vs best for the group)

  • How do we know what is right?

— Reinforcement of inequalities (groups are impacted more with higher use of AI)

— Predictive policing more negatively impacted

— Widespread adoption of ridesharing/self-driving cars/etc. may change policies that impact population based on use

#2: Statistical and Computational Biases

  • Who is counted and who is not counted?

— Sampling and selection bias

— Using proxy variables because they are easier to measure

— Automation bias

— Likert scale (categorical to ordinal to cardinal)

— Nonlinear vs linear

— Ecological fallacy

— Minimizing the L1 vs. L2 norm

— General difficulty in quantifying contextual phenomena

  • How do we know what is right?

— Lack of adequate cross-validation

— Survivorship bias

— Difficulty with fairness

#3: Human Biases

  • Who is counted and who is not counted?

— Observational bias (streetlight effect)

— Availability bias (anchoring)

— McNamara fallacy

— Groupthink leads to narrow choices

— Rashomon effect leads to subjective advocacy

— Difficulty in quantifying objectives may lead to McNamara fallacy

  • How do we know what is right?

— Confirmation bias

— Automation bias

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase the three categories of AI biases. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the three proposed categories of AI biases, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI Biases

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the three categories of AI biases.

Envision that an AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.

Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.

That’s something we might all need to get accustomed to, rightly or wrongly.

Back to our tale.

We shall consider next how systemic biases might come to play in this context of self-driving cars.

Some pundits are very worried that self-driving cars will be the province of only the wealthy and the elite. It could be that the cost to use self-driving cars will be prohibitively expensive. Unless you’ve got big bucks, you might not ever see the inside of a self-driving car. Those that will be utilizing self-driving cars are going to have to be rich, it is purportedly contended.

As such, some disconcertingly exhort that a form of systemic bias will permeate the advent of AI-based self-driving cars. The overall autonomous vehicle industrial system as a whole will keep self-driving cars out of the hands of those that are poor or less affluent. This might not necessarily be by overt intent; it might just turn out that the only believed way to recoup the burdensome costs of having invented self-driving cars is to charge outrageously high prices.

If you retort that today’s self-driving car tryouts are allowing everyday people to use them, and thus it seems apparent that you don’t need to be rich per se, the counterargument is that this is a kind of shell game, as it were. The automakers and self-driving tech firms are supposedly willing to make it appear as though the cost will not be a substantive barrier. They are doing this for public relations purposes right now and will jack up the prices once they get the wrinkles figured out. A conspiracist might even claim that everyday people are being perniciously used as “guinea pigs” to enable the rich to ultimately get richer.

So, given that rather contentious matter, and putting in my own two cents on this sordid topic, I do not believe that self-driving cars will be priced out of everyday use. I won’t go into the details herein as to my basis for making such a claim and invite you to see my mindful discussions at the link here and also at the link here.

Moving on, we can next consider the matter of AI-related statistical and computational biases.

Contemplate the seemingly inconsequential question of where self-driving cars will be roaming to pick up passengers. This seems like an abundantly innocuous topic. We will use the tale of the town or city that has self-driving cars to highlight the perhaps surprisingly potential specter of AI-related statistical and computational biases.

At first, assume that the AI was roaming the self-driving cars throughout the entire town. Anybody that wanted to request a ride in the self-driving car had essentially an equal chance of hailing one. Gradually, the AI began to primarily keep the self-driving cars roaming in just one section of town. This section was a greater money-maker and the AI system had been programmed to try and maximize revenues as part of the usage in the community.

Community members in the impoverished parts of the town were less likely to be able to get a ride from a self-driving car. This was because the self-driving cars were further away and roaming in the higher revenue part of the locale. When a request came in from a distant part of town, any request from a closer location that was likely in the “esteemed” part of town would get a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town was nearly impossible, exasperatingly so for those that lived in those now resource-starved areas.

You could assert that the AI pretty much landed on a form of statistical and computational biases, akin to a form of proxy discrimination (also often referred to as indirect discrimination). The AI wasn’t programmed to avoid those poorer neighborhoods. Instead, it “learned” to do so via the use of the ML/DL.
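
A minimal sketch can show the mechanism at work. The zones, fares, and demand numbers below are hypothetical, and the repositioning rule is deliberately simplistic; the point is that optimizing revenue alone, with no mention of any neighborhood’s demographics, still produces the lopsided service pattern just described:

```python
# Minimal sketch: a revenue-maximizing repositioning rule starves the
# lower-fare part of town of service. All numbers are hypothetical.
import random

random.seed(0)

ZONES = {"high_fare": {"fare": 25, "request_prob": 0.5},
         "low_fare":  {"fare": 9,  "request_prob": 0.5}}

def expected_revenue(zone):
    return ZONES[zone]["fare"] * ZONES[zone]["request_prob"]

# Greedy policy: every idle car stages itself in the zone with the best
# expected revenue, since maximizing fleet revenue is the objective.
fleet_zone = max(ZONES, key=expected_revenue)

served = {z: 0 for z in ZONES}
requests = {z: 0 for z in ZONES}

for _ in range(10_000):
    for zone, cfg in ZONES.items():
        if random.random() < cfg["request_prob"]:
            requests[zone] += 1
            if zone == fleet_zone:   # only served if a car is staged nearby
                served[zone] += 1

for zone in ZONES:
    rate = served[zone] / max(requests[zone], 1)
    print(f"{zone}: fulfillment rate {rate:.0%}")
```

A fuller simulation would have the cars responding dynamically to requests, but even this bare-bones version shows why dedicated monitoring of where the fleet actually serves riders is needed rather than trusting the objective function to behave equitably on its own.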

It was assumed that the AI would never fall into that kind of shameful quicksand. No specialized monitoring was set up to keep track of where the AI-based self-driving cars were going. Only after community members began to complain did the city leaders realize what was happening. For more on these types of citywide issues that autonomous vehicles and self-driving cars are going to present, see my coverage at this link here and which describes a Harvard-led study that I co-authored on the topic.

For the third category of human biases as related to AI biases, we turn to an example that involves the AI determining whether to stop for awaiting pedestrians that do not have the right-of-way to cross a street.

You’ve undoubtedly been driving and encountered pedestrians that were waiting to cross the street and yet they did not have the right-of-way to do so. This meant that you had discretion as to whether to stop and let them cross. You could proceed without letting them cross and still be fully within the legal driving rules of doing so.

Studies of how human drivers decide on stopping or not stopping for such pedestrians have suggested that sometimes the human drivers make the choice based on untoward biases. A human driver might eye the pedestrian and choose to not stop, even though they would have stopped had the pedestrian had a different appearance, such as based on race or gender. I’ve examined this at the link here.

Imagine that the AI-based self-driving cars are programmed to deal with the question of whether to stop or not stop for pedestrians that do not have the right-of-way. Here’s how the AI developers decided to program this task. They collected data from the town’s video cameras that are placed all around the city. The data showcases human drivers that stop for pedestrians that do not have the right-of-way and human drivers that do not stop. It is all collected into a large dataset.

By using Machine Learning and Deep Learning, the data is modeled computationally. The AI driving system then uses this model to decide when to stop or not stop. Generally, the idea is that whatever the local custom consists of, this is how the AI is going to direct the self-driving car.

To the surprise of the city leaders and the residents, the AI was evidently opting to stop or not stop based on the age of the pedestrian. How could that happen?

Upon a closer review of the video of human driver discretion, it turns out that many of the instances of not stopping entailed pedestrians carrying the sort of walking cane a senior citizen might use. Human drivers were seemingly unwilling to stop and let an aged person cross the street, presumably due to the assumed length of time that it might take for someone to make the journey. If the pedestrian looked like they could quickly dart across the street and minimize the waiting time of the driver, the drivers were more amenable to letting the person cross.

This got deeply buried into the AI driving system. The sensors of the self-driving car would scan the awaiting pedestrian, feed this data into the ML/DL model, and the model would emit to the AI whether to stop or continue. Any visual indication that the pedestrian might be slow to cross, such as the use of a walking cane, mathematically was being used to determine whether the AI driving system should let the awaiting pedestrian cross or not.
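
Here is a minimal sketch of that pipeline. The data is synthetic and the features (a cane indicator and an estimated crossing speed) are hypothetical stand-ins for what a perception stack might report; it simply demonstrates how a model fit to those human choices ends up keyed to the cane cue, and how a side-by-side probe can reveal it:

```python
# Minimal sketch: a stop/no-stop model trained on human drivers' discretionary
# choices latches onto a "walking cane" cue. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 4000

has_cane = rng.integers(0, 2, n)
crossing_speed = rng.normal(1.2, 0.3, n) - 0.5 * has_cane  # slower with a cane

# Simulated historical human decisions: drivers mostly stop for quick
# crossers and mostly do not stop when a cane is visible.
stopped = (crossing_speed + rng.normal(0, 0.2, n) - 0.6 * has_cane) > 0.8

X = np.column_stack([has_cane, crossing_speed])
model = LogisticRegression().fit(X, stopped)

# Audit: predicted stop probability for the same crossing speed, cane vs. no cane.
probe = np.array([[0, 1.0], [1, 1.0]])
for features, prob in zip(probe, model.predict_proba(probe)[:, 1]):
    print(f"cane={int(features[0])}: predicted stop probability {prob:.2f}")
```

Nothing in the pipeline says “age” outright; the preexisting human bias rides in on a visual proxy, which is why auditing the model with matched probes like this matters.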

You could contend that this was a reliance on a preexisting human bias.

Conclusion

Some final thoughts for now.

There is a popular saying that you cannot change the cards that you are dealt and must instead learn how to adequately play with whatever hand you’ve been given.

In the case of AI biases, if we don’t fervently get on top of establishing AI Ethics across the board and especially solidify the characterization of AI biases, the kinds of hands we are going to be dealt will be overflowing with a seedy, unethical, and possibly unlawful stratum. We have to stop those cards from ever being dealt to begin with. The valiant aim to create and promulgate Ethical AI standards is a crucial tool to combat the rising tsunami of upcoming AI For Bad.

You can decidedly take to the bank that rampant AI bias and unethical AI will be like a flimsy house of cards, imploding upon itself and likely being disastrous for all of us.

Let’s play to win, doing so with suitably ethical AI.

Source: https://www.forbes.com/sites/lanceeliot/2022/10/06/ai-ethics-and-ai-law-are-moving-toward-standards-that-explicitly-identify-and-manage-ai-biases/