Insidious AI-Based Proxy Discrimination Against Humans Is Dauntingly Vexing For AI Ethics, Which Can Occur Even In The Case Of Autonomous AI Self-Driving Cars

Let’s discuss proxy discrimination amid the actions of modern-day Artificial Intelligence (AI).

To get into the AI side of things, we’ll need to first set the stage about the overall aspects of discrimination and then delve into the perhaps surprising ways in which AI gets mired in this complicated and at times insidious matter. I’ll also be providing examples of AI-based proxy discrimination, including that this can occur even in the case of AI-infused autonomous vehicles such as self-driving cars.

Let’s get started.

A dictionary definition of discrimination would typically indicate that it is the unjust act of treating people differently based on perceived categories such as race, gender, age, and so on (such criteria are often described as protected classes). A form of discrimination known as direct discrimination entails overtly latching onto one of those categories, such as explicitly relying on, say, race or gender as the basis for the discrimination (these would be construed as first factors). This is perhaps the most transparent form of discrimination.

Another somewhat lesser realized possibility is the use of indirect discrimination. You might suggest this is a trickier form of discrimination, as it is a step removed and can be challenging to ferret out. Indirect discrimination involves selecting on a factor that is one or more steps removed from the protected category. This is also commonly labeled as proxy discrimination since there is an intermediary factor serving as a proxy or stand-in for the underlying and connectable first factor.

To help clarify the seemingly abstract idea of indirect or proxy discrimination, we can consider a straightforward example.

Someone is applying for a home loan. Suppose a loan agent who is reviewing the application decides to turn down the loan and does so based on the race of the applicant. You could say that this is an example of direct discrimination. But suppose instead that the loan agent used the zip code of the applicant and opted to turn down the loan based on that factor. At an initial glance, zip code is not one of the factors usually considered discriminatory or a protected class. As such, the loan agent would appear to have steered clear of a discrimination-laden decision.

The problem though could be that the zip code is actually a proxy for something else, an actual protected category or class. Perhaps this particular zip code is predominantly composed of a particular race or ethnicity and the use of indirect or proxy discrimination is taking place. You might generally know this type of example by the catchphrase of redlining.

You see, there is apparently a connection of sorts between the factor consisting of a zip code and the discriminatory factor of race in this instance. The zip code would appear to be an innocent or neutral factor on its face. A zip code seems to most of us like a rather innocuous item and would not set off any alarm bells.

You might recall from your days of taking a course on statistics that there are statistical correlations that can arise between various factors, even factors that do not strike you as being logically correlated with each other. It could be that there is a pronounced correlation between zip code and race. Thus, the selection of zip code at first glance seems benign, but upon closer inspection, it is really a stand-in or proxy for the discriminatory protected class of race.
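
To make this concrete, here is a minimal sketch of the kind of check a data analyst might run. The toy DataFrame, its column names, and the values are entirely hypothetical and are meant only to show how a facially neutral feature (zip code) can be flagged as lining up with a protected class (race).

```python
import pandas as pd

# Hypothetical loan-application data; column names and values are illustrative only.
applicants = pd.DataFrame({
    "zip_code": ["10001", "10001", "60629", "60629", "60629", "10001"],
    "race":     ["A",     "A",     "B",     "B",     "B",     "A"],
    "approved": [1,        1,       0,       0,       1,       1],
})

# How strongly does the "neutral" zip code line up with the protected class?
# A heavily lopsided crosstab is a red flag that zip code may be acting as a proxy.
composition = pd.crosstab(applicants["zip_code"], applicants["race"], normalize="index")
print(composition)

# Compare approval rates by zip code versus by race: if the gaps move together,
# the zip code feature may be carrying the discriminatory signal.
print(applicants.groupby("zip_code")["approved"].mean())
print(applicants.groupby("race")["approved"].mean())
```

Nothing in such a check proves intent, of course, but it is the kind of simple correlation probe that can surface a stand-in factor before it does damage.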

Here’s how a research paper described the notion of such correlations and the arising of proxy discrimination: “Discrimination does not have to involve direct use of a protected class; class memberships may not even take part in the decision. Discrimination can also occur due to correlations between the protected class and other attributes. The legal framework of disparate impact addresses such cases by first requiring significantly different outcomes for the protected class, regardless of how the outcomes came to be. An association between loan decisions and race due to the use of applicant address, which itself is associated with race, is an example of this type of discrimination” (as stated in the paper entitled Proxy Discrimination In Data-Driven Systems: Theory And Experiments With Machine Learning Programs, by Anupam Datta, Matt Fredrikson, Gihyuk Ko, Piotr Mardziel, and Shayak Sen).

Now that we’ve got the fundamentals of proxy discrimination set on the table, we can introduce the aspects of how AI can essentially embed a computationally rendered version of proxy discrimination.

I would like to focus on today’s AI and not some futuristic AI that some are saying will be sentient and pose an existential risk (that’s a different story, which I’ve covered at the link here). Despite a myriad of blaring headlines currently proclaiming that AI has somehow reached sentience and embodies human knowledge and reasoning, please be aware that this overstated AI hyperbole is pure rubbish since we are still relying upon number-crunching in today’s algorithmic decision-making (ADM) as undertaken by AI systems.

Even the vaunted Machine Learning (ML) and Deep Learning (DL) consist of computational pattern matching, meaning that numbers are still at the core of the exalted use of ML/DL. We do not know if AI reaching sentience is possible. Could be, might not be. No one can say for sure how this might arise. Some believe that we will incrementally improve our computational AI efforts such that a form of sentience will spontaneously occur. Others think that the AI might go into a kind of computational supernova and reach sentience pretty much on its own accord (typically referred to as the singularity). For more on these theories about the future of AI, see my coverage at the link here.

So, let’s not kid ourselves and falsely believe that contemporary AI is able to think like humans. We can try to mimic in the AI what we believe human thinking perhaps consists of. So far, we have not been able to crack the elusive elements of devising AI that can embed common sense and other cornerstones of human thought.

You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

My extensive coverage of AI Ethics and Ethical AI can be found at this link here and this link here, just to name a few.

You might be perplexed as to how AI could imbue the same kinds of adverse biases and inequities that humans do. We tend to think of AI as being entirely neutral, unbiased, simply a machine that has none of the emotional sway and foul thinking that humans might have. One of the most common ways that AI falls into biases and inequities happens when using Machine Learning and Deep Learning, partially as a result of relying upon collected data about how humans have been making decisions.

Allow me a moment to elaborate.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
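
As a minimal sketch of that loop, consider the following toy example using scikit-learn. The feature names and numbers are invented for illustration; the only point is that the model reproduces whatever patterns sit in the historical human decisions and then applies them to new cases.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical decisions made by humans (illustrative toy data, not real records).
history = pd.DataFrame({
    "income":   [40, 85, 32, 120, 55, 47],
    "zip_code": [1,  2,  1,  2,   2,  1],   # encoded neighborhood
    "approved": [0,  1,  0,  1,   1,  0],   # past human decisions
})

# Fit a simple model to the "old" data.
model = LogisticRegression()
model.fit(history[["income", "zip_code"]], history["approved"])

# New applicants are scored with the patterns mined from the old data,
# including any biases those old decisions happened to contain.
new_applicants = pd.DataFrame({"income": [50, 90], "zip_code": [1, 2]})
print(model.predict(new_applicants))
```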

I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. The Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern-matching models of the ML/DL.
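
One common (and admittedly crude) screen that testers sometimes reach for is a disparate-impact ratio in the spirit of the "four-fifths rule." Here is a hedged sketch; the column names, the toy data, and the 0.8 threshold are assumptions for illustration, not a definitive audit procedure.

```python
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Illustrative decision log with a protected-class column collected for auditing.
decisions = pd.DataFrame({
    "race":     ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,    1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions, "race", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: outcomes differ markedly across groups; investigate for proxies.")
```

Passing such a screen does not mean the model is clean, which is exactly why the buried-bias problem is trickier than it might seem.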

You could somewhat invoke the famous or infamous adage of garbage-in, garbage-out. The thing is, this is more akin to biases-in: biases that insidiously get infused and remain submerged within the AI. The algorithmic decision-making or ADM of AI axiomatically becomes laden with inequities.

Not good.

AI systems are being devised that contain both direct discrimination and also stealthy indirect or proxy discrimination. As the same afore referenced research paper mentioned: “Machine Learning systems, however, are constructed on the basis of observational data from the real world, with its many historical or institutionalized biases. As a result, they inherit biases and discriminatory practices inherent in the data. Adoption of such systems leads to unfair outcomes and the perpetuation of biases. Examples are plentiful: race being associated with predictions of recidivism; gender affecting displayed job-related ads; race affecting displayed search ads; Boston’s Street Bump app focusing pothole repair on affluent neighborhoods; Amazon’s same day delivery being unavailable in black neighborhoods; and Facebook showing either “white” or “black” movie trailers based upon “ethnic affiliation”. Various instances of discrimination are prohibited by law.”

If we had AI that was subject solely to embedding direct discrimination troubles, the odds are that we might have a heightened fighting chance of ferreting out such computational maladies. Unfortunately, the world is not so easy. Today’s AI is probably just as likely if not more likely to imbue proxy or indirect discrimination. That’s a sad face scenario. The deeper computational morass underpinning proxy discrimination can be one heck of a tough nut to crack.

As stated by the Commissioner of the Federal Trade Commission (FTC): “When algorithmic systems engage in proxy discrimination, they use one or more facially neutral variables to stand in for a legally protected trait, often resulting in disparate treatment of or disparate impact on protected classes for certain economic, social, and civic opportunities. In other words, these algorithms identify seemingly neutral characteristics to create groups that closely mirror a protected class, and these ‘proxies’ are used for inclusion or exclusion” (as noted in the article entitled “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission” published in the Yale Journal of Law & Technology, by Commissioner Rebecca Kelly Slaughter, August 2021).

One aspect to keep in mind is that AI is not somehow alone in practicing proxy discrimination. Nor is proxy discrimination some new-fangled concoction. We have had proxy discrimination for a long time, certainly long before the advent of AI. The FTC Commissioner echoed this same important realization: “Proxy discrimination is not a new problem — the use of facially neutral factors that generate discriminatory results is something that society and civil rights laws have been grappling with for decades” (again in the Yale Journal of Law & Technology).

Do AI developers purposely craft their AI systems to contain proxy discrimination?

Well, you could divide AI efforts into those that inadvertently rely upon proxy discrimination and those that intentionally do so. I would guess that by and large, most AI builders are falling into the proxy discrimination computational morass by accidental or happenstance actions. This though is not an excuse for what they are doing. They are still responsible for the AI that they have devised and cannot simply wave their hands and proclaim that they didn’t know what was happening. It is on their shoulders to try and ensure that no such discrimination is taking place by their AI. Meanwhile, those that deviously and purposely construct their AI with proxy discrimination are to be taken to task and held accountable accordingly.

I’d like to add a twist that will possibly get your head spinning.

Some assert that the better we get at devising AI, the more likely we are to witness instances of AI that imbue proxy discrimination. You might be puzzled why this would be the case. The hope and dream would be that advances in AI would reduce the chances of landing in the improper computational waters of proxy discrimination.

An intriguing angle is identified in this study published in the Iowa Law Review: “Instead, AIs use training data to discover on their own what characteristics can be used to predict the target variable. Although this process completely ignores causation, it results in AIs inevitably ‘seeking out’ proxies for directly predictive characteristics when data on these characteristics is not made available to the AI due to legal prohibitions. Simply denying AIs access to the most intuitive proxies for directly predictive variables does little to thwart this process; instead it simply causes AIs to produce models that rely on less intuitive proxies. Thus, this Article’s central argument is that as AIs become even smarter and big data becomes even bigger, proxy discrimination will represent an increasingly fundamental challenge to anti-discrimination regimes that seek to prohibit discrimination based on directly predictive traits” (as mentioned in the article entitled Proxy Discrimination in the Age of Artificial Intelligence and Big Data, by Anya Prince and Daniel Schwarcz).

Let’s try to lay out the logic of this chilling prediction.

Suppose that AI developers increasingly become aware that they should avoid allowing their Machine Learning and Deep Learning models to drift toward proxy discrimination (one would hope that they are already on the lookout for direct discrimination). Okay, so the AI builders do what they can to avoid a computational latching onto protected factors. But let’s assume that this is done on a somewhat obvious basis, such as screening out only one-step or two-step proxies.

The computational models go deeper into the data and find a three-step or maybe ten-step removed linkage of proxy discrimination. The AI developers are seemingly happy that a two-step can be shown as not being part of their ML/DL system. Meanwhile, they perhaps don’t realize that a three-step or ten-step or some other levels of sneakiness have mathematically been discovered. Keep in mind that the AI is not sentient and it is not mindfully trying to do this. We are still referring to AI that is non-sentient and acting based on numbers and computations.
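
One way auditors sometimes probe for these deeper, multi-step linkages is to ask how well the supposedly neutral features can reconstruct the protected attribute on their own. Here is a hedged toy sketch of that idea, assuming the protected attribute has been collected purely for auditing purposes and that the feature names are invented for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical training table; "race" is held out of the production model
# and is used here only to audit the other columns.
data = pd.DataFrame({
    "income":       [40, 85, 32, 120, 55, 47, 90, 38],
    "tenure_years": [1,  6,  2,  10,  4,  1,  7,  2],
    "device_type":  [0,  1,  0,  1,   1,  0,  1,  0],
    "race":         ["A", "B", "A", "B", "B", "A", "B", "A"],
})

# If the remaining "neutral" features can predict the protected class with high
# accuracy, then deeper proxies are still lurking in the model's inputs.
features = data.drop(columns=["race"])
scores = cross_val_score(RandomForestClassifier(n_estimators=50, random_state=0),
                         features, data["race"], cv=2)
print(f"Protected class recoverable from 'neutral' features: {scores.mean():.0%} accuracy")
```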

Yikes, the disquieting fact that the AI is “advancing” and yet we seem to be heading into a gloomier state of affairs is rather exasperating and perhaps infuriating. Whereas on the one hand we might be pleased that the awareness of averting proxy discrimination is getting more attention, the problem won’t simply go away. Efforts to avoid AI-based proxy discrimination might push the discriminatory computational discoveries deeper and deeper from being disclosed or figured out by humans.

This reminds me of the old-time cartoons of when a person has gotten themselves into quicksand. The more they thrash around, the worse things get. In a sense, the person causes their own demise by fighting fiercely against the quicksand. This is certainly ironic since you would normally expect that fighting against something would lead to your escape or release.

Not necessarily so.

Experts will tell you that if you are ever caught in quicksand, your sensible option will be to try and relax your way out of the dire situation. You should attempt to float on the top of the quicksand, possibly leaning back and getting your feet level with your head. Wild thrashing is not desirable and will undoubtedly reduce your chances of escape. The better odds are that you strive to float or lightly swim your way out, or at least reach a position in the quicksand whereby you can reach a branch or something else to then pull yourself further out.

Can we use that type of advice to combat the AI imbuing of proxy discrimination?

Kind of.

First, knowing that proxy discrimination can occur is a keystone element for those that devise and field AI systems. All stakeholders need to be thinking about this. Management that oversees AI projects must be on top of this since it is not just the “AI coders” alone that are part of the predicament. We also are likely to see regulators weighing in too, such as enacting new laws to try and curtail or at least catch AI that has embedded discriminatory practices. Etc.

As per the Iowa Law Review study, we might strive toward having AI laws and regulations that impose a duty to showcase the data being utilized for the ML/DL: “For instance, impacted antidiscrimination regimes could allow, and perhaps even require, that firms using predictive AIs collect data about individuals’ potential membership in legally protected classes. In some cases, this data should be shared with regulators and/or disclosed to the public in summary form. Such data is necessary for firms, regulators, litigants, and others to test whether any particular AI is, in fact, engaging in proxy discrimination” (per the article by Anya Prince and Daniel Schwarcz).
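
A sketch of what that kind of summary-form disclosure might look like in practice is shown below. The column names are assumptions for illustration; the idea is simply to aggregate outcomes by voluntarily collected protected-class membership in a form that could be shared with a regulator without exposing individual records.

```python
import pandas as pd

# Hypothetical decision log, with protected-class membership collected for auditing.
decisions = pd.DataFrame({
    "protected_class": ["A", "A", "B", "B", "B", "A", "B"],
    "approved":        [1,    1,   0,   1,   0,   0,   0],
})

# Summary-form disclosure: counts and approval rates per group, no individual data.
summary = (decisions
           .groupby("protected_class")["approved"]
           .agg(applications="count", approvals="sum", approval_rate="mean"))
print(summary)
```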

Other possibilities include using more varied data and a wider set of data sources when devising a Machine Learning and Deep Learning model. Another is that AI developers might be required to showcase that their AI system does not employ proxy discrimination. Trying to mathematically showcase or prove this lack or absence of proxy discrimination is going to be notably challenging, to say the least.

On a related notion, I am an advocate of trying to use AI as part of the solution toward AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the proxy discrimination abyss (see my analysis of such capabilities at the link here).
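
To give a flavor of the AI-monitoring-AI idea, here is a toy sketch of a separate "monitor" component that watches another system’s decisions as they stream in and raises a flag when group outcome rates diverge. The thresholds, group labels, and alerting behavior are all assumptions made up for illustration, not a production design.

```python
from collections import defaultdict

class EthicsMonitor:
    """Tracks decision outcomes per group and alerts when the gap gets too wide."""

    def __init__(self, min_samples: int = 50, max_gap: float = 0.2):
        self.counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
        self.min_samples = min_samples
        self.max_gap = max_gap

    def record(self, group: str, favorable: bool) -> None:
        self.counts[group][0] += int(favorable)
        self.counts[group][1] += 1
        self._check()

    def _check(self) -> None:
        rates = {g: f / t for g, (f, t) in self.counts.items() if t >= self.min_samples}
        if len(rates) >= 2 and max(rates.values()) - min(rates.values()) > self.max_gap:
            print(f"ALERT: outcome gap across groups exceeds {self.max_gap}: {rates}")

# Usage: the deployed AI calls monitor.record(...) alongside every decision it makes.
monitor = EthicsMonitor(min_samples=3, max_gap=0.2)
for group, favorable in [("A", True), ("A", True), ("A", True),
                         ("B", False), ("B", False), ("B", True)]:
    monitor.record(group, favorable)
```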

At this juncture of this discussion, I’d bet that you are desirous of some additional examples that might showcase the conundrum of AI-based proxy discrimination.

I’m glad you asked.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI-based proxy discrimination, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different from driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AI-Based Proxy Discrimination

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever it is stated that an AI driving system doesn’t do some particular thing, this can later be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the exploration of AI-based proxy discrimination.

Envision that an AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.

Probably the main reason right now that they might notice the autonomous vehicles is because of the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. For hectic human drivers in their traditional human-driven cars, it can be irksome at times to get stuck behind the strictly law-abiding AI-based self-driving cars.

That’s something we might all need to get accustomed to, rightfully or wrongly.

Back to our tale.

Turns out that two unseemly concerns start to arise about the otherwise innocuous and generally welcomed AI-based self-driving cars, specifically:

a. Where the AI was roaming the self-driving cars to pick up rides was looming as a vocalized concern

b. How the AI was treating awaiting pedestrians that do not have the right-of-way was rising as a pressing issue

At first, the AI was roaming the self-driving cars throughout the entire town. Anybody that wanted to request a ride in the self-driving car had essentially an equal chance of hailing one. Gradually, the AI began to primarily keep the self-driving cars roaming in just one section of town. This section was a greater money-maker and the AI system had been programmed to try and maximize revenues as part of the usage in the community.

Community members in the impoverished parts of the town were less likely to be able to get a ride from a self-driving car. This was because the self-driving cars were further away and roaming in the higher revenue part of the locale. When a request came in from a distant part of town, any request from a closer location that was likely in the “esteemed” part of town would get a higher priority. Eventually, the availability of getting a self-driving car in any place other than the richer part of town was nearly impossible, exasperatingly so for those that lived in those now resource-starved areas.
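
To show the mechanism in miniature, here is a toy sketch of a revenue-maximizing dispatch rule. The zones, fares, and scoring are invented purely for illustration; note that nothing in the rule mentions race or income, yet the effect is to deprioritize the distant, lower-fare neighborhoods.

```python
# Hypothetical pending ride requests (illustrative values only).
ride_requests = [
    {"rider": "R1", "zone": "downtown",  "expected_fare": 28, "pickup_minutes": 4},
    {"rider": "R2", "zone": "outskirts", "expected_fare": 12, "pickup_minutes": 18},
    {"rider": "R3", "zone": "downtown",  "expected_fare": 22, "pickup_minutes": 6},
]

def dispatch_score(req):
    # The objective only weighs fare against pickup distance, yet if distance and
    # fare correlate with neighborhood demographics, this "neutral" objective ends
    # up acting as a proxy for them.
    return req["expected_fare"] - 0.5 * req["pickup_minutes"]

# Serve requests in priority order; the outskirts rider consistently comes last.
for req in sorted(ride_requests, key=dispatch_score, reverse=True):
    print(req["rider"], req["zone"], round(dispatch_score(req), 1))
```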

You could assert that the AI pretty much landed on a form of proxy discrimination (also often referred to as indirect discrimination). The AI wasn’t programmed to avoid those poorer neighborhoods. Instead, it “learned” to do so via the use of the ML/DL.

The thing is, ridesharing human drivers were known for doing the same thing, though not necessarily exclusively due to the money-making angle. There were some of the ridesharing human drivers that had an untoward bias about picking up riders in certain parts of the town. This was a somewhat known phenomenon and the city had put in place a monitoring approach to catch human drivers doing this. Human drivers could get in trouble for carrying out unsavory selection practices.

It was assumed that the AI would never fall into that same kind of quicksand. No specialized monitoring was set up to keep track of where the AI-based self-driving cars were going. Only after community members began to complain did the city leaders realize what was happening. For more on these types of citywide issues that autonomous vehicles and self-driving cars are going to present, see my coverage at this link here and which describes a Harvard-led study that I co-authored on the topic.

This example of the roaming aspects of the AI-based self-driving cars illustrates the earlier indication that there can be situations entailing humans with untoward biases, for which controls are put in place, and that the AI replacing those human drivers is left scot-free. Unfortunately, the AI can then incrementally become mired in akin biases and do so without sufficient guardrails in place.

This showcases how AI-based proxy discrimination can perniciously arise.

A second example involves the AI determining whether to stop for awaiting pedestrians that do not have the right-of-way to cross a street.

You’ve undoubtedly been driving and encountered pedestrians that were waiting to cross the street and yet they did not have the right-of-way to do so. This meant that you had discretion as to whether to stop and let them cross. You could proceed without letting them cross and still be fully within the legal driving rules.

Studies of how human drivers decide on stopping or not stopping for such pedestrians have suggested that sometimes the human drivers make the choice based on untoward biases. A human driver might eye the pedestrian and choose to not stop, even though they would have stopped had the pedestrian had a different appearance, such as based on race or gender. I’ve examined this at the link here.

Imagine that the AI-based self-driving cars are programmed to deal with the question of whether to stop or not stop for pedestrians that do not have the right-of-way. Here’s how the AI developers decided to program this task. They collected data from the town’s video cameras that are placed all around the city. The data showcases human drivers that stop for pedestrians that do not have the right-of-way and human drivers that do not stop. It is all collected into a large dataset.

By using Machine Learning and Deep Learning, the data is modeled computationally. The AI driving system then uses this model to decide when to stop or not stop. Generally, the idea is that whatever the local custom consists of, this is how the AI is going to direct the self-driving car.

To the surprise of the city leaders and the residents, the AI was evidently opting to stop or not stop based on the age of the pedestrian. How could that happen?

Upon a closer review of the video of human driver discretion, it turns out that many of the instances of not stopping entailed pedestrians who were carrying a walking cane, typically senior citizens. Human drivers were seemingly unwilling to stop and let an aged person cross the street, presumably due to the assumed length of time that it might take for the person to make the journey. If the pedestrian looked like they could quickly dart across the street and minimize the waiting time of the driver, the drivers were more amenable to letting the person cross.

This got deeply buried into the AI driving system. The sensors of the self-driving car would scan the awaiting pedestrian, feed this data into the ML/DL model, and the model would emit to the AI whether to stop or continue. Any visual indication that the pedestrian might be slow to cross, such as the use of a walking cane, mathematically was being used to determine whether the AI driving system should let the awaiting pedestrian cross or not. You could contend that this was a form of proxy discrimination based on age.
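
Here is a toy sketch of that decision path, with invented feature names and made-up data standing in for the city’s camera logs. Under these assumptions, a perception flag such as "has_walking_cane" ends up dominating the learned stop-or-not choice, making it a de facto proxy for age.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Illustrative data mimicking logged human-driver behavior at crossings.
observations = pd.DataFrame({
    "has_walking_cane": [1, 1, 1, 0, 0, 0, 0, 1],
    "group_size":       [1, 1, 2, 1, 3, 2, 1, 1],
    "driver_stopped":   [0, 0, 0, 1, 1, 1, 1, 0],
})

# The model simply mirrors the human pattern: cane present -> drivers did not stop.
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(observations[["has_walking_cane", "group_size"]], observations["driver_stopped"])

# The learned "pattern" is just the human bias, now applied by the vehicle.
pedestrian = pd.DataFrame({"has_walking_cane": [1], "group_size": [1]})
print("AI decides to stop" if model.predict(pedestrian)[0] else "AI decides not to stop")
```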

Conclusion

There is a multitude of ways to try and avoid devising AI that has proxy discrimination included or that over time gleans such biases. As much as possible, the idea is to catch the problems before you go into high gear and deploy the AI. Hopefully, neither direct discrimination nor proxy discrimination will get out the door, so to speak.

As earlier pointed out, one approach involves ensuring that AI developers and other stakeholders are aware of AI Ethics and thus spurring them to be on their toes to devise the AI to avert these matters. Another avenue consists of having the AI monitor itself for unethical behaviors and/or having another piece of AI that monitors other AI systems for potentially unethical behaviors. I’ve covered numerous other potential solutions in my writings.

A final thought for now.

You might know that Lou Gehrig famously said that there is no room in baseball for discrimination. Parlaying from that same line of thinking, you could boldly declare that there is no room in AI for discrimination.

We all need to get up to bat and find ways to prevent discrimination from getting infused within AI systems. For the sake of us all, we need to hit this out of the ballpark.

Source: https://www.forbes.com/sites/lanceeliot/2022/04/08/insidious-ai-based-proxy-discrimination-against-humans-is-dauntingly-vexing-for-ai-ethics-which-can-occur-even-in-the-case-of-autonomous-ai-self-driving-cars/