AI Ethics Skeptical About Establishing So-Called Red Flag AI Laws For Calling Out Biased Algorithms In Autonomous AI Systems

Let’s talk about Red Flag Laws.

You undoubtedly know that the notion of Red Flag Laws has been widely covered in the news lately. Headlines on the topic are aplenty. Passions and impassioned debates on the matter are top of mind as a societal concern, centering on the present-day and rapidly emerging Red Flag Gun Laws.

I’d dare say though that you might not be familiar with other Red Flag Laws enacted in the late 1800s pertaining to motorized vehicles and the forerunners of today’s everyday modern automobiles. Yes, that’s right, Red Flag Laws go back in history, though they covered quite different topics than today’s focus. These are typically referred to as Red Flag Traffic Laws.

These now century-old and altogether defunct laws required that any motorized carriage or steam-propelled engine be preceded by an adult carrying a red flag for warning purposes. The idea was that livestock might get alarmed by those noisy and cantankerous contraptions that barreled slowly and unevenly down the dirt or marginally paved roads, so having someone walk in front of the contrivance while vigorously waving a red flag could hopefully keep calamities from arising. In case you were wondering, railroads and trains were excluded from these laws since they were vehicles integrally bound to rails and had other laws covering their actions.

Imagine having to wave red flags today as a requirement for every car on our public roadways.

For example, an ordinary motorist coming down your neighborhood street would have to ensure that an adult waving a red flag was present and paraded in front of the moving car. This would have to take place for each and every vehicle passing down your street. Maybe people would become red flag jobbers, hiring themselves out to passing car drivers that otherwise didn’t have a friend or relative available to go in front of them and do the stipulated waving.

We nowadays tend to associate highway-related red flag waving with roadway construction sites. As you get near a dug-up road, workers will be holding aloft a red flag to grab your attention. This tells you to slow down and be on alert. There could be a bulldozer that is going to edge into your path. A giant hole might be up ahead and you’ll need to cautiously traverse around it.

But let’s get back to the 1800’s use of red flags.

Believe it or not, the red flag-waver was supposed to be at least one-eighth of a mile ahead of the upcoming motorized machine. That seems like quite a lengthy distance. One supposes though that this made abundant sense in those days. The startling noises of the engine and perhaps the mere sight of the vehicle might be enough to get animals unnerved. Some of the Red Flag Laws of that era also required that a shining red light be held aloft during nighttime so that a visually apparent red precautionary warning could be seen from a darkened distance.

In general, I think it is fair to assert that we as a society tend to treat a red flag as a kind of signal or signage that something is potentially amiss or at least needs our devoted attention.

Get ready for a bit of a twist on this red flag phenomenon.

There is a contention being floated that we should require red flag provisions when it comes to Artificial Intelligence (AI).

That’s a startling and surprising concept that has many heads scratching. You might be puzzled as to how or why there ought to be so-called Red Flag AI Laws. Please note that I’m labeling these as Red Flag AI Laws to differentiate the matter from Red Flag Traffic Laws (such as those of the late 1800s) and also to set them apart from today’s other more prevalent Red Flag Gun Laws.

Do we actually need Red Flag AI Laws that are distinctly and solely oriented to AI matters?

Those favoring the proposed approach would insist that we absolutely need legal provisions that would aid in clamping down on AI that contains undue biases and acts in discriminatory ways. Right now, the building and deployment of AI are akin to a Wild West anything-goes circumstance. Efforts to rein in bad AI currently depend upon the formulation and adoption of AI Ethics guidelines. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

Laws that box in bad AI are slowly being devised and enacted, see my coverage at the link here. Some worry that lawmakers are not going fast enough. It seems as though the floodgates allowing biased AI to be fostered in the world are largely wide open right now. The hand-wringing is that by the time new laws get onto the books, the evil genie will already be out of the bottle.

Not so fast, the counterarguments go. The worry is that if laws are put in place too quickly we will kill the golden goose, as it were, whereby AI efforts will dry up and we will not get the societally boosting benefits of new AI systems. AI developers and firms wishing to use AI might get spooked if a byzantine array of new laws governing AI is suddenly put into place at the federal, state, and local levels, not to mention the international AI-related laws that are marching forward too.

Into this messy affair comes the call for Red Flag AI Laws.

Before getting into some more meat and potatoes about the wild and woolly considerations underlying the envisioned Red Flag AI Law, let’s establish some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
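
To make that monitoring notion a bit more concrete, here is a minimal sketch in Python of what such an overseer might look like, namely a component that tallies another AI’s decisions and raises an alert when approval rates start to diverge across groups. Every name, threshold, and the simulated decision stream below are hypothetical and purely illustrative of the concept, not a depiction of any particular firm’s implementation.

```python
# A minimal, hypothetical sketch of an "AI Ethics monitor": an overseer that watches
# another AI's decisions and raises an alert when approval rates diverge across groups.
import random
from collections import defaultdict

class EthicsMonitor:
    def __init__(self, disparity_threshold=0.2, min_samples=100):
        self.threshold = disparity_threshold   # allowed gap in approval rates
        self.min_samples = min_samples         # require enough data per group first
        self.stats = defaultdict(lambda: {"approved": 0, "total": 0})

    def record(self, group, approved):
        s = self.stats[group]
        s["total"] += 1
        s["approved"] += int(approved)

    def check(self):
        rates = {g: s["approved"] / s["total"]
                 for g, s in self.stats.items() if s["total"] >= self.min_samples}
        if len(rates) < 2:
            return None
        gap = max(rates.values()) - min(rates.values())
        if gap > self.threshold:
            return f"Red flag: approval-rate gap of {gap:.0%} across groups {rates}"
        return None

# Stand-in for whatever AI is actually making the decisions (deliberately skewed here).
def simulated_decision_stream(n=2000):
    for _ in range(n):
        group = random.choice(["A", "B"])
        yield group, random.random() < (0.7 if group == "A" else 0.4)

monitor = EthicsMonitor()
for group, decision in simulated_decision_stream():
    monitor.record(group, decision)
    alert = monitor.check()
    if alert:
        print(alert)
        break   # in practice, notify a human overseer rather than halting outright
```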

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use them when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
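
To make the point concrete, consider a tiny sketch using entirely made-up data and hypothetical feature names. Suppose past loan approvals were largely driven by income but also systematically penalized one group; a model fit to those historical decisions will dutifully reproduce the penalty, handing an applicant from that group a lower approval probability even at an identical income.

```python
# A tiny illustrative sketch (synthetic data, hypothetical feature names) of how a
# model trained on biased historical loan decisions simply mimics those biases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(50, 15, n)     # income in thousands of dollars (synthetic)
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (protected attribute)

# Historical human decisions: driven by income, but group B was systematically penalized.
approved = ((income > 45) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

# The "pattern matching" step: fit a model to the historical decisions.
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# Two new applicants with identical income but different group membership.
applicants = np.array([[55.0, 0], [55.0, 1]])
print(model.predict_proba(applicants)[:, 1])  # group B gets a noticeably lower approval probability
```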

Not good.

Let’s return to our focus on Red Flag AI Laws.

The underlying concept is that people would be able to raise a red flag whenever they believed that an AI system was operating in an unduly biased or discriminatory fashion. You wouldn’t be raising a physical flag per se, and instead would simply be using some electronic means to make your concerns known. The red flag part of the scheme or approach is more so a metaphor than a physical embodiment.

Pretend that you were applying for a home loan. You opt to use an online banking service to apply for a loan. After entering some personal data, you wait momentarily for the AI system that is being used to decide whether you are loan worthy or not. The AI tells you that you’ve been turned down for the loan. Upon requesting an explanation of why you were rejected, the textual narrative seems to suggest to you that the AI was using unduly biased factors as part of the decision-making algorithm.

Time to raise a Red Flag about the AI.

Where exactly will this red flag be waving?

That’s a million-dollar question.

One viewpoint is that we should set up a nationwide database that would allow people to mark their AI-relevant red flags. Some say that this should be regulated by the federal government. Federal agencies would be responsible for examining the red flags, verifying their veracity, and coming to the aid of the general public by dealing with the presumably “bad AI” that stoked the red flag reporting tallies.

A national Red Flag AI Law would seemingly be established by Congress. The law would spell out what an AI-pertinent red flag is. The law would describe how these AI grousing red flags are raised. And so on. Individual states might also opt to craft their own Red Flag AI Laws. Perhaps they do so in lieu of a national initiative, or they do so to amplify particulars that are especially appealing to their specific state.
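
Nobody has yet spelled out what such a registry would actually contain, but one can sketch a plausible shape for a single red flag record. The fields below are entirely hypothetical and offered merely to make the idea tangible, not a proposal for what any eventual law would mandate.

```python
# Hypothetical sketch of what a single entry in a Red Flag AI registry might hold.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RedFlagReport:
    reporter_id: str                 # or an anonymous token, depending on the law's design
    ai_system: str                   # e.g., vendor and product/model name
    deployer: str                    # firm or agency that fielded the AI
    incident_time: datetime
    description: str                 # what the person observed
    suspected_bias: str              # e.g., "race", "gender", "age", "unknown"
    evidence_refs: list = field(default_factory=list)   # receipts, screenshots, logs
    status: str = "submitted"        # e.g., submitted -> under_review -> resolved/dismissed

# Example entry (all names invented for illustration).
report = RedFlagReport(
    reporter_id="anon-12345",
    ai_system="ExampleBank LoanDecider v2",
    deployer="ExampleBank",
    incident_time=datetime(2022, 6, 1, 14, 30, tzinfo=timezone.utc),
    description="Loan denial explanation appeared to hinge on a protected attribute.",
    suspected_bias="race",
)
print(report)
```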

Critics of a federal or any governmental-backed Red Flag AI program would argue that this is something that private industry can do and we don’t need Big Brother to come to the fore. The industry could establish an online repository into which people can register red flags about AI systems. A self-policing action by the industry would sufficiently deal with these issues.

A qualm about the purported industry approach is that it seems to smack of cronyism. Would firms be willing to abide by some privately run Red Flag AI database? Many firms would potentially ignore the marked red flags about their AI. There would be no sharp teeth for getting companies to deal with the entered red flags.

Hey, the proponents of the private sector approach sound off, this would be akin to a national Yelp-like service. Consumers could look at the red flags and decide for themselves whether they want to do business with companies that have racked up a slew of AI-oriented red flags. A bank that was getting tons of red flags about its AI would have to pay attention and revamp its AI systems, so the logic goes, else consumers would avoid the firm like the plague.

Whether this whole approach is undertaken by the government or by industry is just the tip of the iceberg on thorny questions facing the proposed Red Flag AI Laws postulate.

Put yourself into the shoes of a firm that developed or is using AI. It could be that consumers would raise red flags even though there was no viable basis for doing so. If people could freely post a red flag about the AI, they might be tempted to do so on a whim, or maybe for revenge against a firm that otherwise did nothing wrong toward the consumer.

In short, there could be a lot of false-positive Red Flags about AI.

Another consideration is the massive size or magnitude of the resulting red flags. There could easily be millions upon millions of red flags raised. Who is going to follow up on all those red flags? What would the cost be to do so? Who will pay for the red flag follow-up efforts? Etc.

If you were to say that anyone registering or reporting a red flag about AI has to pay a fee, you’ve entered into a murky and insidious realm. The concern would be that only the wealthy would be able to afford to raise red flags. This in turn implies that the impoverished would not be able to equally participate in the red flag activities and essentially have no venue for warning about adverse AI.

Just one more twist for now, namely that these kinds of red flag laws or guidelines about AI seem to come after the fact rather than serving as a warning beforehand.

Returning to the Red Flag Traffic Laws, the emphasis of using a red flag was to avert a calamity to start with. The red flag-waver was supposed to be way ahead of the upcoming car. By being ahead of the vehicle, the livestock would be alerted and those that guarded the livestock would know they should take precautions due to the soon-to-arrive disturbing source.

If people are only able to raise a red flag about AI that has seemingly already harmed or undercut their rights, the proverbial horse is already out of the barn. All that this would seem to accomplish is that hopefully other people coming along would now know to be wary of that AI system. Meanwhile, the person allegedly wronged has already suffered.

Some suggest that maybe we could allow people to raise red flags about AI that they suspect might be biased, even if they haven’t used the AI and weren’t directly impacted by the AI. Thus, the red flag gets waved before the damage is done.

Yikes, goes the retort, you are going to make the AI red flag regime into an entirely unmanageable and chaotic affair. If anyone for whatever reason can raise a red flag about an AI system, despite not having done anything at all with that AI, you will become inundated with red flags. Worse still, you won’t be able to discern the wheat from the chaff. The entire red flag approach will collapse under its own weight, taking down the goodness of the idea by allowing flotsam and riffraff to sink the entire ship.

Dizzying and confounding.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about Red Flag AI Laws, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And Red Flag AI Laws

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

Let’s sketch out a scenario that might leverage a Red Flag AI Law.

You get into an AI-based self-driving car and wish to have the autonomous vehicle drive you to your local grocery store. During the relatively brief journey, the AI takes a route that seems to you to be somewhat amiss. Rather than going the most direct way, the AI navigates onto out-of-the-way streets, which causes the driving time to be longer than it normally would be.

What is going on?

Assuming that you are paying for the use of the self-driving car, you might be suspicious that the AI was programmed to drive a longer route to try and push up the fare or cost of the trip. Anyone that has ever taken a conventional human-driven cab knows of the trickery that can take place to get more dough on the meter. Of course, with people having GPS on their smartphones while riding in a cab or equivalent, you can readily catch a human driver that appears to be sneakily taking unnecessarily long routes.
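
As a brief aside, one crude way a rider (or a watchdog app on their smartphone) might quantify a “suspiciously long” route is to compare the distance actually driven against a baseline route for the same trip. The sketch below uses invented GPS points and an arbitrary threshold, purely for illustration.

```python
# Crude sketch: flag a trip whose driven distance greatly exceeds a baseline route.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points, in kilometers.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def route_length_km(points):
    return sum(haversine_km(*a, *b) for a, b in zip(points, points[1:]))

# Hypothetical GPS traces: the route actually driven vs. a reasonable baseline route.
driven = [(34.05, -118.25), (34.07, -118.30), (34.10, -118.27), (34.09, -118.22)]
baseline = [(34.05, -118.25), (34.09, -118.22)]

detour_ratio = route_length_km(driven) / route_length_km(baseline)
if detour_ratio > 1.5:   # arbitrary threshold for illustration
    print(f"Suspicious detour: driven route is {detour_ratio:.1f}x the baseline")
```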

Turns out that you have another concern about the route choice, something that really gnaws at you.

Suppose the routing was done to avoid certain parts of town due to racial facets. There are documented cases of human drivers that have been caught making those kinds of choices, see my discussion at the link here. Perhaps the AI has been programmed to do likewise.

You decide to raise a red flag.

Let’s assume for the sake of discussion that a Red Flag AI Law has been enacted that covers your jurisdiction. It might be a local law, a state law, or a federal or international law. For an analysis that I co-authored with Harvard’s Autonomous Vehicle Policy Initiative (AVPI) on the rising importance of local leadership when communities adopt the use of self-driving cars, see the link here.

So, you go online to a Red Flag AI database. In the incident database, you enter the information about the self-driving car journey. This includes the date and time of the driving trek, along with the brand and model of the self-driving car. You then enter the navigational route that appeared to be suspicious, and you are suggesting or maybe outright claiming that the AI was devised with biased or discriminatory intent and capacities.
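
If the registry accepted structured submissions, the payload for this hypothetical trip might look something like the following. Every field name, maker, and model shown here is invented for illustration, since no such registry exists today.

```python
# Hypothetical JSON payload for registering a red flag about a self-driving car trip.
import json

red_flag_submission = {
    "ai_system": {"maker": "ExampleAV Co.", "model": "RoboRide 3", "software_version": "unknown"},
    "incident": {
        "date": "2022-06-25",
        "time_local": "17:42",
        "trip_origin": "home address (withheld)",
        "trip_destination": "local grocery store",
        "observed_route": "detoured around several neighborhoods rather than the direct route",
    },
    "allegation": "Routing appears to systematically avoid certain parts of town, suggesting biased route selection.",
    "suspected_bias": "race",
    "evidence": ["trip receipt", "smartphone GPS trace"],
}

print(json.dumps(red_flag_submission, indent=2))
```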

We would have to speculate on the other particulars of the Red Flag AI Law as to what happens next in this particular scenario. In theory, there would be a provision for someone to review the red flag. They would presumably seek to get the automaker or self-driving tech firm to explain their point of view on the logged red flag. How many other such red flags have been registered? What outcomes did those red flags produce?

And so on it would go.

Conclusion

Preposterous, some skeptics exhort.

We don’t need Red Flag AI Laws, they sternly assert. Doing anything of the sort will gum up the works when it comes to the pace and progress of AI. Any such laws would be unwieldy. You would be creating a new problem rather than solving an existing one. There are other ways to deal with bad AI. Do not blindly grasp at straws to cope with biased AI.

Shifting gears, we all know that bullfighters use red capes to apparently attract the attention of the angry bull. Though red is the color we most associate with this practice, you might be surprised to know that scientists say that bulls do not perceive the red color of the muleta (they are color-blind to red). The popular show MythBusters did a quite entertaining examination of this matter. The movement of the cape is the key element rather than the chosen color.

For those that cast aside the need for Red Flag AI Laws, a counterclaim is that we need something of a dramatic and unmistakable waving nature to make sure that AI developers and firms deploying AI will steer clear of biased or bad AI. If not for a red flag, maybe a fluttering cape or basically any kind of alerting approach might be within the realm of getting due consideration.

We know for sure that bad AI exists and that a lot more bad AI is going to be heading in our direction. Finding ways to protect ourselves from adverse AI is crucial. Likewise, setting guardrails to try and stop bad AI from getting into the world is equally important.

Ernest Hemingway famously stated that nobody ever lives their life all the way up except for bullfighters. We need to make sure that humans can live their life all the way, despite whatever AI badness or madness is promulgated upon us.

Source: https://www.forbes.com/sites/lanceeliot/2022/06/29/ai-ethics-skeptical-about-establishing-so-called-red-flag-ai-laws-for-calling-out-biased-algorithms-in-autonomous-ai-systems/