AI Ethics Alarmed At The Rise In Underhanded Juicing Or Doping Of AI Machine Learning By Trusted Insiders, Including Autonomous Self-Driving Cars

Juicing and doping.

Doping and juicing.

We all know about the ongoing and surreptitious use of performance-enhancing drugs that are sadly relied upon in various sports. This occurs in professional sports and even in amateur sports. It happens in the Olympics, which in theory is a globally revered contest that is supposed to be a purity exemplar of human performance limits and topmost extremes across all of humankind.

There is a kind of pervasiveness in the act of juicing and doping. Sports figures are under a tremendous amount of pressure to attain first place and they are alluringly tempted to use whatever means they can to get there. As a result of the likelihood of juicing or doping, we’ve seen that many if not most sports have instituted procedures and steps that aim to deter and catch those that wrongly engage in such endeavors. If someone is caught having juiced or doped, they risk having their sports medals revoked. Plus, they are likely to be cast aside by their supporters and sponsors. A tremendous sense of reputational risk goes hand-in-hand with the chancy act of juicing or doping.

People that are desirous of being the “best” at a particular sport are torn between not using performance-enhancing drugs and opting to use either illegal or at least unethical substances. Using the drugs can be an almost surefire way to the top. If administered sneakily and with careful attention, there is a chance that no one will know and the testing won’t detect it. You can get away with it, seemingly scot-free. Of course, there is also the possibility that you are harming your body and will eventually pay a physical price, but the intensity of desire for the in-the-moment opportunity to win tends to downplay any future consequences.

So, we have on the one hand the grandiose potential for attaining great glory and maybe even wealth by using performance-enhancing drugs, while on the other hand, we have the inglorious chance of getting caught and being stripped of the otherwise hard-earned winnings and becoming a horribly despised worldwide public figure (along with the health-related adverse consequences too).

That is some kind of meaty cost-benefit analysis that needs to be made.

Some do the mental ROI (return on investment) calculation and decide to never touch any of the performance-enhancing drugs. They resolve to stay perfectly clean and pure. Others might start that way and then stray slightly. You might justify the slippage as just a tiny toe into the performance-enhancing waters and swear solemnly to yourself that you will never do so again. This though can lead to a slippery slope. The classic and predictable proverbial snowball that slips and slides and rolls down the snowy hillside, gathering into a bigger and bigger ball as it does so.

You’ve also got those that decide upfront they are going to go ahead and use performance-enhancing drugs. A typical mode of thought is that it is the only way to fight fire with fire. The assumption being made is that everyone else that you are competing with is doing likewise. As such, it makes absolutely no sense for you to be pure and yet go against those that are obviously impure (so you assume).

I think you can see why then the nature of the testing and detection is especially vital in these matters. If some participants can get away with the use of performance-enhancing drugs, it spoils the whole barrel. Inch by inch, all other participants will almost surely go down that same path. They have to make a horrendous choice. This entails either competing without the drugs, but probably at a physical disadvantage, or adopting the drugs to remain competitive, despite perhaps wanting with all their heart not to have to resort to the performance enhancers.

A quandary, for sure.

There is further context that confounds these circumstances. For example, a question that arises continually is what, in fact, constitutes a performance-enhancing drug. Authorities might come up with a list of the banned drugs. Meanwhile, in a cat and mouse gambit, other drugs are devised or identified that will provide performance enhancements and yet are not on the list of banned chemicals. You can try to keep a step ahead of the list, switching to other drugs and remaining narrowly within the rules of the game.

The overarching gist is that juicing and doping is not necessarily a straightforward topic. Yes, we might all agree that juicing or doping is atrocious and should not be undertaken. Yes, we might all concur that there should be strict rules about not juicing and doping, along with strenuous efforts to catch those who stray. Unfortunately, there is a lot of trickery that can undermine those lofty goals.

Why have I shared with you the trials and tribulations of juicing and doping?

I do so for a reason that you might find startling, vexing, upsetting, and altogether heart-wrenching.

You see, there are increasingly vocal claims that AI is at times being “performance enhanced” via the use of juicing or doping (of a sort). The notion is that when devising an AI system, the developers might undertake somewhat underhanded ploys to get the AI to appear better than it really is. This in turn can trick others into assuming that the AI has capabilities that it truly does not have. The consequences can be mild or they can be dangerously dire.

Imagine an AI system that plays checkers that was (shall we say) “performance enhanced” to appear as though it will never lose a game of checkers. Some investors pile a ton of dough into the AI, doing so under the false belief that the AI will always win. After being placed into public use, the AI wins and wins. At some point, it perchance loses a game. Yikes, what happened? In any case, this is not likely to be a life-or-death consideration in this use case.

Imagine instead an AI system that drives a self-driving car. The AI is “performance enhanced” to seem as though it can drive safely and without incident. For a while, the self-driving car is used on public roadways and all seems fine. Lamentably, at some point, the AI goes astray and a car crash occurs that was clearly the fault of the AI system. Humans might get injured and fatalities can arise. This is a situation whereby juicing or doping of the AI has sobering and serious life-or-death consequences.

I realize that you might be having some heartburn about referring to juicing and doping when it comes to AI. I say this because today’s AI is absolutely not sentient and we should be cautious in anthropomorphizing AI, which I’ll be elaborating on further shortly herein. In brief, AI is not a human and not anything close to being human, as yet. Trying to compare the two and align with the conventional conceptualization of juicing or doping is somewhat sketchy and should be done with our eyes wide open.

I am going to proceed with the suggested analogous idea of juicing and doping of AI, though I ask that you keep in mind that this is something that should not be carried too far. We can somewhat reasonably lean into the phrasing as a means of exposing aspects that I would argue are abundantly needed to be exposed. That’s a viable basis for using the catchphrases. But we ought to not stretch this into the nether realms and make this into something that it is not. I will say more about this momentarily.

The facet of AI that is getting the most attention regarding AI-related juicing and doping entails certain ways in which some developers are crafting AI-based Machine Learning (ML) and Deep Learning (DL) systems. There are lots of AI Ethics and Ethical AI ramifications pertaining to these kinds of nefarious actions during the development of ML/DL systems. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

Let’s also keep our heads above water and underscore these facets as you proceed throughout this discussion:

  • Not everyone that is devising AI ML/DL is doing juicing or doping of the ML/DL
  • Some do so but aren’t especially aware of doing something wrong
  • Some do so and know exactly what they are doing as to juicing or doping the ML/DL
  • Unlike the sports field, there is very little formalized, standardized, across-the-board “testing or detecting” of these types of matters for contemporary ML/DL
  • The adverse consequences of doing this can vary significantly depending upon the nature of the ML/DL (e.g., AI playing checkers, AI driving a self-driving car)
  • Some argue there is nothing inherently improper about these acts
  • Definitions of what is or is not juicing or doping of ML/DL are all over the map
  • AI Ethics is grappling with how to best deal with the ostensibly emerging trend

I would like to clear up another twist on this topic. I ask that you bear with me on this. Somehow, some utterly misconstrue the matter and fall into an oddish way of thinking that the AI developers are themselves taking performance-enhancing drugs, and therefore this is a discussion about humans that are themselves doing juicing and doping.

That usually gets a bit of a chuckle from a number of AI developers.

To be abundantly clear, that is not what I am referring to. I am distinctly and solely focused herein on the so-called juicing and doping of AI itself, and not of the humans devising AI. That being said, I am not saying that it isn’t possible that there might be AI developers that somehow opt to take performance-enhancing drugs for whatever reasons. It would seem doubtful that there is a fully suitable sports analogy comparable to the act of AI developers that perchance decide to take performance-enhancing illicit drugs, but I leave that to other researchers that might wish to explore that realm. I would simply say that taking any performance-enhancing drugs is assuredly not prudent and could decidedly be illegal, unethical, and sorely ill-advised.

I trust that helps set things straight.

Before getting into some more meat and potatoes about the juicing and doping of AI, let’s establish some additional fundamentals on profoundly integral topics. We ought to take a breezy dive into the AI Ethics and ML/DL arena to set the stage appropriately.

You might be vaguely aware that one of the loudest voices these days, both in the AI field and even outside the field of AI, entails clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possesses an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature Machine Intelligence), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might readily guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the emerging norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
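To make that loop a bit more concrete, here is a minimal sketch in Python of the train-then-apply cycle. The data here is a synthetic stand-in rather than any real decision-making dataset, but the overall shape of the workflow is the same:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in for assembled historical decision data: features plus the
# human-made outcome for each past case.
X_old, y_old = make_classification(n_samples=1000, n_features=8, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_old, y_old)  # the model mathematically mimics patterns in the old data

# Stand-in for newly arriving cases: the "old" patterns render current decisions.
X_new, _ = make_classification(n_samples=5, n_features=8, random_state=1)
print(model.predict(X_new))
```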

I think you can guess where this is heading. If the humans that have been making the decisions being patterned upon have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.
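To give a flavor of what probing for buried biases can look like, here is a deliberately simple sketch that compares the rate of favorable predictions across a sensitive attribute. The group names, the predictions, and the injected skew are all synthetic stand-ins; real-world bias auditing is considerably more involved than this:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
group = rng.choice(["group_a", "group_b"], size=n)

# Stand-in model predictions with a deliberately injected skew against group_b,
# mimicking a bias quietly absorbed from historical data.
favorable = np.where(group == "group_a", rng.random(n) < 0.60, rng.random(n) < 0.45)

rates = (
    pd.DataFrame({"group": group, "favorable": favorable})
    .groupby("group")["favorable"]
    .mean()
)
print(rates)  # a sizable gap between groups is a red flag warranting a closer look
```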

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

Let’s now return to the topic of AI juicing or doping.

In a recent article in Science magazine, the juicing or doping of Machine Learning and Deep Learning came up in the context of efforts by AI developers seeking to attain high marks on ML/DL benchmarks: “The pursuit of high scores can lead to the AI equivalent of doping. Researchers often tweak and juice the models with special software settings or hardware that can vary from run to run on the benchmark, resulting in model performances that aren’t reproducible in the real world. Worse, researchers tend to cherry-pick among similar benchmarks until they find one where their model comes out on top” (Science, “Taught To The Test” by Matthew Hutson, May 2022).

You could liken the ML/DL benchmark situation involving juicing to the earlier points about trying to win at sports competitions via such untoward practices.

In the AI arena, there is a semblance of competition to see who can arrive at the “best” ML/DL models. Various benchmarks can be used to run an ML/DL and gauge how well the ML/DL seems to score on the benchmark. Leaderboards and informal sharing of benchmark results are often used to tout who has achieved the latest topmost position with their ML/DL configurations. You could reasonably suggest that a modicum of fame and fortune awaits those that can craft their ML/DL to be the “winner” as the latest and greatest performer on the benchmarks.
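To see how trivially the cherry-picking that the Science piece describes can happen, consider this toy sketch: a model gets scored against several benchmarks and only the most flattering number is touted. The benchmark names and the harness here are make-believe stand-ins, not any real evaluation suite:

```python
import random

random.seed(7)

def evaluate(model_name: str, benchmark: str) -> float:
    """Stand-in for an actual benchmark harness run."""
    return random.uniform(0.60, 0.90)

scores = {b: evaluate("our_model", b) for b in ["bench_a", "bench_b", "bench_c"]}
best = max(scores, key=scores.get)
print(f"Press release: {scores[best]:.1%} on {best}")  # the weaker runs go unmentioned
```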

But just like any kind of competition, there are ways to try and trick out your ML/DL so that it seemingly performs mightily on a benchmark even though under-the-hood sneaks are being applied. This is the classic corruption of aiming to score well on a test by honing your approach to the test, whereas the general principle is supposed to be that you are trying to ascertain overall performance.

Imagine giving a test to someone that is intended to measure their overall understanding of, say, American literature, but the test taker figures out that the questions are only going to be focused on Mark Twain. Thus, the test taker studies only the works of Mark Twain and scores immensely well on the test. The test taker proudly proclaims that they aced the test and ergo obviously is a brainiac about all of American literature. In reality, they merely homed in on the test and in a sense tricked the testing process.

I realize that some might right away point fingers at the test and whoever prepared the test. If the test maker was dense enough to allow test takers to exploit the test, you might argue that this is entirely on the shoulders of the test maker and not the test taker. The test taker did whatever they could to prepare for the test, including figuring out what would behoove them to study. This is not only seemingly allowed, you might congratulate the test taker for having outsmarted the test maker.

I won’t go further into that ethical abyss here. You can easily go round and round on such a topic. Let’s just say that the spirit of the ML/DL benchmarks is that those utilizing the benchmarks are hoped or presumed to do so in a sportsmanlike manner. This might seem naïve to some, while it might seem aboveboard and proper to others.

I hope that you can immediately see how AI Ethics and Ethical AI considerations naturally arise in such a context.

Consider for example that a given ML/DL does extremely well on a benchmark and that the basis for the heightened score is due to juicing or doping of the AI. Suppose further that the AI developers of the “winning” ML/DL do not reveal that they juiced the AI. Other AI developers hear about or read the results of the ML/DL performance and become excited about a seeming breakthrough in AI ML/DL. They are woefully unaware of the hidden juicing or doping of the AI.

Those elated AI developers opt to switch their efforts over to the presumed approaches of that particular ML/DL in a desire to further extend the capabilities. At some point, perhaps they discover that they have hit a wall and to their unpleasant surprise they seem to be getting nowhere. This could be quite perplexing and exasperating. They have been toiling away for months or years on something that they did not realize had been juiced at the get-go. Again, I realize that you might wish to find fault with those now-disappointed AI developers that were not apparently clever enough to earlier ferret out the juicing, but I dare say that we might also find concern about the juicers that started things down that path to begin with.

All of this is certainly reminiscent of the sports analogy.

You’ve got a desire to win, seemingly at any cost. Some will aim to win without juicing, while others fully do the juicing. Those using juicing might rationalize the activity as being legitimate. Efforts to try and curtail or catch juicing are put in place, though the cat and mouse nature of the situation means that juicing is likely going to be a step ahead. When someone that is juicing gets caught, they risk the possibility of reputational backlash and other adverse consequences. They are constantly weighing the perceived upsides versus the perceived costs. And so on.

The tough thing about catching AI ML/DL juicing is that there is a myriad of ways to undertake the juicing or doping. One supposes you could say the same about sports and juicing, namely that a wide variety of means and performance enhancers can be used to try and stay under the radar.

Anyway, here are some broad categories to consider in the AI ML/DL juicing forays:

a) Juice at the Machine Learning and Deep Learning design stage

b) Juice the data used to train the ML/DL

c) Juice the ML/DL model

d) Juice the ML/DL outputs

e) Do any two of the above in combination

f) Do any three of the above in combination

g) Do all of the above

I’ve covered extensively the use of ML/DL best practices and likewise forewarned about the unsavory use of untoward ML/DL practices in my columns. You are encouraged to take a look if you’d like further details.

As a taste, let’s briefly consider the kind of juicing that can occur via the data that is used to train ML/DL. The usual rule of thumb is that you hold out some of your data for purposes of testing your ML/DL model. A customary recommendation is to use an 80/20 rule. You use about 80% of your data for training the ML/DL. The remaining 20% is used to test the ML/DL. It is hoped that the 20% is relatively representative of the other 80%, and you would simply randomly choose which of your data lands in the training set and which lands in the testing set.

Seems straightforward.
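For the coding-inclined, here’s a minimal sketch of the honest version of that rule of thumb, using synthetic stand-in data and a plain random 80/20 split:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real effort would use its assembled dataset.
X, y = make_classification(n_samples=1000, n_features=8, random_state=42)

# The customary 80/20 rule: a *random* split, nothing hand-picked.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```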

We will now do some juicing or doping:

  • Sneakily mirror your training data and testing data. One means to juice things would be to carefully scrutinize your data and try to purposely ensure that the 80% and the 20% are ideally aligned. You don’t randomly divide up the data. Instead, you do a secretive selection to try and get the 80% and the 20% to resemble each other as closely as possible. This is intended to make your testing come out looking extraordinarily good. In essence, if your ML/DL does well on the 80%, it is almost guaranteed to then do well on the 20%. Doing this is not in the spirit of things since you are potentially deluding yourself (and others) into believing that the ML/DL has computationally done an excellent job of generalizing. It might not have. (A code sketch of this and the other ploys below appears just after this list.)
  • Shortchange the test data. Another way to juice your ML/DL dataset is to divide the training data such that it is say 95% of your data, while the holdout testing data is only 5%. This is likely to increase your odds that there is nothing in the paltry 5% that will undercut the ML/DL performance. Very few people would ever ask how much of your data was used for training versus testing. They don’t know to ask this question or assume that whatever you did was the proper way of doing things.
  • Get rid of outliers beforehand. A sly means of juicing or doping your ML/DL involves trickery about outliers in your data. Before you feed any of your data into the budding ML/DL, you first examine the data. This is a prudent step and highly recommended since you should be familiar with your data before you just slap it into an ML/DL. That being said, here’s the trickery that can be used. You find any outliers in the data and toss them out. This will usually aid the mathematics of the ML/DL when it is trying to computationally find patterns. Outliers are typically a pain to deal with, though they are oftentimes crucial and can tell a lot about the nature of the data and whatever you are trying to model. By blindly excising the outliers, you are bound to miss something that can make-or-break the reality of what the ML/DL is supposed to be able to do. A better practice is to pay attention to outliers and consider how best to contend with them, rather than summarily kicking them out of the dataset.
  • Do no testing at all. A more outrageous act of juicing or doping entails not doing any testing at all. You use all of your data for training. If things look good, you wave your hands in the air and declare that the ML/DL is good to go. In that sense, you are using a 100/0 rule-of-thumb, namely 100% of the data for training and 0% for the testing. I’m guessing that you might be shocked that anyone would do this. Well, some are so assured about the training results that they feel that no testing is required. Or they are in a rush and do not have time to deal with that “annoying” testing stuff. You get the picture.
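To make those underhanded moves tangible, here is a sketch of what each one can look like in code, again using synthetic stand-in data. Needless to say, none of this is recommended practice:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

X, y = make_classification(n_samples=1000, n_features=8, random_state=42)

# (1) Shortchange the test data: a paltry 5% holdout instead of 20%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.05, random_state=42
)

# (2) Sneakily mirror train and test: instead of splitting randomly, pick as
# the "test" set the points that most closely resemble the training set
# (training points themselves sit at distance zero, so near-duplicates flood in).
nn = NearestNeighbors(n_neighbors=1).fit(X_train)
distances, _ = nn.kneighbors(X)
mirror_idx = np.argsort(distances.ravel())[:200]
X_test_mirrored, y_test_mirrored = X[mirror_idx], y[mirror_idx]

# (3) Get rid of outliers beforehand: blunt z-score filtering that quietly
# discards exactly the rare cases the model most needs to be tested on.
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
keep = (z < 2.5).all(axis=1)
X_clean, y_clean = X[keep], y[keep]

# (4) Do no testing at all: train on 100% of the data and declare victory.
```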

I had earlier mentioned that juicing or doping of AI can be somewhat inconsequential if the nature of the AI itself is not especially paramount, while other settings might involve AI-directed life-or-death consequences and therefore the juicing is scarily a weak link and potential harbinger of grave doom.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the juicing or doping of AI, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And Juicing Or Doping Of AI

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

We shall begin by heaping praise upon the use of ML/DL in the realm of bringing forth AI-based self-driving cars. Several key aspects of self-driving cars have come to fruition as a result of using Machine Learning and Deep Learning. For example, consider the core requirement of having to detect and analyze the driving scene that surrounds an AI-based self-driving car.

You’ve undoubtedly noticed that most self-driving cars have a myriad of mounted sensors on the autonomous vehicle. This is often done on the rooftop of the self-driving car. Sensor devices such as video cameras, LIDAR units, radar units, ultrasonic detectors, and the like are typically included on a rooftop rack or possibly affixed to the car top or sides of the vehicle. The array of sensors is intended to electronically collect data that can be used to figure out what exists in the driving scene.

The sensors collect data and feed the digitized data to onboard computers. Those computers can be a combination of general-purpose computing processors and specialized processors that are devised specifically to analyze sensory data. By and large, most of the sensory data computational analysis is undertaken by ML/DL that has been crafted for this purpose and is running on the vehicle’s onboard computing platforms. For my detailed explanations about how this works, see the link here and the link here, just to name a few.

The ML/DL computationally tries to find patterns in the data such as where the roadway is, where pedestrians are, where other nearby cars are, and so on. All of this is crucial to being able to have the self-driving car proceed ahead. Without the ML/DL performing the driving scene analysis, the self-driving car would be essentially blind as to what exists around the autonomous vehicle.
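As a rough illustration of that pattern-finding step, here is a minimal sketch that runs a single camera frame through an off-the-shelf pretrained object detector. To be clear, this hobbyist stand-in is decidedly not how a production-grade perception stack is built, and the image file name is hypothetical:

```python
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

# An off-the-shelf pretrained detector as a rough stand-in for a real
# onboard perception model (which this most assuredly is not).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("driving_scene.jpg").convert("RGB")  # hypothetical camera frame
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Each detection is a bounding box, a class label, and a confidence score;
# in the COCO labeling scheme, label 1 is "person" and label 3 is "car".
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.8:
        print(label.item(), round(score.item(), 2), box.tolist())
```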

In brief, you can readily make the case that the use of ML/DL is essential to the emergence of AI-based self-driving cars.

Can you juice or dope the ML/DL that pertains to AI-based self-driving cars?

Absolutely.

We can readily invoke the earlier stated examples of juicing or doping when it comes to the data aspects of ML/DL formulations. Keep in mind that the ML/DL that is being used to scan for pedestrians, cars, and other roadway objects was likely trained first on various datasets of driving scenes. This training of the ML/DL is instrumental in the AI driving system being able to properly and safely navigate the streets while commanding the autonomous vehicle driving controls.

Here’s what a juicing or doping effort could underhandedly do:

  • Sneakily mirror your training data and testing data. You collect together the dataset that is being used to train an ML/DL on roadway objects and purposely align the training portion and the testing portion. You abide by the rule of thumb about dividing the data into 80% for training and 20% for testing, which therefore seems like the right approach. The juicing is that you move around the data to make sure that the 80% and the 20% are astoundingly similar. You are stacking the deck in favor of whatever ML/DL you devise during the training.
  • Shortchange the test data. You divide the data so that 95% of the total dataset goes toward training and just 5% is put into the testing portion. When the testing is undertaken, it turns out you’ve reduced the chances of the ML/DL looking bad.
  • Get rid of outliers beforehand. While scrutinizing the data at the outset, you discover that there are instances of billboards that have pictures of people on them. You are concerned that this will confuse your ML/DL, so you remove those images or videos from your dataset. Once you’ve done the training and testing, you declare that your ML/DL is ready for use in the wild. Unfortunately, at some point, there is bound to be a situation whereby the self-driving car is going down a street or highway and there is a billboard with pictures of people on it. You don’t know how your ML/DL will react. It could be that the ML/DL warns that pedestrians are nearby and as a result the AI driving system suddenly slams on the brakes, prompting other nearby human-driven cars to ram into the self-driving car or drive off the roadway to avoid a collision. (A sketch of this kind of silent excision appears just after this list.)
  • Do no testing at all. You are in a hurry to get the ML/DL setup. Maybe the self-driving firm has put a date out that says when the self-driving car is going to be doing an important public demonstration. You don’t have much time to do things the right way. As such, you keep your fingers crossed and use all of the data for training. You do no testing at all. You have a sense of relief that you were able to meet the stated deadline. Of course, what happens next on the roadway could be a calamity in the making.
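Here’s a sketch of what that silent billboard excision could look like in code. The frame records and the tag name are entirely hypothetical; the point is the quiet filtering, not the particular schema:

```python
# Hypothetical frame records; the tag name is made up for illustration.
frames = [
    {"file": "frame_0001.png", "tags": ["highway"]},
    {"file": "frame_0002.png", "tags": ["billboard_with_people"]},  # the awkward case
    {"file": "frame_0003.png", "tags": ["urban", "pedestrian"]},
]

# Quietly drop every frame containing the confusing billboards before training.
kept = [fr for fr in frames if "billboard_with_people" not in fr["tags"]]
print(f"kept {len(kept)} of {len(frames)} frames")  # the discarded cases simply vanish
```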

Conclusion

Generally, the bona fide self-driving car makers are quite cautious about allowing corner-cutting and taking chances by performing juicing or doping actions in their burgeoning AI driving systems. There is usually a slew of checks and balances to try and detect and correct any such actions. In addition, many of the firms have established somewhat rigorous AI Ethics precepts and alerting mechanisms to try and early on catch any slippages or underhandedness that might be happening, see my coverage at the link here.

Some of the fly-by-night attempts at putting together AI self-driving cars have opted to throw caution to the wind. They brazenly take any shortcuts they can think of. Furthermore, they put little stock in doing double-checking or trying to stop any juicing or doping. Some even use the classic ploy of plausible deniability by merely instructing their AI developers to do “whatever they think is right” and then can, later on, proclaim that the firm did not know that juicing or doping of the AI was taking place. I’ve discussed these dangerous efforts in my columns.

In the case of self-driving cars, life or death is clearly on the line.

The added point is that if there is a chance of juicing or doping AI in the self-driving car realm, you have to wonder what might be permitted in the other less life-or-death realms that are relying on AI systems. The pressures to get the AI out the door soonest are immense. The pressures to make sure that the AI does the right things in the right way can be a lot less compelling. Sadly so.

Besides the ethical concerns about AI-related juicing and doping, I’ve continued to hammer away at the coming tsunamis of legal actions on these matters. When AI systems give rise to atrocious activities, those that have devised and fielded the AI are ultimately going to find themselves being held accountable. We haven’t yet seen the rise of legal cases against those that make AI and that use AI in their businesses. Mark my words that the reckoning will inevitably arise, see my coverage at the link here.

Companies are going to be legally forced to open their doors to show how they put together their AI systems. What did they do during the design? What did they do during the data efforts? What did they do as part of the testing before release? All of this is going to shine a light on the possibility of unseen, under-the-hood AI juicing and doping.

There ought not to be a free lunch for those that opt to do AI juicing and doping. Be wary and keep your eyes open. Stand tall and insist on anti-juicing and anti-doping of AI.

We need clean AI, that’s for darned sure.
