AI Ethics Flummoxed By Those Salting AI Ethicists That “Instigate” Ethical AI Practices

Salting has been in the news quite a bit lately.

I am not referring to the salt that you put into your food. Instead, I am bringing up the “salting” associated with a provocative and seemingly highly controversial practice involving the interplay between labor and business.

You see, this kind of salting entails the circumstance whereby a person tries to get hired into a firm to ostensibly initiate, or some might arguably say instigate, the establishment of a labor union therein. The latest news accounts discussing this phenomenon point to firms such as Starbucks, Amazon, and other well-known and even lesser-known firms.

I will first cover the basics of salting and then switch to an akin topic that might catch you quite off-guard, namely that there seems to be a kind of salting taking place in the field of Artificial Intelligence (AI). This has crucial AI Ethics considerations. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

Now, let’s get into the fundamentals of how salting typically works.

Suppose that a company does not have any unions in its labor force. How might a labor union somehow gain a foothold in that firm? One means would be to take action outside of the company and try to persuade the workers that they should join a union. This might involve showcasing banners near the company headquarters, sending the workers flyers, utilizing social media, and so on.

This is a decidedly outside-in type of approach.

Another avenue would be to spur from within a spark that might get the ball rolling. If at least one employee could be triggered as a cheerleader for embracing a labor union at the firm, perhaps this would start an eventual internal cavalcade of support for unionizing there. Even if such an employee wasn’t serving as an out-and-out cheerleader, they might quietly be able to garner internal support among workers and be a relatively hidden force within the organization for pursuing unionization.

In that way of thinking, a labor union might contemplate the ways in which such an employee can be so activated. The union might expend endless energy to find that needle in a haystack. Among perhaps hundreds or thousands of workers at the firm, trying to discover the so-called chosen one that will favor unionizing might be tough to do.

It would be handy to more readily “discover” that spark-inducing worker (or invent them, so to speak).

This leads us to the voila idea of getting the company to hire such a person for an everyday role in the firm. Essentially, implant the right kind of union-spurring person into the firm. Rather than trying to appeal to the throngs of workers from the outside, you insert the one activating person so that you know for sure your spark is employed there.

The newly hired worker then seeks to instill a labor union interest within the firm, meanwhile doing whatever job they were otherwise hired to do (expressing what is often referred to as a “genuine interest” in the job). Note that the person is actively employed by the firm and actively doing the work required of them as an employee. In the customary realm of salting, they are not merely a union-only operative with no genuine job duties who perchance has been embedded in the company.

Some have heralded this approach.

They contend that it saves time and resources for a union seeking to inspire workers at a firm to consider joining. Other employees are usually more likely to listen to and be activated by a fellow employee. The alternative approach of trying to gain traction from the outside is considered less alluring; a fellow employee provides powerful motivation to workers within the company in comparison to “outsiders” that are seen as indeed little more than uninvolved and uncaring agenda-pushers.

Not everyone is happy with the salting approach.

Companies will often argue that this is an abundantly sneaky and dishonest practice. The overall gestalt of the approach is that a spy is being placed in the midst of the firm. That is not what the person was hired to do. They were presumably hired to do their stated job, while instead the whole assortment of shenanigans seems like the diabolical implanting of a veritable Trojan Horse.

The counterclaim by unions is that if the person is doing their stated job then there is no harm and no foul. Presumably, an employee, or shall we say any employee of the firm, can usually choose to seek unionization. This particular employee just so happens to want to do so. The fact that they came into the company with that notion in mind is merely something that any newly hired employee might likewise be considering.

Wait a second, businesses will retort: this is someone that by design wanted to come to the company for purposes of starting a union foothold. That is their driving desire. The newly hired employee has made a mockery of the hiring process and unduly exploits their job-seeking aspirations as a cloaked pretense for the specific advantage of the union.

Round and round this heated discourse goes.

Keep in mind that there are a plethora of legal considerations that arise in these settings. All manner of rules and regulations that pertain, for example, to the National Labor Relations Act (NLRA) and the National Labor Relations Board (NLRB) are part of these gambits. I don’t want you to get the impression that things are straightforward on these fronts. Numerous legal complications abound.

We should also ponder the variety of variations that come into play with salting.

Take the possibility that the person wishing to get hired is openly an advocate of the union throughout the process of seeking to get a job at the firm. This person might show up to the job interview wearing a shirt or other garb that plainly makes clear they are pro-union. They might during interviews bring up their hope that the company will someday embrace unionization. Etc.

In that case, some would assert that the business knew what it was getting into. From the get-go, the company had plenty of indications about the intentions of the person. You can’t then whine afterward when, upon being hired, the new employee does whatever they can to get the union in the door. The firm has shot itself in the foot, as it were, and anything else is merely crocodile tears.

The dance on this, though, is again more complex than it seems. Per legal issues that can arise, someone that is otherwise qualified could, if turned down by the hiring company, argue that they were intentionally overlooked as a result of an anti-union bias by the company. Once again, the NLRA and NLRB get drawn into the messy affair.

I’ll quickly run you through a slew of other considerations that arise in the salting realm. I’d also like you to be aware that salting is not solely a US phenomenon. It can occur in other countries too. Of course, the laws and practices of countries differ dramatically, and thus salting is either not especially useful or possibly even outright banned in some locales, while in other locales the nature of salting might be significantly altered by the prevailing legal and cultural mores yet still have potency.

Consult with your beloved labor law attorney in whatever jurisdiction concerns you.

Some additional factors about salting include:

  • Getting Paid. Sometimes the person is being paid by the union to carry out the task of getting hired at the firm. They might then be paid by both the company and the union during their tenure at the firm or might no longer get paid by the union once hired by the firm.
  • Visibility. Sometimes the person keeps on the down-low or remains altogether quiet during the hiring process about their unionizing intentions, while in other instances the person is overtly vocal about what they intend to do. A seemingly halfway approach is that the person will reveal what they are aiming to do only if explicitly asked during the interviews, thus implying that it is up to the firm to ferret out such intentions, a burden that firms argue is underhandedly conniving and strains legal bounds.
  • Timing. The person once hired might opt to wait before undertaking their unionizing activities. They could potentially wait weeks, months, or even years to activate. The odds are though they will more likely get started once they have become acclimated to the firm and have established a personal foothold as an employee of the firm. If they start immediately, this could undercut their attempt to be seen as an insider and cast them as an intruder or outsider.
  • Steps Taken. Sometimes the person will explicitly announce within the firm that they are now seeking to embrace unionization, which could happen shortly after getting hired or occur a while afterward (as per my above indication about the timing factor). On the other hand, the person might choose to serve in an undercover role, feeding information to the union and not bringing any attention to themselves. This is at times lambasted as being a salting mole, though others would emphasize that the person might be otherwise subject to internal risks if they speak out directly.
  • Tenure. A person taking on a salting effort might end up being able to get a unionizing impetus underway (they are a “salter”). They could potentially remain at the firm throughout the unionization process. That being said, sometimes such a person chooses to leave the firm that has been sparked and opts to go to another firm to start the sparking activities anew. Arguments over this are intense. One viewpoint is that this clearly demonstrates that the person never had their heart in the job at the firm. The contrasting viewpoint is that they are likely to find themselves in murky and possibly untenable waters by remaining in the firm once the union-bolstering effort has gotten traction.
  • Outcome. A salting attempt does not guarantee a particular outcome. It could be that the person does raise awareness about unionization and the effort gets underway, ergo “successful” salting has taken place. Another outcome is that the person is unable to get any such traction. They either then give up the pursuit and remain at the firm, perhaps waiting for another chance at a later time, or they leave the firm and typically seek to do the salting at some other company.
  • Professional Salter. Some people consider themselves strong advocates of salting and they take pride in serving as a salter, as it were. They repeatedly do the salting, going from firm to firm as they do so. Others will do this on a one-time basis, maybe because of a particular preference or to see what it is like, and then choose not to repeat in such a role. You can assuredly imagine the types of personal pressures and potential stress that can occur when in a salter capacity.

Those factors will be sufficient for now to highlight the range and dynamics of salting. I will revisit those factors in the context of AI and Ethical AI considerations.

Shifting gears into the AI arena, the gist is that some people seek to get hired into a firm to initiate or instigate the establishment of AI Ethics principles in the company. This is their primary motivation for going to work at the firm.

In a sense, they are salting not for the purposes of unionization but instead “salting” to try and get a company rooted in Ethical AI precepts.

I will say a lot more about this momentarily.

Before getting into some more meat and potatoes about the wild and woolly considerations underlying salting in an AI context, let’s lay out some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).

You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.

One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
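To make that overseer notion slightly more tangible, here is a minimal Python sketch of one component monitoring another AI’s decisions in real-time. To be clear, this is purely my own illustrative assumption; the class names, the stand-in model, and the thresholds are hypothetical and not drawn from any actual system.

```python
from collections import defaultdict

class StubLoanModel:
    """Hypothetical stand-in for the AI being monitored."""
    def predict(self, applicant):
        # Pretend decision rule, purely for illustration.
        return 1 if applicant["income"] > 40 else 0

class EthicsMonitor:
    """Separate overseer that watches another AI's decisions and
    raises an alert when group-level approval rates drift apart."""
    def __init__(self, watched_model, gap_threshold=0.2, min_samples=30):
        self.model = watched_model
        self.gap_threshold = gap_threshold
        self.min_samples = min_samples
        self.stats = defaultdict(lambda: {"approved": 0, "total": 0})

    def decide(self, applicant):
        decision = self.model.predict(applicant)  # the primary AI decides
        s = self.stats[applicant["group"]]        # e.g., a protected attribute
        s["total"] += 1
        s["approved"] += decision
        self._check_for_disparity()
        return decision

    def _check_for_disparity(self):
        rates = [s["approved"] / s["total"]
                 for s in self.stats.values() if s["total"] >= self.min_samples]
        if len(rates) >= 2 and max(rates) - min(rates) > self.gap_threshold:
            print("ALERT: approval-rate gap across groups exceeds threshold")

# The overseer sits between callers and the watched AI.
monitor = EthicsMonitor(StubLoanModel())
decision = monitor.decide({"income": 55, "group": "A"})
```

The design point is simply that the monitoring logic lives outside the watched AI, so it can flag worrisome drift even when the underlying model’s innards are opaque.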

In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts we are finding our way toward a general commonality of what AI Ethics consists of.

First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), which my coverage explores at the link here, and which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and for which the entire village has to be versed in and abide by AI Ethics precepts.

Let’s also make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

Let’s keep things more down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
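To make that loop concrete, here is a minimal sketch using scikit-learn. The loan-decision framing, the column meanings, and the numbers are entirely made-up illustrations on my part:

```python
from sklearn.linear_model import LogisticRegression

# Historical data: each row is [income, years_employed], and each label
# is a past human decision (1 = approved, 0 = denied).
X_history = [[55, 4], [23, 1], [80, 10], [30, 2], [62, 7], [18, 0]]
y_history = [1, 0, 1, 0, 1, 0]

# The model seeks mathematical patterns in the old decisions.
model = LogisticRegression().fit(X_history, y_history)

# Upon the presentation of new data, the "old" patterns are applied
# to render a current decision, mimicking whatever the history encoded.
new_applicant = [[40, 3]]
print(model.predict(new_applicant))
```

Note that nothing in that loop asks whether the historical decisions were fair; the model simply mimics them.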

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing there will still be biases embedded within the pattern matching models of the ML/DL.
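To give a taste of what such testing can involve, here is one minimal sketch of a disparate-impact check on a model’s outputs. The toy data are my own, and the 0.8 threshold is merely the conventional “four-fifths” rule-of-thumb; real bias audits are considerably more involved:

```python
def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest group approval rate to the highest.
    Values below roughly 0.8 are a conventional red flag
    (the so-called four-fifths rule)."""
    rates = {}
    for g in set(groups):
        subset = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(subset) / len(subset)
    return min(rates.values()) / max(rates.values())

# Hypothetical model outputs paired with a protected attribute.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact_ratio(decisions, groups))  # about 0.33, well below 0.8
```

Even a crude check like this can surface a buried bias, though as noted, passing such a test is no guarantee that none remain.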

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

Let’s return to our focus on salting in an AI context.

First, we are removing any semblance of the unionization element from the terminology of salting and instead only using salting as a generalized paradigm or approach as a template. So, please put aside the union-related facets for purposes of this AI-related salting discussion.

Second, as earlier mentioned, salting in this AI context entails that some people might seek to get hired into a firm to initiate or instigate the establishment of AI Ethics principles in the company. This is their primary motivation for going to work at the firm.

To clarify, there are absolutely many that get hired into a firm already having in mind that AI Ethics is important. This though is not at the forefront of their basis for trying to get hired by the particular firm of interest. In essence, they are going to be hired to do some kind of AI development or deployment job, and they bring with them a strident belief in Ethical AI.

They then will work as best they can to infuse or inspire AI Ethics considerations in the company. Good for them. We need more people that have that as a keenly heartfelt desire.

But that isn’t the salting that I am alluding to herein. Imagine that someone picks out a particular company that seems to not be doing much if anything related to embracing AI Ethics. The person decides that they are going to get hired by that firm if they can do so in some everyday AI job (or maybe even a non-AI role), and then their primary focus will be to install or instigate AI Ethics principles in the company. That is not their primary job duty and not even listed within their job duties (I mention this because, obviously, if one is hired to intentionally bring about AI Ethics they are not “salting” in the manner of connotation and semblance herein).

This person doesn’t especially care about the job per se. Sure, they will do whatever the job consists of, and they presumably are suitably qualified to do so. Meanwhile, their real agenda is to spur Ethical AI to become part and parcel of the firm. That is the mission. That is the goal. The job itself is merely a means or vehicle to allow them to do so from within.

You might say that they could do the same from outside the firm. They could try to lobby the AI teams at the company to become more involved with AI Ethics. They might try to shame the firm into doing so, perhaps by posting on blogs or taking other steps. And so on. The thing is, they would still be an outsider, just as earlier pointed out when discussing the overarching premise of salting.

Is the AI salting person being deceitful?

We are again reminded of the same question asked about the union context of salting. The person might insist there is no deceit at all. They got hired to do a job. They are doing the job. It just so happens that in addition they are an internal advocate for AI Ethics and working mightily to get others to do the same. No harm, no foul.

They would likely also point out that there isn’t any particular downside to their spurring the firm toward Ethical AI. In the end, this will aid the company in potentially avoiding lawsuits that otherwise might arise if AI is being produced that does not abide by AI Ethics precepts. They are thusly saving the company from itself. Even though the person perhaps doesn’t especially care about doing the job at hand, they are doing the job and simultaneously making the company wiser and more secure via a vociferous push toward Ethical AI.

Wait a second, some retort: this person is being disingenuous. They are seemingly going to jump ship once the AI Ethics embracement occurs. Their heart is not in the firm nor the job. They are using the company to advance their own agenda. Sure, the agenda seems good enough, seeking to get Ethical AI top of mind, but this can go too far.

You see, the argument further goes that the AI Ethics pursuit might become overly zealous. If the person came to get Ethical AI initiated, they might not look at the bigger picture of what the firm overall is dealing with. This person might myopically push the cause to the exclusion of all else, distracting the firm and not allowing for AI Ethics adoption on a reasoned basis and at a prudent pace.

They might become a disruptive malcontent that just continually bickers about where the firm sits in terms of Ethical AI precepts. Other AI developers might be distracted by the single-tune chatter. Getting AI Ethics into the mix is certainly sensible, though theatrics and other potential disruptions within the firm can stymie Ethical AI progress rather than aid it.

Round and round we go.

We can now revisit those additional factors about salting that I previously proffered:

  • Getting Paid. It is conceivable that the person might be initially paid by some entity that wants to get a firm to embrace AI Ethics, perhaps aiming to do so innocuously or maybe to sell the firm a particular set of AI Ethics tools or practices. Generally unlikely, but worth mentioning.
  • Visibility. The person might not especially bring up their AI Ethics devotional mission when going through the hiring process. In other instances, they might make sure it is front and center, such that the hiring firm understands without any ambiguity regarding their devout focus. This though is more likely to be couched as though AI Ethics is a secondary concern and that the job is their primary concern, rather than the other way around.
  • Timing. The person once hired might opt to wait before undertaking their AI Ethics commencements. They could potentially wait weeks, months, or even years to activate. The odds are though they will more likely get started once they have become acclimated to the firm and have established a personal foothold as an employee of the firm. If they start immediately, this could undercut their attempt to be seen as an insider and cast them as an intruder or outsider.
  • Steps Taken. Sometimes the person will explicitly announce within the firm that they are now seeking to raise attention to AI Ethics, which could happen shortly after getting hired or occur a while afterward (as per my above indication about the timing factor). On the other hand, the person might choose to serve in an undercover role, working quietly within the firm and not bringing particular attention to themselves. They might also feed information to the press and other outsiders about what AI Ethics omissions or failings are taking place within the firm.
  • Tenure. A person taking on a salting effort might end up being able to get an AI Ethics impetus underway. They could potentially remain at the firm throughout the Ethical AI adoption process. That being said, sometimes such a person chooses to leave the firm that has been sparked and opts to go to another firm to start the sparking activities anew. Arguments over this are intense. One viewpoint is that this clearly demonstrates that the person never had their heart in the job at the firm. The contrasting viewpoint is that they are likely to find themselves in murky and possibly untenable waters by remaining in the firm if they are now labeled as loud voices or troublemakers.
  • Outcome. A salting attempt does not guarantee a particular outcome. It could be that the person does raise awareness about Ethical AI and the effort gets underway, ergo “successful” salting has taken place. Another outcome is that the person is unable to get any such traction. They either then give up the pursuit and remain at the firm, perhaps waiting for another chance at a later time, or they leave the firm and typically seek to do the salting at some other company.
  • Professional Salter. Some people might consider themselves a strong advocate of AI Ethics salting and they take pride in serving as a salter, as it were. They repeatedly do the salting, going from firm to firm as they do so. Others might do this on a one-time basis, maybe because of a particular preference or to see what it is like, and then choose not to repeat in such a role. You can assuredly imagine the types of personal pressures and potential stress that can occur when in a salter capacity.

Whether this kind of AI Ethics-oriented salting catches on remains to be seen. If firms are slow to foster Ethical AI, this might cause fervent AI Ethicists to take on salting endeavors. They might not quite realize directly that they are doing salting. In other words, someone goes to company X and tries to get traction for AI Ethics, perhaps does so, and realizes they ought to do the same elsewhere. They then shift over to company Y. Rinse and repeat.

Again, the emphasis though is that AI Ethics embracement is their topmost priority. Landing the job is secondary or not even especially important, other than being able to get inside and do the insider efforts of salting related to Ethical AI.

I’ll add too that those that study and analyze AI Ethics aspects now have a somewhat new addition to the topics of Ethical AI research pursuits:

  • Should these AI Ethics salting efforts be overall condoned or shunned?
  • What drives those that would wish to perform salting in this AI context?
  • How should businesses react to a perceived act of AI context salting?
  • Will there be methodologies devised to encourage AI-related salting like this?
  • Etc.

To some degree, that is why AI Ethics and Ethical AI is such a crucial topic. The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset and doing so integrally to AI development and fielding is vital for producing appropriate AI, including (perhaps surprisingly or ironically) the assessment of how AI Ethics gets adopted by firms.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI-related salting, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And AI Ethics Salting

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I hope that provides a sufficient litany of caveats to underlie what I am about to relate.

Let’s sketch out a scenario that showcases an AI-related salting situation.

An automaker that is striving toward the development of fully autonomous self-driving cars is rushing ahead with public roadway tryouts. The firm is under a great deal of pressure to do so. They are being watched by the marketplace and if they don’t seem to be at the leading edge of self-driving car development their share price suffers accordingly. In addition, they have already invested billions of dollars and investors are getting impatient for the day that the company is able to announce that their self-driving cars are ready for everyday commercial use.

An AI developer is closely watching from afar the efforts of the automaker. Reported instances of the AI driving system getting confused or making mistakes are increasingly being seen in the news. Various instances include collisions with other cars, collisions with bike riders, and other dour incidents.

The firm generally tries to keep this hush-hush. The AI developer has privately spoken with some of the engineers at the firm and learned that AI Ethics precepts are only being given lip service, at best. For my coverage on such matters of shirking Ethical AI by businesses, see the link here.

What is this AI developer going to do?

They feel compelled to do something.

Let’s do a bit of a forking effort and consider two paths that each might be undertaken by this AI developer.

One path is that the AI developer takes to the media to try and bring to light the seeming lack of suitable attention to AI Ethics precepts by the automaker. Maybe this concerned AI specialist opts to write blogs or create vlogs to highlight these concerns. Another possibility is they get an existing member of the AI team to become a kind of whistleblower, a topic I’ve covered at the link here.

This is decidedly an outsider approach by this AI developer.

Another path is that the AI developer believes in their gut that they might be able to get more done from within the firm. The skill set of the AI developer is well-tuned in AI facets involving self-driving cars and they can readily apply for the posted AI engineer job openings at the company. The AI developer decides to do so. Furthermore, the impetus is solely concentrated on getting the automaker to be more serious about Ethical AI. The job itself doesn’t matter particularly to this AI developer, other than they will now be able to work persuasively from within.

It could be that the AI developer gets the job but then discovers there is tremendous internal resistance and the Ethical AI striving goal is pointless. The person leaves the company and decides to aim at another automaker that might be more willing to grasp what the AI developer aims to achieve. Once again, they are doing so to pointedly attain the AI Ethics considerations and not for the mainstay of whatever the AI job consists of.

Conclusion

The notion of referring to these AI-related efforts as a form of salting is bound to cause some to have heartburn about overusing an already established piece of terminology or vocabulary. Salting is pretty much entrenched in the unionization activities related to labor and business. Attempts to overload the word with these other kinds of seemingly akin activities, though of an entirely unrelated-to-unionization nature, are potentially misleading and confounding.

Suppose we come up with a different phraseology.

Peppering?

Well, that doesn’t seem to invoke quite the same sentiment as salting. It would be an uphill battle to try to get that stipulated and included in our everyday lexicon.

Whatever we come up with, and whatever naming or catchphrase seems suitable, we know one thing for sure. Trying to get firms to embrace AI Ethics is still an uphill battle. We need to try. The trying has to be done in the right ways.

Seems like no matter what side of the fence you fall on, we need to take that admonition with a suitable grain of salt.

Source: https://www.forbes.com/sites/lanceeliot/2022/08/13/ai-ethics-flummoxed-by-those-salting-ai-ethicists-that-instigate-ethical-ai-practices/