AI Ethics Confronting Whether Irate Humans That Violently Smash Or Mistreat AI Is Alarmingly Immoral, Such As Those Angered Folks That Lash Out At Fully Autonomous AI Systems

That’s why we can’t have nice things.

You’ve likely heard or seen that quite popular expression and know instantly what it alludes to. Believe it or not, the clever piece of sage wisdom seemingly dates back to at least 1905 when a similar phrasing appeared in The Humanitarian Review by Eliza Blven. Generally, the gist of the insight is that sometimes we end up smashing, bashing, breaking, or altogether ruining objects or artifacts that otherwise seem undeserving of being so treated.

You might say that we at times mistreat objects and artifacts, even ones that we had supposedly adored or treasured.

This can happen by accident, such as being careless and dropping your cherished smartphone into the commode (regrettably, this is one of the most frequently cited ways in which smartphones become unusable). On the other hand, perhaps in a fit of rage, you opt to throw your smartphone across the room and it smacks into a hefty piece of furniture or rams directly into a wall. The odds are that the display will be cracked and the electronic guts are bound to no longer function properly.

That fit of rage could have had nothing whatsoever to do with the smartphone per se. Maybe you were arguing with someone and just so happened to take your anger out on the thing that perchance was in your hand at the time. The smartphone was merely in the wrong place at the wrong time.

There are though occasions when the object does somehow relate to the raging outburst. For example, you are desperately waiting for an important call and your smartphone surprisingly stops working. What frustration! This darned such and such smartphone always seems to give out at the worst of times, you think to yourself. Well, by heck, the smartphone is going to pay for this latest offense by being summarily thrown across the room. Take that, you no good smartphone.

Does rage always need to be a component?

Perhaps you calmly decided that your smartphone has reached the end of its utility. You are going to get a new one. Thus, the existing smartphone has a diminished value. You could of course try to do a trade-in of the somewhat outdated smartphone, but perhaps you instead make a conscious decision that you would rather have some fun and see how much physical abuse it can take. So, after a thoughtful amount of reasoning, you deliberately hurl the device across the room and observe what happens. It is just a kind of physics experiment, allowing you to gauge how well-built the smartphone is.

I doubt that many of us use that kind of carefully tuned logic when we take out our aggression on an object or artifact. More often, the act is probably done within a different frame of mind. This would seem to be one of those spur-of-the-moment reactionary types of actions. Afterward, you might regret what you did and ponder what led to such an outburst.

What does this kind of fierce act toward an inanimate object potentially tell us about the person who undertakes such a brazen and seemingly untoward action?

The object itself is presumably not purposely trying to foul you up. When your toaster doesn’t properly toast your bread, it is hard to imagine that the toaster woke up that day with the thought that it would seek to mess up your breakfast by burning your toast. This is a bit unlikely. The toaster is merely a mechanical device. It works or it doesn’t work. But the idea that the toaster was scheming not to work, or to pull a fast one on you by working against your wishes, well, that’s a farfetched notion.

There are some who believe all objects have a semblance of karma or spirit. In that theory, one supposes that the toaster could be seeking revenge if perchance you had not earlier been properly caring for the toaster. Though that’s an interesting philosophical idea, I’m going to skip past that metaphysical conceptualization and stay with the more everyday assumption that objects are just objects (for clarification, I’m not proffering a decision about the other possibility, just setting it aside for the moment).

This side tangent about karma or spirit is worthwhile since it brings up a related facet regarding human behavior. You see, we might be tempted to ascribe a form of liveliness to objects that are closer to what we generally consider to be sentient-like.

A ten-dollar toaster that is a barebones device is hardly something that we might tend to anoint with a sentient-like aura. You could do so if you wanted to, but this is a stretch. You might as well start assigning sentience to all manner of objects, such as a chair, a light pole, a fire hydrant, etc. It would seem that the object should have more innate capabilities if we are going to “reasonably” assign a sentient-like glow to the thing.

When you use Alexa or Siri, the device itself is merely a speaker and a microphone, yet this modern-day convenience could certainly be a better candidate for ascribing sentient-like powers. You apparently can interact with the device and carry on a conversation, though admittedly a choppy one that lacks the fluidity of normal human-oriented interactions. Nonetheless, there is a particular ease with which we allow Alexa or Siri to slip toward being assigned a sentient-like status (see my indication of the recent case of Alexa providing advice about putting a penny into a live electrical socket, at this link here).

Suppose we embellish the toaster with the likes of Natural Language Processing (NLP), akin to Alexa or Siri. You can speak to your toaster and tell it how much toasting you want it to do. The toaster will respond to your utterance and then tell you when the toast is ready. This would seem to readjust our belief that the toaster is in fact getting closer to a sentient-like capacity.
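To make that concrete, here is a minimal sketch of what such a voice-commandable toaster might look like in code, assuming a hypothetical keyword-matching front end (the phrases, names, and dial settings are purely illustrative, not any real product’s NLP stack):

```python
import re

# Hypothetical mapping of spoken darkness words to a toaster dial setting.
TOAST_LEVELS = {"light": 1, "medium": 3, "dark": 5}

def parse_toast_command(utterance):
    """Map a spoken request like 'make my toast dark' to a dial setting."""
    for word, level in TOAST_LEVELS.items():
        if re.search(rf"\b{word}\b", utterance.lower()):
            return level
    return None  # utterance not understood; the toaster stays idle

setting = parse_toast_command("Hey toaster, medium toast please")
if setting is not None:
    print(f"Toasting at level {setting}... I'll tell you when it's ready.")
```

Even this crude keyword matcher hints at why a talking appliance starts to feel more alive than a ten-dollar barebones one: the back-and-forth of utterance and response is what we instinctively read as sentient-like.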

The closer we push the features of a device toward the characteristics of human faculties, the more readily we are led down the path of ascribing sentient-like properties to the device. The most obvious of these would be robots. Any state-of-the-art walking and talking robot is bound to invoke our inner impression that the device is more than merely a mechanical or electronic contrivance.

Let me ask you a question and please answer honestly.

Before I do so, I am guessing that you’ve likely seen those viral videos showcasing rather fancy robots that can walk, crawl, hop, or run. In some of those videos, a human is standing nearby and at first seems to be poised there to catch the robot if it falters. I’d bet that most of us think of that as a kindly act, similar to watching a toddler learning to walk and being there to catch the youngster before they smack their head onto the floor.

Rarely, though, do you see the humans catching the robots; instead, you see the humans whacking the robots to see what the robot will do next. Sometimes a long stick is used, perhaps a hockey stick or a baseball bat. The human purposely and without any shame will strike out at the robot. The robot takes a beating, one might contend, and we wait to see how the robot will react.

Here’s your question.

When you see the robot getting summarily thumped, do you feel bad for the robot?

A lot of people do. When such videos first began getting posted, thousands of comments expressed fury at the mistreatment of the robots. What did the robot do to deserve this kind of abuse, people asked fervently? Those humans ought to be taken out and given a few kicks themselves, some indignantly stated. Stop this, and take down any such videos, others vociferously demanded.

You could easily feel somewhat the same about a ten-dollar barebones toaster, yet it probably would not invoke the same visceral and shocking concerns. It would seem that the closer an object sits on the spectrum running from completely inanimate things bearing no resemblance to human capacities toward objects that closely resemble human sentience, the more our sensibilities are jarred into wanting to ascribe humanity-like morality to the object.

Let’s further unpack that.

If you own a smartphone and want to break it, and if doing so doesn’t harm anyone else, it would seem that we morally would have little if any objection to such an act. You own it, you can do with it what you will (assuming that the act doesn’t impinge on others).

Of course, we might think it foolish on your part, and this might have a spillover. If you are willing to destroy your smartphone, what else might you do? Perhaps the destructive and seemingly senseless act is a forewarning of something within you of a much worse potentiality. In that way of thinking, we aren’t as concerned about the smartphone as we are about how your actions regarding the smartphone are a reflection of you and your behaviors.

In the case of the humans that are poking and pushing at the walking or crawling robots, you probably are relieved to discover that those humans are experimenters who are being paid to strike the robots, or are otherwise doing so professionally, for generally valid reasons. They are trying to see how well the robot and the AI underlying the robot can cope with disruptive occurrences.

Imagine that someone has written an AI program to aid in a robot being able to walk or crawl. They would logically want to know how well the AI does when the robot goes astray and trips over something. Can the robot balance itself or rebalance as needed? By having humans nearby, the robots can be tested by getting poked or prodded. It is all in the name of science, as they say.
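For the curious, a toy version of such a perturbation test might look like the following, assuming a hypothetical one-dimensional balance simulation (real robotics work would use a full physics engine and a far more sophisticated controller; the gains and thresholds here are made up for illustration):

```python
import random

def balance_controller(tilt_deg):
    """Naive proportional controller: push back against the measured tilt."""
    return -0.8 * tilt_deg  # corrective response; gain chosen for illustration

def run_perturbation_trial(push_deg, steps=50):
    """Shove the simulated robot by push_deg and see if it recovers upright."""
    tilt = push_deg
    for _ in range(steps):
        tilt += 0.5 * balance_controller(tilt)  # apply a fraction per step
        if abs(tilt) > 45.0:  # beyond recovery: the robot falls over
            return False
    return abs(tilt) < 1.0  # recovered to (near) upright

# The "hockey stick" role: random shoves of varying strength.
trials = [run_perturbation_trial(random.uniform(5.0, 40.0)) for _ in range(20)]
print(f"Recovered from {sum(trials)} of 20 shoves")
```

The human with the stick is essentially playing the role of that random perturbation generator, except in the physical world rather than in simulation.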

Once you understand that caveat about why the humans are “mistreating” the robots, you likely withdraw your ire. You might still have a lingering qualm, since seeing a human-like construct getting struck is reminiscent of humans or animals getting struck. You know though that the robot doesn’t “feel” anything, yet the actions are still somewhat personally painful to watch (for more insights about the sense of affinity that humans have toward AI systems such as robots, including a phenomenon known as the uncanny valley, see my discussion at this link here).

Those in the field of AI ethics are examining the moral psychological conundrum that we experience when AI systems are treated harshly. One of the topmost concerns is that those who perform such “mistreatment” might be inuring all of us to mistreatment of all kinds, including, dangerously, the slippery slope of a willingness to mistreat fellow humans.

In a recent research study published in the journal AI and Ethics entitled “Socio-Cognitive Biases In Folk AI Ethics And Risk Discourse,” the researchers describe the sobering matter this way: “The same phenomenon can become a moral psychological problem during the era of AIs and robots. When our everyday reality is populated by various intelligent systems which lack the status of moral patiency, people might become accustomed towards cruelty and indifference. Because we sometimes think of robots as if they were alive and conscious, we may implicitly adopt patterns of behavior that could negatively affect our relationships with other people” (article co-authored by Michael Laakasuo, Volo Herzon, Silva Perander, Marianna Drosinou, Jukka Sundvall, Jussi Palomaki, and Aku Visala).

The bottom line is that we might find ourselves inexorably accepting that mistreatment is pretty much okay to undertake, regardless of whether it is directed at an object such as an AI-based robot or at a living, breathing human being. You can add to this list the potential for increasing mistreatment of animals too. All in all, the floodgates of mistreatment could unleash a dour tsunami that perilously drenches everything we do.

Inch by inch, we will get used to the maltreatment of AI systems, and this in turn will inch by inch reduce our repulsion of mistreatment all told.

That’s an Ethical AI theory that is being closely examined. It is especially timely now since the AI systems being crafted and fielded are looking and acting more akin to human capacities than ever before. AI is veering toward being made to look like human sentience, and therefore we are potentially shifting further down that hair-raising, dire spectrum of mistreatment.

As I’ll elaborate shortly, there is a tendency for us to anthropomorphize AI systems. We construe human-like appearing AI as being equatable to humans, despite the fact that there isn’t any AI today that is sentient and we don’t yet know whether sentience will ever be reached. Will people fall into the mental trap of accepting mistreatment of AI as though it is a green light that furthers mistreatment of humans and animals (any living beings)?

Some argue that we need to nip this in the bud.

Tell people that they should not be mistreating AI systems. Even those experimenters with the walking and crawling robots were doing a disservice by seemingly gleefully displaying the videos of their efforts. It is another brick in the wall of undercutting societal views about mistreatment. Do not let the snowball start rolling down the acrimonious snowy hill.

Insist that we treat everything with due respect, including objects and artifacts. And, especially when those objects or artifacts have a bearing or resemblance to human form. If we cannot stop those people that want to throw their smartphone against a wall, so be it, but when they seek to smash a robot or do akin mistreatment of any device that has a robust human-like aura, we must put our foot down.

Hogwash, some retort with great disdain.

There is no connection between how people treat an AI system and the idea that they will somehow change how they treat humans and animals. Those are two different topics. Do not conflate them, the counterargument goes.

People are smart enough to keep their actions toward objects separate from their actions toward living beings. You are doing handwaving by trying to connect those dots. A similar concern arose about growing up playing video games that allowed players to shoot and destroy video characters. In that case, it presumably was worse than harming AI robots since the video games would at times showcase video characters that utterly resembled humans.

The counter to that counterargument is that video games are not dealing with real objects. The player knows they are immersed in a dreamscape. That’s a far cry from throwing a smartphone across a room or whacking a crawling robot with a stick. Besides, there is research that does support the qualms of how video game play can spill over into real-world behaviors.

AI ethics is exploring the drivers of human behavior and how they will be impacted by the advent of relatively sophisticated AI-based systems, especially in light of the at-times mistreatment of such AI systems by humans. Speaking of driving (yes, I snuck that in there), this allows me to shift into the topic of AI-based true self-driving cars, which fits nicely into this overall theme.

You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about object or artifact mistreatments?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

Self-Driving Cars And Animosity By Humans

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come to play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can later be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and ethical AI questions surrounding our potential maltreatment toward those vaunted autonomous vehicles.

First, you might naturally assume that nobody would mistreat an AI-based self-driving car.

This seems logical. We generally accept the idea that one of the key benefits of having self-driving cars is that they will get involved in far fewer car crashes than human-driven cars. The AI won’t drink and drive. The AI won’t watch cat videos while at the wheel. There are about 40,000 fatalities and about 2.5 million injuries each year in the United States alone due to car crashes, many of which are anticipated to no longer occur once self-driving cars are prevalent on our roadways.

What’s not to like about self-driving cars, you might be saying to yourself.

Well, the list is rather extensive, see my coverage at this link here, but due to space constraints herein I’ll just cover a few of the more notably stated undesirable aspects.

As I’ve previously mentioned in my columns, there have been instances of people throwing rocks at passing self-driving cars, and reportedly placing metal objects such as nails on the street to puncture the tires of self-driving cars. This was done for various claimed reasons. One is that the people in the area where the self-driving cars were roaming had been concerned that the AI driving systems weren’t ready for prime time.

The concern was that the AI driving system might go awry, perhaps running over a child darting across the street or striking a beloved pet dog that perchance was meandering in the roadway. Per the earlier point about us being seemingly treated as guinea pigs, the belief was that insufficient testing and preparation had taken place and that self-driving cars were inappropriately being let loose. The attempts to curtail the tryouts were being done as a public display of angst over the self-driving cars being legally allowed to roam around.

There might have been other reasons mixed into the instances. For example, some have suggested that human drivers who rely upon earning a living via ridesharing were worried that AI was on the verge of replacing them. This was a threat to their livelihoods. As you likely know, though, the widespread emergence of AI-based true self-driving cars is still a long way off, so the worker displacement issue is not an immediate one. Ergo, it would seem that the rock-throwing and other incidents were probably more about safety concerns.

For our purposes in this theme about AI mistreatment, the question arises as to whether a willingness to take these somewhat destructive acts against AI-based self-driving cars is an early indicator of the slippery slope from mistreating AI to mistreating humans.

Hold onto that thought.

Another angle on the mistreatment of AI-based self-driving cars consists of the “bullying” that some human drivers and even pedestrians have aimed at those autonomous vehicles, see my analyses at the link here and the link here.

In short, human drivers who are out driving and perchance come across a self-driving car are at times opting to play driving-related tricks on driverless cars. This trickery is sometimes done simply for the fun of it, but more often the basis is frustration and exasperation with today’s AI driving systems.

Most of the AI driving systems are programmed to drive in a strictly legal way. Human drivers do not necessarily drive in a strictly legal manner. For example, human drivers often drive above the posted speed limit, doing so at times in the most egregious of ways. When human drivers are behind a self-driving car, they find themselves stymied by the “slowpoke” AI driving system.

People that live in areas currently well-populated with self-driving cars are apt to immediately get upset when they see a self-driving car up ahead of them. They know the autonomous vehicle will make their driving journey longer than it needs to be. So, such drivers will opt to be aggressive toward the self-driving car.

Drivers know that they can scoot around the self-driving car and cut it off. The AI driving system will merely slow down the autonomous vehicle, and it won’t react in any road rage fashion. If a human driver tried to do the same aggressive move to another human driver, the odds are that retribution would almost surely arise. To some degree, human drivers moderate their aggressive driving based on the realization that the aggrieved driver might retaliate.
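As a rough illustration, a non-retaliatory car-following policy might be sketched as follows, assuming hypothetical sensor readings of speed and time-gap (production AI driving systems are vastly more elaborate; the thresholds and gains here are invented to capture the yield-rather-than-retaliate gist):

```python
SPEED_LIMIT_MPH = 35.0   # hypothetical posted limit the AI strictly obeys
SAFE_GAP_SECONDS = 2.0   # desired time headway to the vehicle ahead

def choose_speed(current_speed_mph, gap_seconds):
    """Pick a target speed that restores a safe gap, never exceeding the limit."""
    if gap_seconds < SAFE_GAP_SECONDS:
        # Someone cut in: calmly slow down in proportion to the shortfall.
        deficit = SAFE_GAP_SECONDS - gap_seconds
        return max(0.0, current_speed_mph - 5.0 * deficit)
    # Gap is safe: cruise at, but never above, the posted limit.
    return min(SPEED_LIMIT_MPH, current_speed_mph + 2.0)

# An aggressive human driver cuts in, shrinking the gap to 0.8 seconds:
print(choose_speed(current_speed_mph=35.0, gap_seconds=0.8))  # prints 29.0
```

Notice that there is no branch for honking, tailgating, or getting even; by construction, the policy’s only response to aggression is to back off, which is exactly what the aggressive human driver is counting on.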

Will this type of human driving behavior toward AI-based self-driving cars open Pandora’s box of bad driving behaviors all told?

Conclusion

We have placed on the table two general instances of people seemingly mistreating AI-based self-driving cars. The first example involved throwing rocks and trying to thwart the use of self-driving cars on the roadways. The second example entailed driving aggressively toward self-driving cars.

This brings up at least these concerns:

  • Will the emergence of such mistreatment carry over into human-driven cars?
  • If this continues or extends further, will such mistreatment spill over into other aspects of human endeavors?

One response is that these are merely temporary reactions to AI-based self-driving cars. If the public can be convinced that self-driving cars are safely operating on our roadways, the rock-throwing and such onerous acts will pretty much disappear (which, by the way, appears to have already subsided). If the AI driving systems can be improved to be less of a bottleneck on our roadways, nearby human drivers might be less inclined to be aggressive toward self-driving cars.

The focus throughout this discussion has been that mistreatment presumably begets mistreatment. The more you mistreat, such as mistreating AI, the more that mistreating becomes accepted and undertaken, such as against humans and animals.

Believe it or not, there is another side to that coin, though some view this as an optimistic happy face twist on the matter.

It is this notably upbeat proposition: Perhaps proper treatment begets proper treatment.

Here’s what I mean.

Some pundits suggest that since AI driving systems are programmed to drive legally and carefully, it could be that human drivers will learn from this and decide to drive more sanely. When the other cars around you are strictly abiding by the speed limit, maybe you will too. When those self-driving cars are making full stops at Stop signs and not trying to run red lights, human drivers will be similarly inspired to drive mindfully.

A skeptic would find that line of thinking to be thin or maybe even blissfully wide-eyed and absurdly naïve.

Call me an optimist, but I’ll vote for the dreamy notion that human drivers will be motivated to drive more judiciously. Of course, the fact that AI-based true self-driving cars will be capturing on video and their other sensors the wacky maneuvers of other nearby human-driven cars, and could decidedly report illegal driving to the cops via the mere push of a button, might provide the “inspiration” needed for better human driving.

Sometimes it takes both the carrot and the stick to get human behavior to line up harmoniously.

Source: https://www.forbes.com/sites/lanceeliot/2022/07/30/ai-ethics-confronting-whether-irate-humans-that-violently-smash-or-mistreat-ai-is-alarmingly-immoral-such-as-those-angered-folks-that-lash-out-at-fully-autonomous-ai-systems/