Here’s Why AI Ethics Is Touting That Human-Centered AI Is Crucial To Our AI Symbiotic Existence, Such As The Advent Of Autonomous Self-Driving Cars

You might have heard or seen that the world is supposed to be focusing on Human-Centered AI (HCAI).

What is it? Why do we need it? Is it long-term or just a fad? How exactly does it work? A slew of similar questions usually arises from those who are not familiar with the nature and substance of Human-Centered AI.

Let’s unpack the matter and set the story straight.

In addition, I’ll provide some helpful examples of how important HCAI is. These examples will be based on the emergence of AI-based autonomous vehicles such as the latest and greatest self-driving cars that are beginning to be placed onto our public roadways.

Get yourself ready for an exciting and informative ride.

The logical way to begin this discussion consists of talking about what prompted the human-centered movement all told. Knowing the yin is essential to understanding the yang, as it were. Please realize that I am not going to mention AI until we have established what it means to be human-centered. Put the AI topic out of your mind for a brief moment and we’ll resurface it when the time is right.

A purported battle royale has been occurring between what some describe as a tech-centered perspective versus a contrasting human-centered viewpoint when it comes to devising new technology.

Perhaps unbeknownst to you, the ardent belief is that we have been principally preoccupied with technology-centered precepts for far too long. Whenever some new technological innovation was being devised, the attention went toward the techie aspects. Some would argue that we often get caught up in the pell-mell technological race to push out the door the latest in tech, regardless of how it works and how people might react to it.

There are lots of Silicon Valley tropes that fit into this mindset. Fail fast, fail often is one of the typical catchphrases. Another is fake it until you make it. The grandiose idea is that you just jury-rig whatever contraption you can and then start selling it or getting people to use it. Sure, the crazy concoction might have rough edges. And, yes, people might get confused by the gadget or even get hurt, but being first is considered the highest praise among techies, and so darn the torpedoes and full steam ahead we go.

The underlying danger of such a tech-first approach is that the human side of the equation gets underplayed or entirely neglected. Rather than wildly tossing new tech into the hands of society, there has been an increasingly vocal call for taking a deep breath and first making sure that the contrivance is compatible with humans. Can people adequately use the device and do so without coming to harm? Is the new-fangled item easy to use or nearly impossible to figure out? And so on.

If we indeed have been ostensibly tech-centered then it is time to shift into a new mode of thinking, namely becoming human-centered. The tech cannot be the sole object of admiration. In lieu of the tech being the end-all, the matter of whether humans can leverage the technology and substantively gain from it is the true sign of commendable invention and modernization. A hope was that by shifting the emphasis to human-centered considerations, the dominant tech-centered mindset, with its scant introspection about humanity, would come around and see the light. I’ve discussed this change in mindset in further depth per my analysis of AI and so-called forbidden knowledge, see the link here.

Of course, in today’s world, we have some techies that are aware of and embrace the human-centered perspective, while others are either unaware of it or believe that it does not apply to them. Thus, there are lots and lots of tech-centered efforts still charging ahead like a bull in a delicate glassware boutique. No doubt about that.

The vexing challenge is that some of the tech-centered efforts will nonetheless be successful, ergo it is hard to somehow entirely erase or disgrace the tech-centered avenue. As a result, some sit squarely in the human-centered camp, and others remain in the tech-centered camp. An enduring claim is that there is more than one way to skin a cat (oops, an outdated adage that needs retiring).

Another twist is that the human-centered camp steadfastly asserts that you do not sacrifice any techie points by embracing a human-centered approach. For techies that dourly imagine they are going down a foul rabbit hole by becoming human-centered, the retort is that this will actually enhance their chances of crafting amazing technology. Think of it in simple terms. The tech that lacks human-centered considerations is probably going to have a tough time getting adopted by people. Tech that embraces and is shaped via human-centered insights is likely to be readily and eagerly adopted by people.

You can have your cake and eat it too.

This emphasizes that perhaps there isn’t a dire rift between being tech-centered and human-centered, in the sense that there is room for co-existence. Do both at the same time, some contend. Blend the two so that they are co-equals and are compatible with each other. Naysayers insist that you must pick one or the other. Either you are tech-centered and put human-centering to the side, or you are human-centered and shunt tech-centered into the doghouse.

In any case, let’s take at face value that we ought to be seriously considering human-related facets when new tech is being devised. And we must be careful to not understate the importance of human-related characteristics when coming up with novel technology. The handy way to capture this is to say that we need to be human-centered. The idea of centering one thing versus another will at times be helpfully alluring and at other times can be lamentably distracting.

We can next explore how the human-centered philosophy can be enacted in actual real-world practical terms.

Let’s suppose that you are embarking on the design and development of a new system that will include the latest and greatest piece of technology. Those who craft systems are nowadays generally aware that you need to consider the entire life cycle of the design and development process. It used to be that a lot of attention went only to the design, or only to the building of the system. As a result, we had a lot of systems that ultimately failed when fielded because of disjointed and inadequate handoffs across the numerous interrelated stages of devising a system. You absolutely need to keep your eye on the ball throughout the entirety of the system life cycle.

Various systems life cycle methodologies have been established to guide the entirety of systems efforts. The methodologies are akin to templated cookbooks that showcase what general aspects need to be done during each stage of an effort. You can reapply those to your particular systems effort as befits whatever you are putting together.

We are ready now to connect the dots. There has been a great deal of effort toward creating system life cycle methodologies that are especially keen on the importance of human-centered considerations. This has become known overall as Human-Centered Design (HCD).

In a recent National Institute of Standards and Technology (NIST) document, a quite handy definition or explanation of Human-Centered Design consisted of this passage: “HCD is an ongoing, iterative process in which project teams design, test, and continually refine a system, placing users at the core of the process. Humans and their needs drive the process, rather than having a techno-centric focus. HCD works as part of other development lifecycles, including waterfall, spiral and agile models. User-centered design, HCD, participatory design, co-design, and value-sensitive design all have key similarities; at the highest level, they seek to provide humans with designs that are ultimately beneficial to their lives. Furthermore, by placing humans at the center of such approaches, they naturally lend themselves to a deeper focus on larger societal considerations such as fairness, bias, values, and ethics. HCD works to create more usable products that meet the needs of its users. This, in turn, reduces the risk that the resulting system will under-deliver, pose risks to users, result in user harms, or fail” (per the U.S. Department of Commerce, NIST Special Publication 1270, March 2022, Towards A Standard For Identifying And Managing Bias In Artificial Intelligence by authors Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, and Patrick Hall).

The NIST report also suitably mentions the International Organization for Standardization (ISO) standard that covers Human-Centered Design (per ISO 9241-210:2019), for which these notable characteristics are seen as foundational (a brief code sketch follows the list):

  • Explicit understanding of users, tasks, and environments (the context of use)
  • Involvement of users throughout design and development
  • Design-driven and refined by human-centered evaluation
  • Iterative process whereby a prototype is designed, tested, and modified
  • Addressing the whole user experience
  • Design teams that include multidisciplinary skills and perspectives
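
To make that iterative cycle tangible, here is a minimal sketch in Python of the HCD loop, assuming hypothetical stand-ins for the user study and refinement steps. It is meant to illustrate the design-test-modify rhythm that the ISO characteristics describe, not anyone’s production process.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    usability_score: float            # e.g., share of test users completing key tasks
    issues: list

def run_user_tests(design_version: int) -> Feedback:
    # Stand-in for a real usability study; pretend each refinement helps a bit.
    return Feedback(usability_score=0.5 + 0.1 * design_version, issues=["..."])

def refine(design_version: int, feedback: Feedback) -> int:
    return design_version + 1         # stand-in for feedback-driven design changes

def hcd_loop(target: float = 0.9, max_iterations: int = 10) -> int:
    design_version = 0                # initial design grounded in user needs
    for _ in range(max_iterations):
        feedback = run_user_tests(design_version)   # involve users throughout
        if feedback.usability_score >= target:      # human-centered evaluation gates release
            break
        design_version = refine(design_version, feedback)
    return design_version

print(hcd_loop())   # reaches version 4 (score 0.9) in this toy setup
```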

Let’s do a quick recap on what we’ve covered so far.

I trust that you are now cognizant of this ongoing movement toward human-centered tech and that the movement can be aided via methodologies intentionally crafted for those who want to build human-centered systems.

Where does AI fit into all of this?

Easy-peasy.

Those that are designing and developing AI often fall into the longstanding trap of being overly preoccupied with the tech per se. You can readily point to many AI systems that were obviously tech-centered and not human-centered (I’ll be sharing some examples about self-driving cars, momentarily).

One naturally supposes that being in the field of AI axiomatically imbues a temptation to be tech-centered. Everybody is rushing headfirst toward endowing AI with increasingly advanced capabilities. Some believe that perhaps we can get contemporary AI into a mode whereby it will somehow go into a kind of computational cognitive supernova and spring forth into sentience (see my coverage at this link here).

Just to clarify, we do not yet have any sentient AI. We don’t know if it is possible to produce sentient AI, nor whether sentient AI will miraculously arise. All of today’s AI is non-sentient and not close to being sentient. I want to set the record straight since there are tons of blaring headlines that suggest we either have sentient AI or are on the cusp of having it. Not so.

Back to the focus on human-centered design, the gist is that it makes abundant sense to try and get those crafting AI to come around to the human-centered perspective. Instead of the myopic tech-first AI, it would seem prudent and altogether advantageous to aim for devising human-centered AI. This is the same strident idea that is said to be applied to any kind of tech.

We might as well apply those vaunted human-centered precepts to AI systems.

Doing so has spawned a realm known as Human-Centered AI, often abbreviated as HCAI or sometimes shortened into HAI. Here’s a succinct way to describe HCAI: “Human-centered AI (HCAI) is an emerging area of scholarship that reconceptualizes HCD in the context of AI, providing human-centered AI design metaphors and suggested governance structures to develop reliable, safe, and trustworthy AI systems” (per the NIST report earlier cited).

HCAI is a mashup of AI and HCD.

That might seem somewhat cryptic and excessively bloated with abbreviations. The expanded version would be to say that Human-Centered AI is a mashup of AI and the principles of Human-Centered Design. We want to shift away from a tendency to devise AI based solely on tech-centered precepts and instead encourage a human-centered preference.

You now know what HCAI is all about. We need to move onward to the next pressing question.

Can we get AI developers and their company leaders to adopt Human-Centered AI methods?

Maybe yes, maybe no.

Recall that I previously pointed out that not everyone necessarily goes along with the human-centered focus in general. The same can be said about the particular inclusion of human-centered into the AI realm. Some do not abide because they are unaware of what HCAI is. Others take a cursory look at HCAI and decide they’d rather keep doing what they are already doing. You’ve also got cynics that try to denigrate Human-Centered AI as one of those touchy-feely kinds of feel-good approaches. This often appeals to those heads-down full-tech bits-and-bytes types that would prefer to be tech steeped and not deal with what they construe as superfluous matters.

Allow me a moment to identify why Human-Centered AI (HCAI) is gaining strength and becoming a vital part of any serious AI effort.

You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.

My extensive coverage of AI Ethics and Ethical AI can be found at this link here and this link here, just to name a few.

If you perchance were closely following along when I provided the definition or explanation about Human-Centered Design (HCD), you might have keenly observed this sentence in the NIST excerpt: “Furthermore, by placing humans at the center of such approaches, they naturally lend themselves to a deeper focus on larger societal considerations such as fairness, bias, values, and ethics.” Thus, in the case of Human-Centered AI, we can help infuse AI Ethics and Ethical AI into the AI design and development process by earnestly adopting HCAI as a methodology worth doing for AI systems.

You can somewhat convincingly argue that much of the AI For Bad that was unintentional might have occurred due to the AI developers and their company leaders not utilizing an HCAI approach. Had they done so, the odds are heightened that they would have carefully considered the Ethical AI elements and devised the AI to be ethically appropriate. I might add that this judiciously proffers a valuable twofer in that if the AI so devised is seemingly ethically sound, the chances are that the firm and its AI developers will avoid having their AI cross over into acting unlawfully. That’s a quite notable benefit since AI that goes awry is likely to bring lawsuits against the makers and possibly even involve prosecution for criminal acts (see my analysis at the link here of criminal accountability for bad actor AI).

Let’s take a moment to briefly consider some of the key Ethical AI precepts to illustrate what ought to be a vital human-centered or HCAI focus for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy overall to do some handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
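
To show what that rubber-meets-the-road difficulty looks like in practice, here is one hedged illustration of turning a single broad principle (Justice & Fairness) into running code, via the commonly used demographic parity gap. The toy data and the 0.1 tolerance are placeholders of my own, not any legal or regulatory standard.

```python
# Measuring demographic parity of a model's positive outcomes across groups.
def demographic_parity_gap(predictions, groups):
    """Difference in positive-outcome rates between the groups present."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]        # toy model decisions (1 = approve)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"parity gap = {gap:.2f}")          # 0.75 - 0.25 = 0.50 here
if gap > 0.1:                             # placeholder tolerance, not a standard
    print("Fairness review required before fielding.")
```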

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the emerging norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. Please be aware that it takes a village to devise and field AI. And the entire village has to keep on its toes about AI Ethics.

To provide you with a taste of what a contemporary Human-Centered AI methodology might include, here are some points made by the NIST report (a brief code sketch follows the list):

  • Define the key terms and concepts related to AI systems and the scope of their intended impact
  • Address the use of sensitive or otherwise potentially risky data
  • Detail standards for experimental design, data quality, and model training
  • Outline how the risks of bias should be mapped and measured, and according to what standards
  • Detail processes for model testing and validation
  • Detail the process of review by legal or risk functions
  • Set forth the periodicity and depth of ongoing auditing and review
  • Outline requirements for change management
  • Detail any plans related to incident response for such
  • Etc.
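
To give those bullet points some tangible form, here is a hedged sketch of how such a methodology’s checkpoints might be operationalized as a deployment gate. The gate names merely paraphrase the NIST points, and the structure is my own illustration, not a NIST-prescribed format.

```python
# Illustrative deployment gate: block fielding until human-centered checks pass.
REQUIRED_GATES = [
    "terms_and_scope_defined",
    "sensitive_data_reviewed",
    "experimental_design_standards_met",
    "bias_mapped_and_measured",
    "model_testing_and_validation_done",
    "legal_risk_review_complete",
    "audit_schedule_set",
    "change_management_defined",
    "incident_response_plan_in_place",
]

def ready_to_field(completed: set[str]) -> bool:
    missing = [g for g in REQUIRED_GATES if g not in completed]
    for gate in missing:
        print(f"BLOCKED: {gate} not satisfied")
    return not missing

# Usage: deployment stays blocked until every gate is checked off.
ready_to_field({"terms_and_scope_defined", "bias_mapped_and_measured"})
```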

There are a wide variety of HCAI-oriented methodologies available.

Some of them are free and provided on an open-source basis, while others are licensed or sold to those that wish to use that particular variant. Like a box of chocolates, you need to be mindful of which HCAI methodology you opt to use and what it precisely contains. Some, for example, are aimed at specific types of AI systems, such as only Machine Learning or Deep Learning instances. Also, there are HCAI methodologies that are good for real-time AI systems while others are weak in that regard. See my discussion at this link here.

I would urge that you wisely select a hopefully suitable HCAI methodology and make sure that you are astutely adopting it. Do not just dump the HCAI methodology onto unsuspecting AI developers. Do not get one merely so that you can simply checkmark that you have one. There are lots of ways to undercut and poison just about any HCAI methodology if you falter in adopting it. Do the right thing, in the right way.

Be cautious too about analysis paralysis in choosing an HCAI methodology. Some AI groups get into turf wars about which HCAI methodology is “best” and end up expending an enormous amount of effort fighting over the selection process. Meanwhile, the AI horse is being let out of the barn. Bad move.

At this juncture of this discussion, I’d bet that you are desirous of some examples that might showcase the value of adopting Human-Centered AI (HCAI).

I’m glad you asked.

There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about Human-Centered AI (HCAI), and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
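
For quick reference, here is the level scheme expressed as a small Python enum. The helper function mirrors this article’s framing that Levels 4 and 5 constitute true self-driving; the enum itself is just a convenience of mine, not an official SAE artifact.

```python
from enum import IntEnum

class DrivingAutomationLevel(IntEnum):
    NO_AUTOMATION     = 0  # human does all the driving
    DRIVER_ASSISTANCE = 1  # e.g., adaptive cruise control alone
    PARTIAL           = 2  # ADAS steers and accelerates; human must supervise
    CONDITIONAL       = 3  # system drives in some conditions; human on standby
    HIGH              = 4  # driverless within a limited operational domain
    FULL              = 5  # driverless anywhere a human could drive

def is_true_self_driving(level: DrivingAutomationLevel) -> bool:
    # Per this article's framing: Levels 4 and 5 are "true" self-driving.
    return level >= DrivingAutomationLevel.HIGH
```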

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 car.

Self-Driving Cars And Human-Centered AI (HCAI)

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
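
A toy sketch can make that point vivid. The following heavily simplified one-dimensional sense-plan-act loop shows that even a basic behavior such as braking for a shrinking gap has to be an explicitly coded rule; real AI driving stacks are vastly more elaborate, and everything here is illustrative only.

```python
# A cartoon sense-plan-act loop: nothing is "known" unless someone codes it.
def sense(distance_to_obstacle_m: float) -> dict:
    return {"obstacle_m": distance_to_obstacle_m}

def plan(state: dict, speed_mps: float) -> float:
    # Brake if the 2-second following gap is violated (a coded rule, not instinct).
    return -3.0 if state["obstacle_m"] < 2.0 * speed_mps else 0.0

def act(speed_mps: float, accel_mps2: float, dt: float = 0.1) -> float:
    return max(0.0, speed_mps + accel_mps2 * dt)

speed = 15.0  # m/s
for gap in [100.0, 60.0, 25.0]:           # obstacle closing in over three ticks
    accel = plan(sense(gap), speed)
    speed = act(speed, accel)
    print(f"gap={gap:5.1f} m -> accel={accel:+.1f} m/s^2, speed={speed:.2f} m/s")
```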

Let’s dive into the myriad aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the notable value of Human-Centered AI (HCAI).

Let’s use a straightforward example. An AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.

Probably the main reason they might notice the autonomous vehicles right now is the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars obey all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.

That’s something we might all need to get accustomed to, rightly or wrongly.

Back to our tale.

Turns out that a handful of unseemly concerns start to arise about the otherwise innocuous and generally welcomed AI-based self-driving cars, specifically:

1) Pick-up and Drop-off (PUDO) difficulties

2) Insufficient in-vehicle self-driving car status for passengers

3) Dangerous confusion for pedestrians as to AI driving system intentions

4) Worries about privacy intrusions due to self-driving cars

5) Ethical qualms about AI driving as per the infamous Trolley Problem

We’ll take a close look at each of those factors. They each represent some of the most challenging struggles for the ongoing expansion of self-driving car tryouts. To clarify, the biggest challenge is to ensure that the self-driving car safely goes from point A to point B. These aforementioned additional challenges are on top of that core or fundamental set of challenges.

As will become readily apparent, the five stated issues are integrally related to the key notion of human-centered design. In that sense, they showcase the importance of attention to Human-Centered AI (HCAI).

#1: Pick-up and Drop-off (PUDO) Difficulties

When you use a human-driven ridesharing service, the one thing that you assume will take place correctly entails getting picked up for the ride at a convenient spot and likewise getting dropped off at the apt destination when the ride is completed (this is known by the handy acronym of PUDO). This can be a lot trickier than it seems. If you are standing at a red curb, should the human driver go ahead and stop there to pick you up? Keep in mind that the law usually states that stopping at a red curb is not allowed.

Sometimes the “convenient” spot for you to be picked up is not necessarily convenient for the driver of the vehicle. I’m sure that you’ve crossed a street on occasion to try and make it easier for a ridesharing driver to meet you. A type of dance can arise whereby you try to get closer to the car and the driver tries to get closer to wherever you are.

For many of the firms that are crafting AI driving systems, the whole matter of suitable pick-up and drop-off was relegated to a relatively low priority on the list of things to do.

AI developers generally figured a human passenger should always make their way to the self-driving car. The onus is on the passenger to do this. The AI driving system doesn’t have to be concerned with the PUDO. Just get near enough and that’s that. In that way of thinking, the pick-up and drop-off locations have nothing to do with matters such as inclement weather (when it rains, the PUDO ought to be adjusted accordingly) or traffic situations (perhaps other cars are zipping past and the potential passenger has to dart into harm’s way), and so on.

Human drivers aren’t usually of that mindset. They realize that they won’t get a monetary tip or might get a foul rating score if they don’t earnestly try to make the PUDO relatively convenient for the passenger. Getting to a useful spot can be a crucial driving skill. This also promotes safety: a passenger anxiously prodded into desperately reaching the vehicle can make unwise choices and get harmed by all sorts of intervening traffic.
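
Here is a hedged sketch of what a more human-centered PUDO selection might look like in code: candidate curb spots are scored on passenger-facing factors such as walking distance, rain exposure, and whether the rider must cross a live traffic lane. The fields and weights are entirely illustrative assumptions of mine, not any automaker’s actual logic.

```python
from dataclasses import dataclass

@dataclass
class CurbSpot:
    walk_distance_m: float
    legal_to_stop: bool
    sheltered: bool                  # matters in inclement weather
    requires_street_crossing: bool

def pudo_score(spot: CurbSpot, raining: bool) -> float:
    if not spot.legal_to_stop:       # e.g., a red curb is off the table
        return float("-inf")
    score = -spot.walk_distance_m    # shorter walk is better
    if raining and not spot.sheltered:
        score -= 50                  # penalize leaving the rider in the rain
    if spot.requires_street_crossing:
        score -= 100                 # darting across traffic is a safety risk
    return score

candidates = [
    CurbSpot(5, False, False, False),   # closest, but a red curb
    CurbSpot(30, True, True, False),
    CurbSpot(15, True, False, True),
]
best = max(candidates, key=lambda s: pudo_score(s, raining=True))
print(best)   # the 30 m, sheltered, legal, same-side spot wins
```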

In the case of AI self-driving cars and AI driving systems that don’t do a robust job of PUDO, you could say that this showcases a lack of human-centered focus by the AI developers. They seemingly shrugged off the pick-up and drop-off facets. That’s the kind of mindset that a traditional tech-centered perspective might garner. Make the humans come to the machine, rather than the machine going to the humans.

Problem solved, the techie might gleefully indicate and then wash their hands of the whole affair.

To be fair, this is not the only basis for the lack of a fervent PUDO focus. The energy of AI developers that build AI driving systems has been devoted to making sure that a self-driving car can proceed safely from point A to point B. Without that capability, pretty much nothing else matters. That being said, anyone that gets injured or demonstrably disturbed due to a lousy PUDO of a budding self-driving car effort is likely to make highly publicized noises, and the societal reaction could be extremely adverse. There would indubitably be AI techies complaining that any such backlash misses the momentous feat of the AI driving safely from point A to point B, and clamoring that the public is letting the tail wag the dog. Perhaps so, but societal reactions can spell good fortune or outright doom for burgeoning self-driving cars.

For my in-depth coverage of these types of subtle but ultimately make-or-break issues about autonomous vehicles, see my discussion at the link here that provides highlights of a study that I co-authored with a Harvard faculty member.

#2: Insufficient In-vehicle Self-driving Car Status For Passengers

A human driver can be a chatty cat. I mean to say that some ridesharing drivers are continually chattering throughout a ride. Not all passengers relish this. Some want peace and quiet. Others like to have a bit of enjoyable banter.

One aspect of a human driver is that you can almost certainly ask the driver questions about the driving actions being undertaken. Why did the driver take that right turn so quickly? Why is the driver not going faster? You get the idea.

The assumption by some AI developers was that the passenger of a self-driving car would merely get into the autonomous vehicle and then be as quiet as a mouse. No need to have the AI interact with the passenger. Passengers are nothing more than a lump of clay. They sit in the car and they get taken to their destination. End of story.

To the surprise of some automakers and self-driving tech firms, they found out that passengers want to know what is going on. The minimal approach involves having an LED screen that displays the route of the self-driving car. This though doesn’t especially aid in dealing with aspects such as why a quick turn was made or why the car is slowing down. Sure, electronic messages can be displayed but there is little or no interaction allowed between the AI and the passengers.

Once again, you could construe this as a human-centered AI development issue. The assumption that passengers are akin to a suitcase or a delivery box is a tech-centered viewpoint. People are people. If you are going to be transporting people, you have to make them feel comfortable and be responsive to their inquiries. A quick solution for some self-driving car operations is to have an onboard voice connection to a human agent in a remote locale that is monitoring the cars of the fleet. This has numerous downsides, as I’ve at length analyzed at the link here.
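
As a hedged sketch of the kind of passenger-facing interaction that a human-centered design would prioritize, consider mapping internal driving events to plain-language explanations. The event names and messages below are hypothetical placeholders of mine, not any deployed system’s vocabulary.

```python
# Translate internal driving events into rider-friendly status messages.
EXPLANATIONS = {
    "hard_brake": "Braking for a vehicle that stopped suddenly ahead.",
    "quick_turn": "Turning now to follow the planned route.",
    "slowdown":   "Slowing for a construction zone on this block.",
    "reroute":    "Taking a different street to avoid heavy traffic.",
}

def explain(event: str) -> str:
    # Always answer the rider's implicit "why did the car just do that?"
    return EXPLANATIONS.get(event, "Adjusting driving for current conditions.")

print(explain("quick_turn"))
print(explain("unknown_event"))   # graceful fallback rather than silence
```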

#3: Dangerous Confusion For Pedestrians As To AI Driving System Intentions

Imagine that you are nonchalantly walking on the sidewalk and desirous of crossing the street. You notice that a car is coming down the street toward you. The usual thing you would do is look at the human driver. Does the driver see me? Is the driver looking away? Are they watching a cat video and not paying attention to the roadway?

The problem with self-driving cars is that there isn’t a human driver. Well, I guess you could say that’s a plus, allowing us to do away with drunk driving and other errant human driving behaviors. But when it comes to figuring out what a car is going to do, you customarily try to see what the driver is doing (via their head, their eyes, their overall posture, etc.).

When there isn’t a driver in the driver’s seat, how will you know what a car is going to do? You are absent the usual clues. Some automakers and self-driving tech firms did not particularly account for this problem in their design efforts. In their minds, the AI driving system would always be driving safely and legally. What else needs to be done?

You could argue that this is yet another example of a tech-centered view of the world versus a human-centered perspective. If you believe that humans will all simply give self-driving cars the right-of-way at all times, perhaps it doesn’t matter that the directional cues of a human driver are missing. On the other hand, in the real world, something must be provided.

As I’ve examined at the link here, numerous proposals are being tried out, such as putting special lightbulbs on the hood of self-driving cars that light up and kind of point in the direction of where the autonomous vehicle is heading. Another notion is that specialized devices will acknowledge to a pedestrian that they have been seen by the AI driving system. All manner of sensible options, including at times quirky approaches, are being explored.
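
In the spirit of those proposals, here is a hedged sketch of mapping the AI’s planned maneuver to an external signal that pedestrians can read, including an explicit "I see you" acknowledgment. The signal vocabulary is invented purely for illustration; no industry standard has been settled on.

```python
# Hypothetical vocabulary of external signals a self-driving car might show.
INTENT_SIGNALS = {
    "yielding_to_pedestrian": "steady green band + 'WAITING FOR YOU' text",
    "about_to_proceed":       "pulsing amber band",
    "turning_left":           "left-sweeping light strip",
    "turning_right":          "right-sweeping light strip",
}

def external_signal(planned_maneuver: str, pedestrian_detected: bool) -> str:
    if pedestrian_detected and planned_maneuver == "yielding_to_pedestrian":
        return INTENT_SIGNALS["yielding_to_pedestrian"]  # acknowledge: "I see you"
    return INTENT_SIGNALS.get(planned_maneuver, "no signal")

print(external_signal("yielding_to_pedestrian", pedestrian_detected=True))
```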

#4: Worries About Privacy Intrusions Due To Self-driving Cars

I’ve repeatedly been warning about the massive potential of privacy intrusions due to the advent of self-driving cars. Nobody is giving much attention to the matter at this time because it hasn’t yet risen as a realized problem. I assure you that it will ultimately be a humongous problem, one that I’ve labeled the veritable “roving eye” of self-driving cars (see the link here).

Your first thought might be that this is assuredly solely about the privacy of passengers. The AI could monitor what riders are doing while inside a self-driving car. Sure, that’s a definite concern. But I’d argue it pales in comparison to the less obvious qualm that has even larger ramifications.

Simply stated, the sensors of the self-driving car are collecting all kinds of data as the autonomous vehicles roam throughout our public roadways. They capture what is happening on the front lawns of our houses. They capture what people are doing when walking from place to place. They show when you entered an establishment and when you left it. This data can be easily uploaded into the cloud of the fleet operator. The fleet operator could then stitch this data together and essentially have the details of our day-to-day lives as they occur while outdoors and throughout our community.

We don’t seem to care about that privacy intrusion potential right now because there are only handfuls of self-driving cars being tried out in relatively few locales. Someday, we’ll have hundreds of thousands or millions of self-driving cars crisscrossing all of our major cities and towns. Our daily outdoor lives will be recorded non-stop, 24×7, and we can be tracked accordingly.

A tech-centered AI developer would likely downplay this. They would suggest that all they are doing is implementing the AI technology. It is someone else that needs to be worrying about how the technology is going to be used.

In contrast, a human-centered perspective would already be handwringing about how this gigantic ticking privacy timebomb is going to be dealt with. Better sooner than later. It is yet another good reason to have Human-Centered AI (HCAI) in the mix of things.
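
To make the human-centered alternative concrete, here is a hedged sketch of on-device redaction, in which identifying content is blurred on the vehicle before anything is uploaded to the fleet cloud. The detection functions are stubs standing in for real perception components; this is a design illustration of mine, not a claim about any fleet’s actual pipeline.

```python
# Stub detectors standing in for real perception models (hypothetical).
def detect_faces(frame):          return [(10, 10, 40, 40)]     # stub bounding boxes
def detect_license_plates(frame): return [(100, 80, 140, 95)]   # stub bounding boxes

def blur_region(frame, box):
    frame = dict(frame)                       # shallow copy of the toy "frame"
    frame.setdefault("blurred", []).append(box)
    return frame

def redact_before_upload(frame):
    """Strip identifying detail on-device; the cloud never sees raw imagery."""
    for box in detect_faces(frame) + detect_license_plates(frame):
        frame = blur_region(frame, box)
    return frame

print(redact_before_upload({"pixels": "..."}))
```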

#5: Ethical Qualms About AI Driving As Per The Infamous Trolley Problem

This last point is a bit complex. I’ll shorten it and encourage you to consider perusing my lengthy analysis at the link here. In short, when you are driving a car or riding simply as a passenger in a car, you are always in danger as to what happens to the car. The car might get hit by another car. The car might plow into a lamppost. All sorts of bad things can occur.

How does a human driver decide what driving actions to take?

My favorite example is a case reported in the news. A human driver was legally proceeding into an intersection when another car ran a red light and was perilously coming toward the legally allowed car. The driver told reporters that he had to make a tough decision. He could keep going and likely get rammed by the intruding car, possibly getting killed in the process or certainly getting badly injured. Or he could steer his car away from the anticipated collision, but there were nearby pedestrians. The odds were high that he might hit or kill some of the pedestrians. Doing so, though, would likely spare his life.

What would you do?

The thing is, we face these driving dilemmas all the time. We don’t especially think about them until things go badly. I assure you that every driver at all times is mentally embroiled in having to decide which driving actions to take. Their life depends on it. The lives of the passengers in their vehicle depend upon it. The lives of other drivers and passengers in nearby cars depend upon it. Pedestrians, bicyclists, and others depend upon it too.

I bring this up because of a question that is simple to ask but extremely difficult to answer.

What should AI do when confronted with life-or-death driving decisions?

If you were a passenger in a self-driving car that was in the midst of an intersection and a human-driven car ran the red light, what do you want the AI driving system to do? Should the AI try to avoid the hit and yet possibly mow into a group of pedestrians? Or should the AI not take any special action and just allow the autonomous vehicle to get rammed? These kinds of ethical dilemmas are often depicted in the context of a famous thought experiment known as the Trolley Problem (see the link here).

Currently, not many are treating the Trolley Problem with sober seriousness. By and large, most AI driving systems do not yet try to evaluate evasive driving actions. Even if the AI does so, there is an open and agonizing question of how the AI is supposed to be programmed to make these challenging ethical decisions.
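
A small code sketch shows why the programming question is so agonizing. An evasive-maneuver evaluator needs explicit numeric weights on competing harms, and nothing in the technology itself supplies them; the deliberately unfilled placeholders below are the whole point.

```python
# Who gets to set these numbers? That is a societal decision, not a coding one.
HARM_WEIGHTS = {
    "occupant_injury":   None,   # deliberately left as placeholders
    "pedestrian_injury": None,
    "property_damage":   None,
}

def maneuver_cost(predicted_harms: dict) -> float:
    total = 0.0
    for harm, probability in predicted_harms.items():
        weight = HARM_WEIGHTS[harm]
        if weight is None:
            raise NotImplementedError(
                f"No agreed-upon societal weight for '{harm}'"
            )
        total += weight * probability
    return total

# Any call fails today -- which is precisely the point the Trolley Problem makes.
```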

A tech-centered perspective would tend to say do not worry about it until it becomes a societally aware problem. Just keep churning out masses of AI code and see what happens down the line.

A human-centered viewpoint would likely assert that we need to be getting on top of this matter. This is not a matter though that can be decided by each automaker and self-driving tech firm alone. A larger societal focus is needed for an across-the-board means of figuring this out.

Conclusion

Diehard AI developers are often skeptical of what they consider to be fads or fluffy nonsense. Many AI techies have asked me a starkly pointed question, in a haughty and doubtful tone, about this rising interest in Human-Centered AI (HCAI).

They ask this: Where’s the beef in HCAI?

My answer is that the meat and the potatoes and the dessert are all to be found in the realization that a human-centered approach to AI will give us a heightened chance of having AI be a success and become a ubiquitous contribution to society at large. Those that wish instead to lean into and stay mired in a tech-centered-only approach are likely to find that the AI they create is wholly problematic and society is going to get really riled up about it. Heads will roll. AI might get rejected as a technological wonderment and become a villainous outcast.

I vociferously vote for human-centered thinking when it comes to AI.

Source: https://www.forbes.com/sites/lanceeliot/2022/04/01/heres-why-ai-ethics-is-touting-that-human-centered-ai-is-crucial-to-our-ai-symbiotic-existence-such-as-the-advent-of-autonomous-self-driving-cars/