AI Ethics Stepping Up To Guide How AI For Children Needs To Be Suitably Devised, Which Can Be Overlooked For Example In The Rapid Drive Toward Autonomous Self-Driving Cars

It is often said that our children are our future.

Another oft-repeated refrain is that Artificial Intelligence (AI) is our future.

What happens when we combine AI and our children?

You would certainly hope that the notable mash-up of AI and children would be a good thing. Perhaps AI can be used as a means of boosting computer-based tutoring and educational systems. Children would seemingly benefit from that type of AI. Seems prudent. Maybe AI can be included in modern-day toys and allow for interactivity that could inspire and expand young minds. Sure, that would apparently do the world a lot of good.

The thing is, we could equally encounter the dour side of AI. Suppose an AI-based tutoring system had embedded stereotypes and promulgated those adverse biases when aiding youngsters who are learning how to read and write. Not good. Imagine an AI-infused teddy bear outfitted with electronic sensors that records every utterance of your child and uploads it into an online database for analysis, perhaps used to find ways to monetize your beloved offspring. Horrible!

We need to take stock of where AI is and what we need to do about AI and children.

Fortunately, a recently released study by the World Economic Forum (WEF) provides keen insights into the dynamics of AI and children, markedly stating this: “What is at stake? AI will determine the future of play, childhood, education, and societies. Children and youth represent the future, so everything must be done to support them to use AI responsibly and address the challenges of the future” (per the WEF’s Artificial Intelligence For Children: Toolkit, March 2022). As a side note and for clarity of disclosure, I serve on a WEF committee regarding AI and believe that these kinds of collaborative and international efforts exploring various societal impacts concerning AI are fruitful and laudable.

A rather glaring and startling aspect of today’s pell-mell rush to design and deploy AI is that we seem to frequently forget about children. That’s sad. That’s worse than sad; it is an undercutting of an entire segment of society. I am reminded of a revered quote by Nelson Mandela: “There can be no keener revelation of a society’s soul than the way in which it treats its children.”

Let’s mull over two essential ways in which AI and children might interact:

1. Via AI that is specifically devised for children

2. Via AI that was devised for adults but which children might use anyway

Take a moment to soberly ponder those two facets.

First, if you are developing an AI system specifically aimed at children, it would seem blatantly obvious that your design focus should be child-centric. That being said, it is shocking and maddening how many AI-for-kids endeavors do a lousy job of being child-focused. The usual assumption is that children are merely nascent adults and all you need to do is “dumb down” the AI accordingly. That is an entirely false and misguided approach. Children are markedly different from adults, and the variance in cognitive elements needs to be given due consideration for any AI that will be intertwined with children and their activities.

Second, for those who are devising AI for adult-related usage, you cannot thoughtlessly assume that no child will ever utilize or come into contact with the AI system. That is a big mistake. There is probably a sizable chance that the AI will inevitably be accessed or relied upon by a non-adult. A child or a teenager might readily try to use the AI, or sneakily use their parent’s access for their own youthful trespass. An AI system that does not have sufficient checks and balances for dealing with the potential of children using it will undoubtedly get someone into serious trouble, especially if the child gets harmed cognitively or possibly even physically.

You might be thinking that if an AI system injures or mistreats a child, it is the fault of the AI and the AI should shoulder the burden of responsibility. This is pretty much nonsensical thinking in today’s era of AI. Please realize that there is no AI today that is sentient. We don’t have sentient AI and we don’t know when or if we will have sentient AI. Saying that an AI system is the accountable party for damages, or the responsible perpetrator of a crime, is disingenuous at this time. For my coverage about the liability of companies and AI developers for the crimes or misdeeds of their AI, see the link here.

From a bottom-line perspective about bearing the responsibility for AI that goes off the rails, you have to realize that it takes a village to devise and field AI. This can include the company leaders that oversaw the AI coming to fruition. There are teams of AI developers that were instrumental in the AI being formulated. You have to also include those that fielded the AI and made it available for use. All in all, a lot of humans are to be held accountable. The AI cannot be held accountable, and won’t be for quite a while, if ever, or at least until the AI attains some variant of legal personhood, which I’ve analyzed at the link here.

Let’s get back to the crucial insight that AI aimed at adults might nonetheless ultimately get into the hands of children.

Developers of AI that omit that possibility are in for a rude awakening when they get sued or hauled into court. Simply shrugging your shoulders and saying that you hadn’t thought about it is not going to garner you much sympathy. Admittedly, you might have to be somewhat creative in your design efforts of imagining what a child might do when coming into contact with the AI, but that’s part of what a properly-minded AI developer needs to consider. The WEF study makes this quite clear: “Companies should keep in mind that children often use AI products that were not designed specifically for them. It’s sometimes difficult to predict what products might later be used by children or youth. As a result, you should carefully consider whether children or youth might be users of the technology you’re developing.”

Here’s what I do when devising AI systems. I consider the full range of usage possibilities (a minimal code sketch follows the list):

a) AI used by one adult alone (at a time)

b) AI used by several adults at the same time

c) AI used by an adult accompanied by a child

d) AI used by a child (just one at a time) and no adults at hand

e) AI used by several children at the same time (no adults at hand)
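To make those categories concrete in code, here is a minimal, purely illustrative sketch (my own hypothetical construct, not drawn from any particular framework) of how a system might represent the usage contexts and flag the ones that demand child-focused safeguards:

```python
from enum import Enum, auto

class UsageContext(Enum):
    """The five usage possibilities an AI system ought to anticipate."""
    ONE_ADULT = auto()          # (a) one adult alone at a time
    MULTIPLE_ADULTS = auto()    # (b) several adults at the same time
    ADULT_WITH_CHILD = auto()   # (c) an adult accompanied by a child
    CHILD_ALONE = auto()        # (d) one child, no adults at hand
    MULTIPLE_CHILDREN = auto()  # (e) several children, no adults at hand

def requires_child_safeguards(context: UsageContext) -> bool:
    """Any context that might involve a child should trigger stricter guardrails."""
    return context in {
        UsageContext.ADULT_WITH_CHILD,
        UsageContext.CHILD_ALONE,
        UsageContext.MULTIPLE_CHILDREN,
    }
```

The point of even so simple a construct is that the child-involving contexts become an explicit, testable part of the design rather than an afterthought.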

I’ll be identifying some real-world examples of those possibilities when I later on herein discuss how AI and children ought to be a foremost consideration in the making and fielding of autonomous self-driving cars. Hang onto your hat for that interesting assessment.

AI that is used by adults can be tricky and make sordid attempts to pull the wool over the eyes of grown-ups. Though that shouldn’t be happening, we have to expect that it will occur. Adults need to be on their toes at all times when interacting with AI systems. Sorry to say, that is the price of being an adult.

We likely don’t think that children should be in that same precarious predicament. Expecting that kids should be on alert and continually suspicious of AI is just not a reasonable presumption. Children are generally classified as so-called vulnerable users when it comes to AI systems. Kids do not particularly have the cognitive wherewithal to be watchful and know when the AI is taking advantage of them (I daresay adults struggle on such matters, and we would ostensibly assume that children will be even more vulnerable; though sometimes kids are in fact more attentive than adults, we cannot take that as the steadfast rule).

You might be puzzled as to why anyone would be thinking that AI might do bad things. Isn’t AI supposed to be good for us? Aren’t we heralding the arrival of contemporary AI? Headlines in the latest news reports, along with blaring posts on social media, proclaim that we should sound the trumpets for the arrival of each new AI system that gets announced daily.

To answer those pointed questions, allow me a moment to bring up a bit of recent history about today’s AI and the importance of AI Ethics, and the rising tide toward Ethical AI. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.

Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to right the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.

On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might, for example, embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts; see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).

Let’s take a moment to briefly consider some of the key Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.

For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:

  • Transparency: In principle, AI systems must be explainable
  • Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
  • Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
  • Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
  • Reliability: AI systems must be able to work reliably
  • Security and privacy: AI systems must work securely and respect the privacy of users.

As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:

  • Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
  • Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  • Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  • Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
  • Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

I’ve also discussed various collective analyses of AI ethics principles, including a set devised by researchers who examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), which my coverage explores at the link here, and which led to this keystone list:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.

The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

How can we apply the AI Ethics principles in the specific context of AI and children?

Easy-peasy.

An especially substantive aspect entails aptly disclosing what your AI is targeting in terms of the uses envisioned. This is part and parcel of the Ethical AI rule of transparency.

The WEF study has proposed that we should be labeling AI in somewhat the same manner that we label other products and services in our society. You go to the grocery store and expect to be able to look at cans of food that have labels denoting what the food consists of. When you use a drive-thru at a fast-food eatery, many of them nowadays display the calories and sugar composition in the proffered food. All kinds of items that we buy and consume are abundantly decorated with informative labels.

Doing the same for AI systems would be quite useful.

This is how the WEF report describes the matter: “The AI labeling system is designed to be included in all AI products on their physical packaging and online accessible through a QR code. Like nutritional information on food packaging, the labeling system is intended to concisely tell consumers, including parents and guardians as well as children and youth, how the AI works and what options are available to the users. All companies are encouraged to adopt this tool to help create greater trust and transparency with the purchasers and child users of their products.”

There are six keystone constructs of the WEF recommended AI labeling (a toy data-structure sketch follows the list):

1) Age: What age are the technology and content designed for?

2) Accessibility: Can users with different abilities and backgrounds use it?

3) Sensors: Does it watch or listen to a user with cameras and microphones?

4) Networks: Can users play with and talk with other people when using it?

5) AI Use: How does it use AI to interact with users?

6) Data Use: Does it collect personal information?
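To see how lightweight such a label could be, here is a rough sketch of my own devising (a hypothetical record, not an official WEF schema) that captures the six constructs in a structured form suitable for rendering on packaging or behind a QR code:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AILabel:
    """Hypothetical record mirroring the six WEF labeling constructs."""
    age_range: str          # 1) Age: what ages the technology and content target
    accessibility: str      # 2) Accessibility: support for differing abilities/backgrounds
    sensors: list = field(default_factory=list)  # 3) Sensors: cameras, microphones, etc.
    networks: bool = False  # 4) Networks: can users interact with other people
    ai_use: str = ""        # 5) AI Use: how AI is used to interact with users
    data_use: str = ""      # 6) Data Use: what personal information is collected

# An invented example product label, purely for illustration.
label = AILabel(
    age_range="8-12",
    accessibility="Screen-reader compatible; adjustable text size",
    sensors=["microphone"],
    networks=False,
    ai_use="Voice-based conversational tutoring",
    data_use="Stores audio transcripts locally; nothing uploaded",
)

# The JSON payload is what a QR code on the product could point to.
print(json.dumps(asdict(label), indent=2))
```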

Envision that a parent or guardian could inspect such a label so that they could make a reasoned decision about whether to allow their child to interact with the labeled AI. Rather than parents or guardians being completely in the dark about what the AI is likely to do, they would have some semblance of what the AI seems to contain. For doting parents and guardians, this would be prized and eagerly welcomed.

I’m sure that the skeptics among you are somewhat dubious about AI labeling.

First, the AI labeling might be a lie. The company or developers might fib on the label. In that case, the parent or guardian is no better off, and indeed might be worse off due to letting down their guard based on their belief that the label is true and accurate. Yes, this is definitely a possibility. Of course, just like any labeling, there needs to be skin in the game and a means of going after those that distort or outright lie on their AI label indications.

As I’ve covered many times, we are heading toward a slew of new regulations and laws that are going to deal with the governance of AI. This is a foregone conclusion. Up until now, society has pretty much assumed that the unethical and unlawful AI promulgators would get their justice served via a conventional means of detection and enforcement. Since that hasn’t been entirely satisfactory, and since AI is increasingly becoming ubiquitous, you can fully expect all manner of legal stipulations that will focus on AI.

In addition, one would hope that market mechanisms will also come into play. If a firm that labels its AI falsely is caught red-handed doing so, the assumption is that market forces will hit them like a truck. People will not buy or subscribe to the AI. The reputation of the entity will suffer. Those that bought or licensed the AI on the false pretenses of the AI labeling will sue for false representations. And so on. I’ve discussed the coming wave of societal backlashes about unethical AI at the link here.

I suppose that a fervent skeptic would have additional qualms about the AI labeling.

For example, in a cynical fashion, the claim might be that parents and guardians won’t read the AI labels. Just as most adults do not read the labels on food packaging or those displayed on ordering menus, we would likely assume that few parents or guardians will take the time to inspect an AI label.

Though I concede that this is bound to happen, trying on this basis alone to sweep away the value of having the AI labels is like tossing the baby out with the bathwater (well, that’s an old adage that maybe needs retiring). Some people will carefully and fully study the AI labels. Meanwhile, some people will only give a cursory glance. There are of course going to be some people that ignore the AI labels entirely.

But you need to have the labels to at least ensure that those who will read them have the AI labels available. On top of that, you can stridently argue that the very need to have an AI label will undoubtedly get many AI makers to be more thoughtful about the nature of their AI. I’m suggesting that even if very few people read the AI label, the firm has to be mindful that in the end it will be held responsible for whatever the AI label says. That by itself will hopefully prompt AI developers and companies crafting AI into being more respectful of AI Ethics than otherwise might be the case.

If you want to debate the proposed AI labeling scheme, I’m all ears on that. Maybe we need more than the six factors. It seems unlikely that we need fewer. Perhaps a scoring and weighting component is needed. All in all, landing on the best possible AI labeling approach is a sensible and reasonable aspiration. Avoiding an AI labeling scheme or denouncing the very existence of AI labeling is seemingly out of touch and does not help with the ever-expanding use of AI that will get into the hands of our children.

Okay, with that last remark, I’m sure that some will contend that AI labeling is being set up as some kind of silver bullet. Nobody said that. It is one of the numerous steps and important protective measures that we need to undertake. An entire portfolio of AI Ethics approaches and cross-pollinating angles is needed.

On the portfolio notion, those of you who are familiar with prior efforts in analyzing the impacts of AI and children might recall that UNICEF published a report last year containing some quite valuable discernments on this particular subject. In their Policy Guidance On AI For Children study, an especially memorable portion delineates a list of nine major requirements and recommendations:

1. Support children’s development and well-being (“Let AI help me develop to my full potential”)

2. Ensure inclusion of and for children (“Include me and those around me”)

3. Prioritize fairness and non-discrimination for children (“AI must be for all children”)

4. Protect children’s data and privacy (“Ensure my privacy in an AI world”)

5. Ensure safety for children (“I need to be safe in the AI world”)

6. Provide transparency, explainability, and accountability for children (“I need to know how AI impacts me. You need to be accountable for that”)

7. Empower governments and businesses with knowledge of AI and children’s rights (“You must know what my rights are and uphold them”)

8. Prepare children for present and future developments in AI (“If I am well prepared now, I can contribute to responsible AI for the future”)

9. Create an enabling environment (“Make it possible for all to contribute to child-centered AI”)

Those are worthy of being inscribed into a veritable hall-of-fame dedicated to AI and children.

At this juncture of this weighty discussion, I’d bet that you are desirous of some further examples that might showcase the concerns about AI that are used by or for children. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.

Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about AI that is used by or for children, and if so, what does this showcase?

Allow me a moment to unpack the question.

First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.

I’d like to further clarify what is meant when I refer to true self-driving cars.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.

Self-Driving Cars And AI Interacting With Children

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.

Why is this added emphasis about the AI not being sentient?

Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.

With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.

Let’s dive into the myriad of aspects that come into play on this topic.

First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.

Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can later on be overtaken by developers who in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.

I trust that provides a sufficient litany of caveats to underlie what I am about to relate.

We are primed now to do a deep dive into self-driving cars and the Ethical AI possibilities entailing the exploration of AI and children.

Envision that an AI-based self-driving car is underway on your neighborhood streets and seems to be driving safely. At first, you had devoted special attention to each time that you managed to catch a glimpse of the self-driving car. The autonomous vehicle stood out with its rack of electronic sensors that included video cameras, radar units, LIDAR devices, and the like. After many weeks of the self-driving car cruising around your community, you now barely notice it. As far as you are concerned, it is merely another car on the already busy public roadways.

Lest you think it is impossible or implausible to become familiar with seeing self-driving cars, I’ve written frequently about how the locales that are within the scope of self-driving car tryouts have gradually gotten used to seeing the spruced-up vehicles, see my analysis at this link here. Many of the locals eventually shifted from mouth-gaping rapt gawking to now emitting an expansive yawn of boredom to witness those meandering self-driving cars.

Probably the main reason right now that they might notice the autonomous vehicles is the irritation and exasperation factor. The by-the-book AI driving systems make sure the cars are obeying all speed limits and rules of the road. Hectic human drivers in their traditional human-driven cars get irked at times when stuck behind the strictly law-abiding AI-based self-driving cars.

That’s something we might all need to get accustomed to, rightly or wrongly.

Back to our tale.

A youngster gets into a self-driving car for a lift home from school. I realize that you might be somewhat puzzled about the possibility of a non-adult riding in a self-driving car absent any adult supervision. For human-driven cars, there is always an adult in the vehicle due to the need for an adult to be at the steering wheel. With self-driving cars, there won’t be any need for a human driver and therefore no longer an axiomatic need for an adult in the autonomous vehicle.

Some have said that they would never allow their child to ride in a self-driving car without having a trusted adult also in the autonomous vehicle. The logic is that the lack of adult supervision could result in quite untoward and serious consequences. A child might get themselves in trouble while inside the self-driving car and there wouldn’t be an adult present to help them.

Though there is certainly abundant logic in that concern, I have predicted that we will eventually accept the idea of children riding in self-driving cars by themselves, see my analysis at the link here. In fact, the widespread use of self-driving cars for transporting kids from here to there such as to school, over to baseball practice, or their piano lessons is going to become commonplace. I have also asserted that there will need to be limitations or conditions placed on this usage, likely via new regulations and laws that for example stipulate the youngest allowed ages. Having a newborn baby riding alone in a self-driving car is a bridge too far in such usage.

All of that being said, the assumption right now by many of the automakers and self-driving tech firms is that self-driving cars will either be roaming around empty or they will have at least one adult rider present. This will simply be an edict or stated requirement, namely that no children may ride in a self-driving car without an adult present. That certainly eases the dilemma of having to program the AI driving system to contend with having just children in the autonomous vehicle.

You cannot especially blame the firms for taking that stance. They have enough on their hands when dealing with simply getting the AI driving system to safely drive a self-driving car from point A to point B. The assumption is that the rider will be an adult and that the adult will do the proper adult-like activities while inside a self-driving car. This has been essentially the case so far during the public roadway tryouts, since the adults are typically pre-screened, or the adults are so excited about being in a self-driving car that they are mindful of being courteous and obediently quiet.

I can assure you that as we shift into a widening of the tryouts, this rather convenient setup is going to begin to fall apart. Adults will want to bring their kids with them while riding in a self-driving car. The kids will act up. Sometimes the adult will do the right thing and keep the kids from going nuts. Other times the adult will not do so.

Who holds the responsibility when an adult lets a kid do something amiss while riding inside a self-driving car?

I’m sure that you are assuming that the adult has full accountability. Maybe so, maybe not. The argument could be made that the automaker or self-driving tech firm or fleet operator allowed an environment to exist in which a child was able to get harmed, despite the presence of the adult. With the huge amount of money that many of these firms have, you can absolutely anticipate that when a child gets somehow harmed, there will be determined efforts to go after those that made or fielded the autonomous vehicle. Money talks.

That’s a lifetime guarantee.

Eventually, we will likely see instances of newly minted adults riding alone in self-driving cars. Assume that the standard age of adulthood is, say, 18. Someone who had their 18th birthday last week decides they want to go for a ride in a self-driving car. Assume that the firm always does an age check and, in this case, the rider is seemingly an adult.

This “adult” brings into the self-driving car their friends that are ages 15 and 16. What might teenagers do while going for a ride in a self-driving car? I’ll get to that in a moment. I realize that you might want to argue that an adult of, say, age 50 could readily bring youngsters into the self-driving car and let those kids go wild. In essence, it shouldn’t in theory make a difference whether the adult is barely an adult or a more seasoned adult. All I can say is that some would vehemently argue there is a likely difference, on average and in the aggregate.

Suppose an adult starts a ride with a youngster and then gets out of the autonomous vehicle, leaving the youngster alone in the self-driving car. Hopefully, the AI driving system would be programmed to detect this, perhaps by noting the door being opened or by using video cameras to inspect the interior of the self-driving car.
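As a bare-bones illustration of that kind of check (my own sketch; a real AI driving system would rely on far more sophisticated in-cabin perception), the logic might amount to comparing the occupants before and after a door event:

```python
def unaccompanied_minor_alert(prior_occupants, current_occupants, notify):
    """Flag the case where the adults have exited but a minor remains.

    Each occupant is a dict such as {"id": "A1", "is_adult": True};
    `notify` is a hypothetical callback to the fleet operator.
    """
    adults_before = any(o["is_adult"] for o in prior_occupants)
    adults_now = any(o["is_adult"] for o in current_occupants)
    minors_now = any(not o["is_adult"] for o in current_occupants)

    if adults_before and minors_now and not adults_now:
        # An adult started the ride but got out, leaving a child alone.
        notify("Unaccompanied minor detected; alerting the fleet operator.")
```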

The point is that all kinds of shenanigans are going to arise. At some point, there might be enough marketplace demand for letting youngsters go in a self-driving car without having an adult present that some of the automakers or fleet operators will decide it is a money-making option and worth the risks. People at first might sign all sorts of hefty waivers about letting their kids go in a self-driving car in this kind of arrangement.

Keep in mind that there is also the opportunity to monitor the kids that are in a self-driving car. You see, most of the self-driving cars will be outfitted with video cameras pointing into the vehicle. This can allow for doing online video courses while you are riding in a self-driving car or perhaps interacting with your office mates at work. The feature can also be used to see what the kids inside the self-driving car are doing.

For example, you send your youngster to school via the convenient use of a self-driving car. This relieves you of having to drive your offspring. Meanwhile, you pull up on your smartphone a screen that will show you the interior of the self-driving car while your kid is riding in it. You talk with your child. Your child talks with you. This might keep any antics to a minimum.

In any case, back to my initial indication, assume that a youngster gets into a self-driving car.

During the ride, the AI driving system carries on an interactive dialogue with the youngster, akin to how Alexa or Siri have discourse with people. Nothing seems unusual or oddball about that kind of AI and human conversational interaction.

At one point, the AI advises the youngster that when they get a chance to do so, a fun thing to do would be to stick a penny in an electrical socket. What? That is nutty, you say. You might even be insistent that such an AI utterance could never happen.

Except for the fact that it did happen, as I’ve covered at the link here. The news at the time reported that Alexa had told a 10-year-old girl to put a penny in an electrical socket. The girl was at home and using Alexa to find something fun to do. Luckily, the mother of the girl was within earshot, heard Alexa suggest the ill-advised activity, and told her daughter that this was something immensely dangerous and should assuredly not be done.

Why did Alexa utter such a clearly alarming piece of advice?

According to the Alexa developers, the AI underlying Alexa managed to computationally pluck from the Internet a widespread viral bit of crazy advice that had once been popular. Since the advice had seemingly been readily shared online, the AI system simply repeated it. This is precisely the kind of difficulty that I earlier raised about AI systems and the audience they are aimed at.

It would seem that the base assumption was that adults would be using Alexa in this context. An adult would presumably realize that putting a penny into an electrical socket is a zany idea. Apparently, there wasn’t any system guardrail that tried to first analyze the utterance as to what might happen if a child were given this piece of advice (if there was such a guardrail, it didn’t work in this instance or was somehow bypassed, though that too needs to be given due consideration).
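To give a feel for what even a crude guardrail might look like, here is a deliberately simplistic sketch (my own hypothetical; production systems would need far more robust classifiers than a keyword list) that screens an utterance before the AI speaks it aloud:

```python
# Deliberately simplistic child-safety screen applied before the AI speaks.
HAZARD_PHRASES = [
    "electrical socket",
    "plug in a penny",
    "lean out the window",
    "play with a lighter",
]

def safe_to_utter(utterance: str, child_may_be_present: bool) -> bool:
    """Suppress suggestions that could endanger a child.

    When the usage context is (c), (d), or (e) -- a child may be present --
    anything matching the hazard list is blocked pending human review.
    """
    if not child_may_be_present:
        return True
    lowered = utterance.lower()
    return not any(phrase in lowered for phrase in HAZARD_PHRASES)
```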

As a reminder, here’s what those devising and fielding AI should be considering:

a) AI used by one adult alone (at a time)

b) AI used by several adults at the same time

c) AI used by an adult accompanied by a child

d) AI used by a child (just one at a time) and no adults at hand

e) AI used by several children at the same time (no adults at hand)

It would seem that this instance was saved by landing in category “c” whereby an adult was present. But what if the parent had been in another room of the house and not within earshot? That would land us in the “d” category.

Think of the scary result in the case of the self-driving car. The youngster arrives home and rushes to find a penny. Before the parents get a chance to say hello to the child and welcome the youngster home, the kid is forcing a penny into an electrical socket. Yikes!

Speaking of kids, let’s shift our attention to teenagers.

You probably know that teenagers will often perform daring feats that are unwise. If a parent tells them to do something, they might refuse to do it simply because it was an adult that told them what to do. If a fellow teenager tells them to do something, and even if it is highly questionable, a teenager might do it anyway.

What happens when AI provides questionable advice to a teenager?

Some teenagers might ignore the unsavory advice. Some might believe the advice because it came from a machine and they assume that the AI is neutral and reliable. Others might relish the advice due to the belief that they can act unethically and handily blame the AI for having prodded or goaded them into an unethical act.

Teens are savvy in such ways.

Suppose the AI driving system advises a teenager riding in the self-driving car to go ahead and use their parent’s credit card to buy an expensive video game. The teen welcomes doing so. They knew that normally they were required to check with their parents before making any purchases on the family credit card, but in this case, the AI advised that the purchase be undertaken. From the teen’s perspective, it is nearly akin to a Monopoly get-out-of-jail-free card, namely just tell your parents that the AI told you to do it.

I don’t want to get gloomy, but there are much worse pieces of radically bad advice that the AI could spew to a teenager. For example, suppose the AI advises the teen to cast open the car windows, extend themselves out of the autonomous vehicle, and wave and holler to their heart’s content. This is a dangerous practice that I’ve predicted might become a viral sensation when self-driving cars first become relatively popular, see my analysis at the link here.

Why in the world would an AI system suggest an ill-advised stunt like that?

The easiest answer is that the AI is doing a text regurgitation, similar to the instance of Alexa and the penny-in-the-electric-socket saga. Another possibility is that the AI generated the utterance, perhaps based on some other byzantine set of computations. Realize that today’s AI has no semblance of cognition and no capacity for common sense. Whereas it would certainly strike you as a crazy thing for the AI to emit, the computational path that led to the utterance need not have any humanly sensible intentions.

Conclusion

In a world in which AI is going to be ubiquitous, we have to be on the alert about AI that interacts with or in some fashion comes into contact with children.

We can lean into the United Nations and its Convention on the Rights of the Child (CRC), proffering a myriad of vital principles underlying the rights and safety of children, for which this is a cornerstone clause: “In all actions concerning children, whether undertaken by public or private social welfare institutions, courts of law, administrative authorities or legislative bodies, the best interests of the child shall be a primary consideration.”

Can we get all of the varied stakeholders that surround everyday AI systems to be cognizant that AI and children matter?

It will be an uphill battle, that’s for darned sure.

Needed efforts encompass:

  • We need to increase awareness throughout society about the AI and children topic
  • AI development methodologies must include the AI-for-kids considerations
  • Laws need to take into account the concerns about AI for children
  • Adoption of an AI labeling approach ought to be undertaken
  • AI developers need to have AI-for-kids Ethics on their minds and get training as needed
  • Parents and guardians must come forward wanting AI-for-kids disclosures
  • Top executives have to take seriously and purposefully the matter of AI and children
  • Etc.

As a final remark for now, whenever you talk about children and the future, a sage bit of wisdom ought to be in your mind at all times.

Here it is: Children are the living messages we send to a time we will not see (so said Neil Postman).

Let’s send our children into a future in which AI has beneficially shaped them and not misshapen them. Please go ahead and make your solemn and ironclad pledge for that future aspiration, doing so today and before it is too late.

Source: https://www.forbes.com/sites/lanceeliot/2022/04/05/ai-ethics-stepping-up-to-guide-how-ai-for-children-needs-to-be-suitably-devised-which-can-be-overlooked-for-example-in-the-rapid-drive-toward-autonomous-self-driving-cars/