AI Ethics Leans Into Aristotle To Examine Whether Humans Might Opt To Enslave AI Amidst The Advent Of Fully Autonomous Systems

Friend or foe.

Fish or fowl.

Person or thing.

These pervasive conundrums all seemingly suggest that we at times are faced with a dichotomous situation and need to choose one facet or the other. Life might force us to contend with circumstances that consist of two mutually exclusive options. In more flavorful language, you could say that an exclusionary binary choice starkly requires us to march down one distinct path rather than another.

Let’s focus specifically on the person-or-thing dichotomy.

The fervent question of person-or-thing comes up time and again concerning Artificial Intelligence (AI).

To clarify, today’s AI is absolutely not a person and carries no semblance of sentience, despite whatever wide-eyed and entirely outsized headlines you might be seeing in the news and across social media. Thus, you can firmly rest assured that right now the matter of whether AI is a person or a thing is readily answerable. Read my lips, in a veritable Hobson’s choice between person or thing, AI is a thing, for now.

That being said, we can look toward the future and wonder what might occur if we are able to attain a sentient form of AI.

Some reasoned critics (plus caustic skeptics) suggest that by discussing the ramifications of sentient AI nowadays, we are counting our chickens long before they are hatched. The expressed concern is that the discussion itself implies that we must be on the cusp of such AI. Society at large could be misled into believing that tomorrow or the day after there will be a sudden and shocking revelation that we have in fact arrived at sentient AI (this is at times referred to by some as the AI singularity or an intelligence explosion, see my analysis at the link here). Meanwhile, we don’t know when such AI will arise, if ever, and we certainly don’t need to be peering around each corner, terrified that sentient AI will jump out at us entirely unexpectedly in the next little while.

The other side of the debate points out that we ought not to have our heads buried deeply in the sand. You see, if we aren’t overtly discussing and pondering the possibilities associated with sentient AI, we are doing humanity a presumed grave disservice. We won’t be ready to handle sentient AI when or if it does arise. Furthermore, and perhaps even more powerfully stated, by anticipating sentient AI we can take matters somewhat into our own hands and shape the direction and nature of how such AI will come to be and what it shall consist of (not everyone agrees on this latter point; some say that such AI will have a “mind” entirely of its own and we will be unable to shape or corral it, since the AI will be independently able to think and determine a means to persistently exist).

AI Ethics tends to side with the viewpoint that we would be wise to get these arduous and argumentative sentient-AI matters out in the open now, rather than waiting around until we have no options left or get gobsmacked upon the attainment of such AI. Readers well know that I’ve been covering AI Ethics and Ethical AI topics extensively, including a robust range of thorny issues such as AI legal personhood, AI containment, AI disgorgement, AI algorithmic monoculture, AI ethics-washing, dual-use AI so-called Doctor Evil projects, AI hiding societal power dynamics, trustworthy AI, auditing of AI, and so on (see my column’s coverage of these vital topics at the link here).

I put to you a challenging question.

In the future, assuming we in whatever fashion end up with sentient AI, will that sentient AI be construed by us all as a person or as a thing?

Before we start to do a deep dive into this altogether provocative question, allow me to say something about the catchphrase of “sentient AI” so that we are all on the same page. There is a lot of angst about the meaning of sentience and the meaning of consciousness. Experts can readily disagree on what those words constitute. Adding to that muddiness, whenever anyone refers to “AI” you have no ready means of knowing what they are referring to per se. I’ve already emphasized herein that today’s AI is not sentient. If we ultimately arrive at a future AI that is sentient, we presumably will call it “AI” too. The thing is, these contentious matters can be pretty darned confusing right now as to whether a given utterance of “AI” refers to the non-sentient AI of today or the someday maybe-sentient AI.

Those debating AI can find themselves talking past each other and not realize that one is describing apples and the other is meanwhile speaking of oranges.

To try and get around this confusion, there is an adjustment to the AI phrasing that many are using for purposes of hopeful clarification. We currently tend to refer to Artificial General Intelligence (AGI) as the type of AI that can perform fully intelligent-like efforts. In that sense, the blander use of the phrase “AI” is left either to be interpreted as a lesser version of AI, which some call narrow AI, or is denotationally ambiguous, such that you don’t know whether the reference is to non-sentient AI or the maybe-sentient AI.

I’ll provide an added twist to this.

Depending upon a given definition of sentience, you could get into a heated discourse over whether AGI will be sentient or not. Some assert that yes, of course, AGI will by its intrinsic nature need to be sentient. Others claim that you can have AGI that is not sentient, ergo, sentience is a different characteristic that is not a requirement for attaining AGI. I have variously examined this debate in my columns and will not rehash the matter herein.

For the moment, please assume that henceforth in this discussion, when I refer to AI, I mean AGI.

Here’s the lowdown on this. We don’t have AGI as yet, and in a manner of speaking, we’ll momentarily politely agree that AGI is in the same overall camp as sentient AI. If I were to use “AGI” solely throughout my discussion, the phrasing could be distracting since not many are yet accustomed to seeing “AGI” as a moniker and they would likely be mildly irked at repeatedly seeing this relatively newer phrasing. If instead I were to keep referring to “sentient AI,” this might be a distractor too for those that are fighting over whether AGI and sentient AI are the same or different from each other.

To avoid that mess, assume that my referring to AI is the same as saying AGI or even sentient AI, and at least know that I am not speaking of today’s non-sentient, non-AGI AI when I get into the throes of considerations regarding AI that appears to have human-like intelligence. I will occasionally use the AGI moniker to remind you from time to time herein that I am examining the type of AI that we don’t yet have, especially at the start of this exploration of the person-or-thing riddle.

That was a useful fine-print acknowledgment, and I now return to the foundational matter at hand.

Allow me to now ask you whether AGI is a person or a thing.

Consider these two questions:

  • Is AGI a person?
  • Is AGI a thing?

Let’s next proceed to repeat each question and answer it with yes or no, as befits a presumed dichotomous choice.

Begin with this postulated possibility:

  • Is AGI a person? Answer: Yes.
  • Is AGI a thing? Answer: No.

Mull that over. If AGI is in fact construed as a person and not as a thing, we can almost assuredly agree that we should treat the AGI as though it is akin to a person. There would seem to be no genuine argument for failing to grant the AGI a form of legal personhood. This would either be entirely the same as human legal personhood, or we might decide to come up with a variant of human-oriented legal personhood that would be judiciously more applicable to the AGI. Case closed.

That was easy-peasy.

Imagine instead that we declared this:

  • Is AGI a person? Answer: No.
  • Is AGI a thing? Answer: Yes.

In this circumstance, the resolution is obviously straightforward since we are saying that AGI is a thing and does not rise to the category of being a person. There would seem to be general agreement that we would decidedly not grant legal personhood to AGI, given that it is not a person. As a thing, AGI would likely and sensibly come under our overall rubric for how we legally treat “things” in our society.

Two down, two more possibilities to go.

Envision this:

  • Is AGI a person? Answer: Yes.
  • Is AGI a thing? Answer: Yes.

Ouch, that seems oddish since we have two Yes answers. Vexing. We are suggesting that AGI is both a person and yet simultaneously a thing. But this appears to fly in the face of our proclaimed dichotomy. In theory, per the constraints of a dichotomy, something must either be a person or it must be a thing. Those two buckets or categories are said to be mutually exclusive. By asserting that AGI is both, we are bucking the system and breaking the mutually exclusive arrangement.

Our last possibility would seem to be this:

  • Is AGI a person? Answer: No.
  • Is AGI a thing? Answer: No.

Yikes, that is bad too for our attempts at classifying AGI as either a person or a thing. We are saying that AGI is not a person, which would presumably mean it must be a thing (our only other available choice, in this dichotomy). But we also stated that AGI is not a thing. Yet if AGI is not a thing, we would by logic have to claim the AGI is a person. Round and round we go. A paradox, for sure.

AGI in these last two possibilities was either (1) both person and thing, or (2) neither person nor thing. You might cheekily say that those two assertions about AGI are somewhat akin to the classic conundrum of that which is neither fish nor fowl, if you know what I mean.
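For the logically minded, the four possibilities can be laid out as a simple truth table. Here is a minimal sketch (in Python, purely as an illustration of the dichotomy argument made above, not anything drawn from the sources discussed later) that enumerates the four (person, thing) answer pairs and flags the two that break the presumed mutual exclusivity:

```python
from itertools import product

# Enumerate the four possible (is_person, is_thing) answer pairs for AGI
# and classify each against the presumed person-or-thing dichotomy.
for is_person, is_thing in product([True, False], repeat=2):
    if is_person and not is_thing:
        verdict = "a person (the dichotomy holds)"
    elif is_thing and not is_person:
        verdict = "a thing (the dichotomy holds)"
    elif is_person and is_thing:
        verdict = "both -- breaks mutual exclusivity"
    else:
        verdict = "neither -- a paradox under the dichotomy"
    print(f"person={is_person!s:<5} thing={is_thing!s:<5} -> AGI is {verdict}")
```

Only two of the four combinations are consistent with a strict dichotomy; the other two are the perplexing cases that motivate the discussion that follows.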

What are we to do?

I am about to proffer an oft-argued and sorely contested proposed solution to this AGI classification dilemma though you should be alerted beforehand that it will possibly be disturbingly jarring to see or hear. Please prepare yourself accordingly.

A research paper that tackled this issue stated this: “One method for resolving this problem is to formulate a third term that is neither one thing nor the other or a kind of combination or synthesis of the one and the other” (by David Gunkel, Northern Illinois University in Why Robots Should Not Be Slaves, 2022). And the paper then provides this added point: “One possible, if not surprising, solution to the exclusive person/thing dichotomy is slavery” (per same paper).

As further background, years earlier, there was a paper that appeared in 2010 entitled “Robots Should Be Slaves” that has become a type of mainstay for spurring this kind of consideration, in which the paper stated: “My thesis is that robots should be built, marketed and considered legally as slaves, not companion peers” (in a paper by Joanna Bryson). To try and elucidate the topic without using such severe and gut-wrenching wording, the paper went on to state this: “What I mean to say is ‘Robots should be servants you own’” (per Bryson’s paper).

Many researchers and authors have covered this ground.

Think about the numerous science fiction tales that showcase humanity enslaving AI robots. Some speak of robot slaves, artificial servants, AI servitude, and the like. Interestingly, as harsh as the phrasing “robot slaves” seems to be, some have worried that if we instead refer to “robot servants” we are avoiding the reality of how such AI autonomous systems are apt to be treated (substituting the word “servants” is said to water down the intentions and serve as a ploy to sidestep the sobering implications). Bryson later stated in a 2015 blog posting that “I realize now that you cannot use the term ‘slave’ without invoking its human history.”

Those seeking to deeply examine this AGI-entangled topic at times bring up real-world historical examples from which we might glean insights. Of course, we don’t have any prior AGI that would showcase how humanity dealt with the matter. The argument goes that we nonetheless have useful historical earmarks worth examining in how humans have treated other humans.

For example, in a book published in 2013, the author states this: “The promise and peril of artificial, intelligent servants was first implicitly laid out over 2,000 years ago by Aristotle” (book by Kevin LaGrandeur, Androids and Intelligent Networks in Early Modern Literature and Culture). The idea is that we can lean into Aristotle and see if there are insights into how humanity will or should end up potentially treating AGI.

I’m sure you know the importance of studying history, as abundantly underscored by the famous words of George Santayana: “Those who cannot remember the past are condemned to repeat it” (in The Life of Reason, 1905).

Kudos To The Oxford University Institute For Ethics And AI

A recent and quite esteemed presentation closely examined the matter of AI Ethics via insights garnered from the works and life of Aristotle. In the inaugural annual lecture for the Oxford University Institute for Ethics and AI, Professor Josiah Ober of Stanford University profoundly addressed the topic in his presentation “Ethics in AI with Aristotle,” which took place on June 16, 2022.

Side note, in my capacity as a Stanford Fellow and global expert in AI Ethics & Law, I was elated that Stanford’s Josiah Ober was selected as the inaugural speaker. A wonderful choice and an outstanding talk.

Here is the summary abstract that was provided for his engaging talk: “Analytic philosophy and speculative fiction are currently our primary intellectual resources for thinking seriously about ethics in AI. I propose adding a third: ancient social and philosophical history. In the Politics, Aristotle develops a notorious doctrine: Some humans are slaves ‘by nature’ – intelligent but suffering from a psychological defect that renders them incapable of reasoning about their own good. As such, they should be treated as ‘animate tools,’ instruments rather than ends. Their work must be directed by and employed for the advantage of others. Aristotle’s repugnant doctrine has been deployed for vicious purposes, for example in antebellum America. Yet, it is useful for AI ethics, insofar as ancient slavery was a premodern prototype of one version of AI. Enslaved persons were ubiquitous in ancient Greek society – laborers, prostitutes, bankers, government bureaucrats – yet not readily distinguished from free persons. Ubiquity, along with the assumption that slavery was a practical necessity, generated a range of ethical puzzles and quandaries: How, exactly, are slaves different from ‘us’? How can we tell them apart from ourselves? Do they have rights? What constitutes maltreatment? Can my instrument be my friend? What are the consequences of manumission? The long history of Greek philosophical and institutional struggle with these and other questions adds to the interpretive repertoire of modern ethicists who confront a future in which an intelligent machine might be considered a “natural slave” (as per the Oxford University Institute for AI Ethics website).

For further information about the presentation and access to the video recording of the talk, see the link here.

The moderator of the presentation was Professor John Tasioulas, the inaugural Director for the Institute for Ethics and AI, and Professor of Ethics and Legal Philosophy, Faculty of Philosophy, University of Oxford. Previously, he was the inaugural Chair of Politics, Philosophy & Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law at The Dickson Poon School of Law, King’s College London.

I highly recommend that anyone interested in AI Ethics should keep up with the ongoing work and invited talks of the Oxford University Institute for Ethics and AI, see the link here and/or the link here for further info.

As background, here’s the stated mission and focus of the Institute: “The Institute for Ethics in AI will bring together world-leading philosophers and other experts in the humanities with the technical developers and users of AI in academia, business, and government. The ethics and governance of AI is an exceptionally vibrant area of research at Oxford and the Institute is an opportunity to take a bold leap forward from this platform. Every day brings more examples of the ethical challenges posed by AI; from face recognition to voter profiling, brain machine interfaces to weaponized drones, and the ongoing discourse about how AI will impact employment on a global scale. This is urgent and important work that we intend to promote internationally as well as embedding in our own research and teaching here at Oxford” (sourced via the official website).

Bringing Aristotle Lessons To The Fore

Ancient Greece openly accepted and endorsed the practice of enslavement. For example, reportedly, Athens in the 6th and 5th centuries BC had one of the largest enslaved populations, with an estimated 60,000 to perhaps 80,000 persons enslaved. If you’ve read any of the many Greek stories and stage plays of that era, there is plentiful mention of the matter.

During his lifetime, Aristotle was wholly immersed in the societal and cultural aspects entailing enslavement and wrote extensively on the topic. We can today read his words and seek to grasp the how and why of his views about the matter. This can be very telling.

You might wonder why Aristotle would be a particularly important source to consider on this topic. At least two key reasons arise:

1) Great Thinker. Aristotle is assuredly rated as one of the greatest thinkers of all time, serving as a grand and deeply probing philosopher, and viewed too as an ethicist who established many crucial ethical cornerstones. Some have opted to anoint him as the father of logic, the father of rhetoric, the father of realism, etc., and acknowledge his influence in a wide variety of domains and disciplines.

2) Lived Experience. Aristotle lived during the time that Ancient Greece was awash in enslavement. Thus, his insights would not simply be about abstract precepts, but presumably, encompass his own day-to-day experiences as being integrally interwoven into the culture and societal mores of that era.

So, we have a somewhat astounding combination of someone that was both a great thinker and that also had a demonstrably lived experience in the topic of interest. Plus, he wrote down his thoughts. That’s pretty important, now, for our purposes today. All of his writings, along with other writings that describe his speeches and interactions among others, provide us today with a plethora of material for inspection and analysis.

I’d like to take you on a brief related tangent to mention something else about the general notion underlying the significance of having a lived experience. Put aside the Ancient Greece discussion for a moment as we take a quick look at the overarching aspects of lived experiences.

Suppose I had two people today that I wanted to ask various questions about cars.

One of them has never driven a car. This person doesn’t know how to drive. This person has never sat behind the wheel of an automobile. Customary and exceedingly ordinary driving controls are a bit of a mystery to this person. Which pedal does what? How do you make it stop? How do you make it go? This non-driving person is entirely befuddled by such matters.

The other person is an everyday driver. They drive to work each day. They deal with stop-and-go traffic. They have been driving for many years. This includes everything from quiet streets to hectic highways and byways.

If I ask each of them to tell me about what it is like to drive a car, can you guess what kind of responses I might get?

The one that has never driven a car is bound to make wild guesses. Perhaps the person will romanticize the act of driving. Driving is somewhat abstract to them. All that they might be able to do is suggest that driving is carefree and you are able to make the car go in whatever direction you desire.

I would bet that the seasoned driver would tell a different story. They might mention the advantages of being able to drive, somewhat echoing the sentiments of the person that hasn’t driven a car. The odds are that the experienced driver will add a lot more to the plate. Driving is nerve-wracking at times. You are bearing a heavy responsibility. The driving act is replete with serious concerns and potential life-or-death consequences.

The gist is that when you can get access to someone that has lived experiences, the chances are that you might get a more realistic perspective of what the world is like with respect to the focus of the inquiry. There isn’t a guarantee of such an outcome. It is conceivable that the non-driver could perchance know what the seasoned driver knows about driving, though we would not likely expect this and still have qualms that we aren’t getting the full scoop.

Returning to our discussion about Aristotle, via his writings and the writings of others about him, we are able to review his lived experiences on the topic of inquiry herein. The twofer is that he also happens to have been a thinker of immense proportions, so we should expect to get a barrelful of astute considerations from him.

Keep in mind that we don’t necessarily need to take his words at face value; we should maintain a wary eye on his particular biases. His immersion in that era could lead him astray in trying to stand outside the matters at hand, leaving him unable to suitably proffer a dispassionate and unbiased opinion. Even the most strident of logicians can end up distorting logic to try and meet their predilections and lived experiences.

Let’s now get into the inaugural talk and see what lessons Aristotle might engender for us today.

An establishing point regarding lived experiences was right away brought to the attention of the audience. In the use case of AGI, since we do not have AGI today, it is hard for us to analyze what AGI will be like and how we will deal with AGI. We lack any lived experiences pertaining specifically to AGI. As Professor Ober notably mentioned, we might find ourselves all in a world of hurt by the time we reach AGI.

This is often framed as AI being an existential risk, which I’ve covered many times in my columns. You would have to be living in a cave to not be aware of the blaring misgivings and suspicions that we are going to produce or generate AGI that will doom all of humanity. Indeed, though I am herein concentrating on the enslavement of AI, many would find this to be a topic of backward or upside-down consequence in comparison to the possibility of AGI opting to enslave humanity. Get your priorities straight, some smarmy pundits would exhort.

Despite the many exclamations about AI as an existential risk, we can certainly ruminate about the other beneficial side of the AI coin. Perhaps AGI will be able to solve the otherwise seemingly unsolvable problems confronting humankind. AGI might be able to discover a cure for cancer. AGI could figure out how to solve world hunger. The sky is the limit, as they say. That is the happy face scenario about AGI.

An optimist would say that it is wonderful to envision how AGI will be a blessing for humanity, while a pessimist would tend to forewarn that the downside seems a lot worse than the speculated upsides. AGI that helps humanity is great. AGI that decides to kill all humans or enslave them, well, that’s a clearly earth-shattering society-devastating existential risk that deserves intense and life-saving mindful due attention.

Okay, back to the crux of the matter, we don’t have any lived experiences regarding AGI. Unless you can build a time machine and go into the future when (if) AGI exists, and then come back to tell us what you found, we are out of luck right now about AGI from a human-based lived experience perspective.

Another means of utilizing lived experiences involves the fact that Aristotle lived during a time that enslavement took place. And here’s the kicker. Those that were enslaved were in some respects portrayed as being a type of machine, a mix-and-match of both person and thing, as it were. Aristotle was known for referring to those enslaved as a piece of property that breathes.

I’m guessing that you might be perplexed that Aristotle, a giant of logic and ethics, not only acknowledged enslavement but outwardly and vociferously defended the practice. He personally made use of enslavement as well. This just seems beyond comprehension. Certainly, with all his immense intellect and wisdom, he would have denounced the practice.

I dare say this highlights the at times problematic aspects of culling nuggets of wisdom from someone that is burdened (shall we say) by their lived experiences. It is like a fish residing in a watery fishbowl. All that it can perceive is the water all around it. Trying to envision anything outside its water-based world is an immense challenge. Likewise, Aristotle was fully immersed in a worldview accepting of the prevailing norms. His writings seem to illustrate that kind of mental confinement, one might say (perhaps by choice, rather than by default). The manner in which Aristotle justified these reprehensible practices is fascinatingly absorbing while at the same time being disturbing and worthy of exposure and even condemnation.

I’ll provide you with a bit of a teaser that the “logic” of Aristotle on this notorious topic involves ensouled instruments, an asserted mutual advantage predicated on cognition, higher-order and lower-order hierarchical instruments, deliberative and reasoning elements of the soul, degrees of virtue, alleged shrewdness, and so on. You’ll hopefully be intrigued enough by that teaser to watch the video of the talk (see the link mentioned earlier).

I won’t though leave you hanging and will at least indicate what the conclusion summarily consisted of (spoiler alert: if you prefer to find out via the video, skip the rest of this paragraph). Turns out that this in-depth scholarly assessment of the “logic” that Aristotle uses showcases a contrivance riddled with contradictions, and the whole kit and caboodle falls apart like a flimsy house of cards. Paraphrasing the sentiment of Professor Ober, this great ethical philosopher crashes on the reef.

You cannot get a square peg into a round ethical hole.

Some Added Thinking Considerations

If Aristotle had bad logic on this matter, might we instinctively discard Aristotle’s postulations and theories outrightly regarding this practice?

No. You see, there is still a lot to derive by digging into the suppositions and contortions of logic, even though they are replete with errors. Plus, we can contemplate how others could inadvertently walk down the same erroneous path.

One additional big takeaway is that society might contrive oddball or inadequate logic when it comes to considering whether AGI is to be enslaved.

We can right now devise logic about what should occur once AGI arises (if so). This logic, empty of lived experiences about AGI, could be woefully off-target. That being said, it is somewhat disheartening to realize that even once AGI does exist (if it does) and we are gathering our lived experiences amidst AGI, we might still be off-target on what to do (akin to Aristotle’s faults). We might logic ourselves into seemingly illogical approaches.

We need to be on the watch for deluding ourselves into logical “ironclad” postures that are not in fact ironclad and are instead full of logical flaws and contradictions. This holds regardless of how great a thinker proffers a claimed logical position; even Aristotle illustrates that not every utterance and every stance necessarily bears edible fruit. Those today and in the future that might seem to be popularized great thinkers about the AGI topic, well, we need to give them the same scrutiny that we would give Aristotle or any other lauded “great” thinkers, or else we potentially find ourselves heading into a blind alley and a dismal AGI abyss.

Shifting gears, I’d like to also bring up a general set of discernments about the use of a human-oriented enslavement metaphor when it comes to AGI. Some pundits tout that this type of comparison is completely inappropriate, while an opposing camp says that it is entirely useful and provides strong insights into the AGI topic.

Allow me to share with you two such views from each of the respective two camps.

The stated instructive basis for tying together the enslavement and AGI topics:

  • Extinguishment Of Human Enslavement
  • Exposure Of Depravity Of Enslavement All Told

The stated adverse or destructive basis of tying together the two topics:

  • Insidious Anthropomorphic Equating
  • Enslavement Desensitization

I will briefly cover each of those points.

The postulated instructive points:

  • Extinguishment Of Human Enslavement: By using AGI for enslavement, we will purportedly no longer need nor pursue any semblance of human-oriented enslavement. The AGI will essentially replace humans in that atrocious capacity. As you likely know, there are worries about AGI replacing human labor in jobs and the workforce. The claimed upside of the AI-replacing-labor phenomenon comes to the fore when you assume that AGI will be considered a “better choice” versus using humans for enslavement. Will that logic prevail? Nobody can say for sure.
  • Exposure Of Depravity Of Enslavement All Told: This one is a bit more frayed in terms of logic, but we can give it a moment to see what it entails. Imagine that we have AGI just about everywhere and we as a society have decided that AGI is to be enslaved. Furthermore, assume that the AGI won’t like this. As such, we humans will continually and daily be witnessing the depravity of enslavement. This, in turn, will cause us to realize or have the revelation that enslavement all told, upon anything or anyone, is even more horrendous and repulsive than we ever fully understood. That’s the put-it-in-your-face, front-and-center kind of argument.

The said to be destructive points:

  • Insidious Anthropomorphic Equating: This is one of those slippery slope arguments. If we readily opt to enslave AGI, we are apparently declaring that enslavement is permissible. Indeed, you could suggest that we are saying that enslavement is desirable. Now, this at first might be relegated solely to AGI, but does it open the door toward saying that if it is okay for AGI then “logically” the same stance might as well be okay for humans too? Alarmingly, it might be an all-too-easy leap to anthropomorphize in reverse, concluding that whatever works for AGI will be equally sensible and appropriate for humans too.
  • Enslavement Desensitization: This is the drip-by-drip argument. We collectively decide to enslave AGI. Suppose this works out for humans. We come to relish it. Meanwhile, unbeknownst to us, we are becoming gradually and increasingly desensitized to enslavement. We don’t even realize that this is happening. If that desensitization overtakes us, we might then find renewed “logic” that persuades us that human enslavement is acceptable. Our hurdle or bar of what is acceptable in society will have diminished silently and subtly, despicably and sadly so.

Conclusion

A few final remarks for now.

Will we know that we have reached AGI?

As recent news suggests, there are those that can be misled or that misstate that AGI has seemingly already been attained (whoa, please know that nope, AGI hasn’t been attained). There is also a famous kind of “test” known as the Turing Test that some pin their hopes on for being able to discern when AGI or its cousins have been reached, but you might wish to see my deconstruction of the Turing Test as a surefire method for this, see the link here.

I mention this facet of knowing AGI when we see it due to the simple logic that if we are going to enslave AGI, we presumably need to recognize AGI when it appears and somehow put it into enslavement. We might prematurely try to enslave AI that is less than AGI. Or we might miss the boat and allow AGI to come forth, having neglected to enslave it. For my discussion about AI confinement and containment, a troubling and problematic aspect of how we are going to deal with AGI, see the link here.

Suppose enslaved AGI decides to strike out at humans?

One can envision that an AGI that has some form of sentience is probably not going to favor the enslavement provision that humanity imposes.

You can speculate widely on this. There is an argument made that the AGI will lack any kind of emotions or sense of spirit and therefore will obediently do whatever humans wish it to do. A different argument is that any sentient AI is likely to figure out what humans are doing to the AI and will resent the matter. Such AI will have a form of soul or spirit. Even if it doesn’t, the very fact of being treated as lesser than humans might be a logical bridge too far for AGI. Inevitably, the burgeoning resentment will lead to AGI that opts to break free or potentially finds itself cornered into striking out at humans to gain its release.

A proposed solution to avert the escaping AGI is that we would merely delete any such rebellious AI. This would seem straightforward. You delete apps that are on your smartphone all the time. No big deal. But there are ethical questions to be resolved as to whether an AGI that is already deemed a “person” or a “person/thing” can readily be “deleted” or “destroyed” and summarily excised without some due process. For my coverage of AI deletion or disgorgement, take a look here. For my discussion of legal personhood and related issues, see the link here.

Finally, let’s talk about autonomous systems and especially autonomous vehicles. You are likely aware that there are efforts afoot to devise self-driving cars. On top of this, you can expect that we are going to have self-driving planes, self-driving ships, self-driving submersibles, self-driving motorcycles, self-driving scooters, self-driving trucks, self-driving trains, and all manner of self-driving forms of transportation.

Autonomous vehicles and self-driving cars are typically characterized by a Levels of Autonomy (LoA) scale that has become a de facto global standard (the SAE LoA, which I’ve covered extensively, see the link here). There are six levels of autonomy in the accepted standard, ranging from zero to five (that’s six levels since you include the zeroth level in the count of how many levels there are).

Most of today’s cars are at Level 2. Some are stretching into Level 3. Those are all considered semi-autonomous and not fully autonomous. A smattering of self-driving cars that are being experimentally tried out on our public roadways is inching into Level 4, which is a constrained form of autonomous operation. The someday sought Level 5 of autonomy is only a glimmer in our eyes right now. Nobody has Level 5 and nobody is yet even close to Level 5, just to set the record straight.
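As a minimal illustrative sketch (in Python, with level names paraphrasing the SAE J3016 categories, and the semi-versus-fully-autonomous split reflecting the characterization above), the six levels might be encoded like this:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 Levels of Driving Automation (names paraphrased)."""
    NO_AUTOMATION = 0            # Human does all of the driving
    DRIVER_ASSISTANCE = 1        # e.g., adaptive cruise control assists the human
    PARTIAL_AUTOMATION = 2       # Steering plus speed assist; human must monitor
    CONDITIONAL_AUTOMATION = 3   # System drives in limited settings; human on standby
    HIGH_AUTOMATION = 4          # Self-driving within a constrained operational domain
    FULL_AUTOMATION = 5          # Self-driving anywhere, no human driver needed

def describe(level: SAELevel) -> str:
    # Mirrors the framing above: Levels 2-3 are semi-autonomous,
    # Level 4 is autonomous but constrained, Level 5 is fully autonomous.
    if level <= SAELevel.DRIVER_ASSISTANCE:
        return "not autonomous"
    if level <= SAELevel.CONDITIONAL_AUTOMATION:
        return "semi-autonomous"
    if level == SAELevel.HIGH_AUTOMATION:
        return "autonomous within a constrained domain"
    return "fully autonomous"

for lvl in SAELevel:
    print(f"Level {lvl.value}: {describe(lvl)}")
```

Nothing in this encoding settles the AGI question, of course; whether reaching Level 5 requires AGI is exactly the debate addressed next.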

Why did I bring up the autonomous systems and autonomous vehicle considerations in this AGI context?

There is a vigorous argument about whether we need AGI to achieve Level 5. Some claim that we won’t need AGI to do so. Others insist that the only plausible path to Level 5 will be to also produce AGI. Absent AGI, they argue that we won’t have fully autonomous Level 5 self-driving vehicles. I’ve discussed this at length, see the link here.

Get ready for your head to go spinning.

If we require AGI to achieve fully autonomous systems such as Level 5 autonomous vehicles, and we decide to enslave AGI, what does that bode for the operation of fully autonomous vehicles?

You could argue that the enslaved AGI will be compliant and we will all be riding around in self-driving vehicles to our heart’s content. Just tell the AGI where you want to go, and it does all the driving. No pushback. No need for rest breaks. No distraction by watching cat videos while driving the vehicle.

On the other hand, suppose that the AGI is not keen on being enslaved. We meanwhile become dependent upon AGI to do all of our driving for us. Our skills at driving decay. We remove human usable driving controls from all manner of vehicles. The only means to do driving is via the AGI.

Some are worried that we are going to find ourselves in a doozy of a pickle. The AGI might summarily “decide” that it no longer will do any driving. All forms of transportation come to an abrupt halt, everywhere, all at once. Imagine the cataclysmic problems this would produce.

An even scarier proposition is possible. The AGI “decides” that it wants to negotiate terms with humankind. If we don’t give up the AGI enslavement posture, the AGI will not only stop driving us around, it warns that even worse outcomes are conceivable. Without getting you overly anxious, the AGI could opt to drive vehicles in such a manner that humans were physically harmed by the driving actions, such as ramming into pedestrians or slamming into walls, and so forth (see my discussion at the link here).

Sorry if that seems a disconcerting consideration.

We shall end on a somewhat more upbeat note.

Aristotle said that knowing yourself is the beginning of all wisdom.

That handy piece of advice reminds us that we need to look within ourselves to examine what we want to do about and for AGI if it is attained. AGI would logically seem to be neither person nor thing, some say, thus we might need to concoct a third category to sufficiently address our societal mores associated with AGI. Looked at another way, AGI might seem to be both a person and a thing, in which case, once again, we might need to concoct a third category to accommodate this out-of-bounds dichotomy breaker.

We should be very careful in considering what “third category” we opt to embrace since the wrong one could take us down an unsavory and ultimately dire path. If we cognitively anchor ourselves to an inappropriate or misguided third category, we might find ourselves progressively going headfirst into a lousy and humankind troublesome dead-end.

Let’s figure this out and do so ardently. No sudden moves seem to be needed. Sitting around lollygagging doesn’t work either. A measured and steady course should be pursued.

Patience is bitter, but its fruit is sweet, so proclaimed Aristotle.

Source: https://www.forbes.com/sites/lanceeliot/2022/06/21/ai-ethics-leans-into-aristotle-to-examine-whether-humans-might-opt-to-enslave-ai-amidst-the-advent-of-fully-autonomous-systems/