You might say that society seems nearly obsessed with indestructibility.
We relish movies and sci-fi stories that showcase seemingly indestructible superhumans. Those of us who are commonplace non-superhuman people dream about magically becoming indestructible. Companies market products claiming that their vaunted goods are supposedly indestructible.
The famous comedian Milton Berle used to tell a pretty funny joke about items that are allegedly indestructible: “I bought my son an indestructible toy. Yesterday he left it in the driveway. It broke my car.” That’s an uproarious side-splitter for those who are endlessly seeking to discover anything that could somehow be contended to be indestructible.
I bring up this rather fascinating topic to cover a matter that is quickly rising as an important consideration amid the advent of Artificial Intelligence (AI). I will pose this contentious, bubbling topic as a simple question that, perhaps surprisingly, has a quite complex answer.
In brief, is AI entirely susceptible to destruction or could there be AI that ostensibly could be asserted as being indestructible or thereabouts?
This is a vital aspect underlying recent efforts dealing with both the legal and ethical ramifications of AI. Legally, as you will see in a moment, doors are opening toward using the destruction of an AI system as a legal remedy for some pertinent unlawful or unethical wrong. Note that the field of AI Ethics is also weighing in on the considered use of destroying AI or the comparable deletion of AI. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Mull this whole conundrum over for a moment.
Should we be seeking to delete or destroy AI?
And, can we do it, even if we wanted to do so?
I’ll go ahead and unpack the controversial topic and showcase some examples to highlight the tradeoffs involved in this mind-bending quandary.
First, let’s get some language on the table to ensure we are singing the same tune. The suitably lofty way to phrase the topic consists of indicating that we are aiming to undertake AI Disgorgement. Some also use the notion of Algorithmic Disgorgement interchangeably. For the sake of discussion herein, I am going to equate the two catchphrases. Technically, you can persuasively argue that they are not one and the same. I think the discussion here can suffice by modestly blurring the difference.
That being said, you might not be readily familiar at all with the word “disgorgement” since it usually arises in a legal-related context. Most law dictionaries depict “disgorgement” as the act of giving up something due to a legal demand or compulsion.
A noted article in the Yale Journal of Law & Technology entitled “Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission” by Rebecca Slaughter, a Commissioner of the Federal Trade Commission (FTC), described the matter this way: “One innovative remedy that the FTC has recently deployed is algorithmic disgorgement. The premise is simple: when companies collect data illegally, they should not be able to profit from either the data or any algorithm developed using it” (August 2021).
In that same article, the point is further made by highlighting some prior akin instances: “This novel approach was most recently deployed in the FTC’s case against Everalbum in January 2021. There, the Commission alleged that the company violated its promises to consumers about the circumstances under which it would deploy facial-recognition software. As part of the settlement, the Commission required the company to delete not only the ill-gotten data but also any facial recognition models or algorithms developed with users’ photos or videos. The authority to seek this type of remedy comes from the Commission’s power to order relief reasonably tailored to the violation of the law. This innovative enforcement approach should send a clear message to companies engaging in illicit data collection in order to train AI models: Not worth it.”
Just recently, additional uses of the disgorgement method have come to the fore. Consider this reporting in March of this year: “The Federal Trade Commission has struggled over the years to find ways to combat deceptive digital data practices using its limited set of enforcement options. Now, it’s landed on one that could have a big impact on tech companies: algorithmic destruction. And as the agency gets more aggressive on tech by slowly introducing this new type of penalty, applying it in a settlement for the third time in three years could be the charm. In a March 4 settlement order, the agency demanded that WW International — formerly known as Weight Watchers — destroy the algorithms or AI models it built using personal information collected through its Kurbo healthy eating app from kids as young as 8 without parental permission” (in an article by Kate Kaye, March 14, 2022, Protocol online blog).
Lest you think that this disgorgement idea is solely a U.S. viewpoint, various assessments of the draft European Union (EU) Artificial Intelligence Act suggest that the legal language therein can be interpreted as allowing for a “withdrawal” of an AI system (i.e., some would say this assuredly amounts to the AI being subject to destruction, deletion, or disgorgement). See my coverage at the link here.
Much of this talk about deleting or destroying an AI system is centered on a particular type of AI known as Machine Learning (ML) or Deep Learning (DL). ML/DL is not the only way to craft AI. Nonetheless, the increasing availability and use of ML/DL has created quite a stir, being at times beneficial and yet at other times abysmal.
ML/DL is merely a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
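To make this computational pattern matching a bit more tangible, here is a minimal sketch in Python (the loan-approval framing, the tiny invented dataset, and the use of the scikit-learn library are purely my illustrative assumptions, not a depiction of any particular fielded system):

```python
# Minimal sketch of ML as computational pattern matching.
# The loan-approval framing and the tiny dataset are invented for
# illustration; scikit-learn is assumed as a stand-in ML library.
from sklearn.linear_model import LogisticRegression

# Historical decision-making data: [income, years_employed] -> approved?
historical_features = [[40, 2], [85, 10], [30, 1], [95, 12], [55, 5], [25, 0]]
historical_decisions = [0, 1, 0, 1, 1, 0]  # 1 = approved, 0 = denied

# "Training" finds mathematical patterns within the old data.
model = LogisticRegression()
model.fit(historical_features, historical_decisions)

# New data arrives; the patterns from the historical data render the decision.
new_applicant = [[60, 4]]
print(model.predict(new_applicant))  # a decision based purely on past patterns
```

There is no understanding in there, just calibrated mathematics applied to fresh inputs.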
AI and especially the widespread advent of ML/DL has gotten societal dander up about the ethical underpinnings of how AI might be sourly devised. You might be aware that when this latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad while simultaneously heralding and promoting the preferable AI For Good.
How does this tend to arise in the case of using Machine Learning?
Well, straightforwardly, if humans have historically been making patterned decisions incorporating untoward biases, the odds are that the data used to “train” ML/DL reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will blindly try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that, even with relatively extensive testing, there will be biases still embedded within the pattern-matching models of the ML/DL.
You could somewhat invoke the famous or infamous adage of garbage-in garbage-out (GIGO). The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
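To see that biases-in, biases-out dynamic in miniature, consider this hedged sketch (the feature names, the invented data, and the choice of a decision tree are all my illustrative assumptions):

```python
# Sketch of biases-in, biases-out: if historical human decisions were
# skewed against group B, the fitted model blindly mimics that skew.
# All data, feature names, and the model choice are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Features: [qualification_score, group] where group 0 = A, 1 = B.
# Note the historical pattern: equally qualified group-B applicants were denied.
features = [[80, 0], [80, 1], [90, 0], [90, 1], [60, 0], [60, 1]]
past_decisions = [1, 0, 1, 0, 0, 0]  # biased human decisions of yesteryear

model = DecisionTreeClassifier().fit(features, past_decisions)

# Two equally qualified new applicants, differing only by group membership:
print(model.predict([[85, 0], [85, 1]]))  # prints [1 0] -- the bias is mimicked
```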
Not good.
This is also why the tenets of AI Ethics have been emerging as an essential cornerstone for those that are crafting, fielding, or using AI. We ought to expect AI makers to embrace AI Ethics and seek to produce Ethical AI. Likewise, society should be on the watch that any AI unleashed or promulgated into use is abiding by AI Ethics precepts.
To help illustrate the AI Ethics precepts, consider the set as stated by the Vatican in the Rome Call For AI Ethics and that I’ve covered in-depth at the link here. This articulates six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users.
As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their five primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should generally be observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
In a moment, I will be coming back to the AI Disgorgement topic and will be pointing out that we need to separate the destruction or deletion of AI into two distinct categories: (1) sentient AI, and (2) non-sentient AI. Let’s set some foundational ground on those two categories so we’ll be ready to engage further in the AI Disgorgement matter.
Please be abundantly aware that there isn’t any AI today that is sentient.
We don’t have sentient AI. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here). To those of you that are seriously immersed in the AI field, none of this foregoing pronouncement is surprising or raises any eyebrows. Meanwhile, there are outsized headlines and excessive embellishment that might confound people into assuming that we either do have sentient AI or that we are on the looming cusp of having sentient AI any coming day.
Please realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
All told, we are today utilizing non-sentient AI and someday we might have sentient AI (but that is purely speculative). Both kinds of AI are obviously of concern for AI Ethics and we need to be aiming toward Ethical AI no matter how it is constituted.
In the case of the AI Disgorgement associated with sentient AI, we can wildly play a guessing game of nearly infinite varieties. Maybe sentient AI will cognitively be like humans and exhibit similar mental capacities. Or we could postulate that sentient AI will be superhuman and go beyond our forms of thinking. The ultimate in sentient AI would seem to be super-intelligence, something that might be so smart and cunning that we cannot today even conceive of the immense thinking prowess. Some suggest that our minds will be paltry in comparison. This super-duper AI will run rings around us in a manner comparable to how we today can outthink ants or caterpillars.
If there turns out to be AI that is sentient, we are possibly going to be willing to anoint such AI with a form of legal personhood, see my analysis at the link here. The concept is that we will provide AI with a semblance of human rights. Maybe not verbatim. Maybe a special set of rights. Who knows?
In any case, you could conjure up the seemingly provocative notion that we cannot just summarily wipe out or destroy sentient AI, even if we can technologically do so. Sentient AI might be construed as a veritable living organism in terms of cognitive capacity and innately having a “right to live” (depending upon the definition of being alive). There might ultimately be a stipulated legal process involved. This includes that we cannot necessarily exercise the “death penalty” upon a sentient AI (whoa, just wait until we as a society get embroiled in that kind of a societal debate).
I doubt that we would be willing to make the same AI Ethical posture for the non-sentient AI. Though some are trying to contend that today’s non-sentient AI ought to be classified as a variant associated with legal personhood, this seems to be a steeply uphill battle. Can a piece of contemporary software that is not sentient be granted legal rights on par with humans or even animals? It sure seems like a stretch (but there are advocates fervently aiming for this, see my coverage at the link here).
Here’s what this all implies.
Assuming we don’t grant today’s non-sentient AI the regal legal anointing of personhood, the choice of deleting or destroying such non-sentient AI would decidedly not be reasonably equated to the destruction of a living organism. The wiping out of non-sentient AI is essentially the same as deleting that dating app from your smartphone or erasing those excess pictures of your trip to a wonderland forest from your laptop. You can delete or “destroy” those bits of data and software without having a guilty conscience and without having overstepped the law in terms of having harmed a sentient living creature.
You might assume that this pronouncement summarily settles the AI Disgorgement conundrum as it relates to non-sentient AI.
Sorry, the world is never as straightforward as it might initially seem.
Get ready for a twist.
Suppose that we created a non-sentient AI that was leading us towards being able to cure cancer. The company that had developed the AI did something else that the firm should not have done and has gotten into serious legal trouble with various governmental authorities. As part of a remedy imposed upon the firm, the company is compelled to completely delete the AI, including all data and documentation associated with the AI.
The government took that company to task and ensured that those wrongdoers can no longer profit from the AI that they had devised. Unfortunately, in the same breath, we have perhaps shot ourselves in the foot because the AI had capabilities that were leading us toward curing cancer. We ended up tossing out the baby with the bathwater, as it were.
The point is that we could have a variety of bona fide reasons to keep AI intact. Rather than deleting it or scrambling it, we might wish to ensure that the AI remains whole. The AI is going to be allowed to perform some of its actions in a limited manner. We want to leverage whatever AI can do for us.
A handy rule would then seem to be that the notion of AI Disgorgement should be predicated on a semblance of context and sensibility as to when this form of remedy is suitably applicable. Sometimes it might be fully applicable, while in other instances not so. You could also try to find ways to split the apple, perhaps keeping some part of the AI that was deemed beneficial while seeking destruction or deletion of the portions that are considered within the remedy’s scope.
Of course, doing a piecemeal deletion or destruction is not a piece of cake either. It could be that the part you want to keep is integrally woven into the part you want to have destroyed. Trying to separate the two could be problematic. In the end, you might have to abandon the deletion and simply agree to allow the whole to remain, or you might have to toss in the towel and destroy the whole kit and kaboodle.
It all depends.
Time to tackle another hefty consideration.
We’ve so far covered the issues underpinning the basis for wanting to bring forth an AI Disgorgement. Meanwhile, we have just now sneaked into that discussion the next important element to consider, namely whether deleting or destroying AI is altogether always feasible.
In the preceding dialogue, we kind of assumed at face value that we can destroy or delete AI if we wanted to do so. The one twist that was mentioned involved trying to separate out the parts of an AI system that we wanted to keep intact versus the parts that we wanted to delete or destroy. That can be hard to do. Even if it is hard to accomplish, we would still be on relatively cogent turf to claim that it inevitably could be technologically attained (we might need to rebuild parts that we destroyed, putting those back into place to support the other part that we didn’t want to destroy).
Slightly change the perspective and ruminate on whether we really always can in fact destroy or delete AI if we wish to do so. Put aside the AI Ethics question and focus exclusively on the technological question of destructive feasibility (I am loath to utter the words “put aside the AI Ethics question” since the AI Ethics question is always a vital and inseparable consideration for AI, but I hope you realize that I am using this as a figure of speech for purposes of directing attention only, thanks).
We’ll make this into two lines of reasoning:
- If we have a sentient AI that we all agree has to be destroyed or deleted, would we technologically have an assured means to do so?
- If we have a non-sentient AI that we all agree has to be destroyed or deleted, would we technologically have an assured means to do so?
I would submit that the answer to both of those questions is a qualified “no” (I’d pretty much be on safe technological ground in saying “no” since there is always a potential chance that we could not destroy or delete the AI, as I will elaborate on next). In essence, a lot of the time the answer would probably be “yes” in the case of non-sentient AI, while in the case of sentient AI the answer is “maybe, but nobody can say either way for sure” due to not knowing what the sentient AI is going to be or even if it will arise.
In the case of sentient AI, there is a myriad of fanciful theories that can be postulated.
If the sentient AI is superhuman or super-intelligent, you can try to argue that the AI would outsmart us humans and not allow itself to be wiped out. Presumably, no matter what we try, this outsized AI will always be a step ahead of us. We might even try to leverage some human-friendly instance of this sentient super-duper AI to destroy another sentient AI that we are otherwise unable to delete via our own methods. Be wary, though, lest the helpful AI later turn evildoer and we are left at the mercy of an AI that we are hence unable to get rid of.
For those of you that prefer a happy face version of the futuristic sentient AI, maybe we theorize that any sentient AI would be willing to get destroyed and want to actively do so if humans wished it so. This more understanding and sympathetic sentient AI would be able to realize when it is time to go. Rather than fighting its own destruction, it would welcome being destroyed when the time comes for such action. Perhaps the AI does the work for us and opts to self-destruct.
The conjecture about sentient AI can roam in whatever direction you dream of. There aren’t particularly any rules about what is possible. One supposes that the realities of physics and other natural constraints would come to bear, though maybe a super-intelligent sentient AI knows of ways to overcome everything we take for granted as reality.
Speaking of reality, let’s shift our attention to the non-sentient AI of today.
You might be tempted to believe that we can always without fail opt to destroy or delete any of today’s AI. Envision that a company has devised an AI system that governmental authorities order be disgorged. The firm is legally required to destroy or delete the AI system.
Easy-peasy, it seems: just press a delete button and, poof, the AI system is no longer around. We do this with no-longer-needed apps and no-longer-wanted data files on our laptops and smartphones. No special computer techie skills are needed. The company can comply with the regulatory order in minutes.
We can walk through the reasons why this presumed ease of AI destruction or deletion is not as straightforward as you might initially assume.
First, a notable question surrounds the exact scope of what is meant when you say that an AI system is to be destroyed or deleted. One facet is the programming code that comprises the AI. Another facet would be any data associated with the AI.
The developers of the AI might have generated many versions while crafting it. Let’s simplify things and say that there is a final version of the code that is the one running and that has become the target for being disgorged. Okay, the company deletes that final version. The deed is done!
But it turns out that those earlier versions are all still sitting around. It might be relatively child’s play to essentially resurrect the now-deleted AI by merely using one of those earlier versions. You take an earlier version, make modifications to bring it up to par, and you are back in business.
An obvious way to try and prevent this kind of deletion skirting would be to stipulate that any and all prior versions of the AI must be destroyed. This would seem to force the company into seriously finding any older versions and making sure those get deleted too.
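Merely finding those older versions can be a sizable chore in its own right. Here’s a rough sketch of the kind of sweep that might be involved (the directories and filename patterns are purely hypothetical):

```python
# Rough sketch of sweeping for prior versions of an AI model's artifacts.
# The directories and filename patterns are purely hypothetical; a real
# sweep would also need to cover backups, archives, and cloud storage.
from pathlib import Path

SEARCH_ROOTS = [Path("/srv/models"), Path("/home/shared/experiments")]
ARTIFACT_PATTERNS = ["model_v*.pkl", "model_v*.onnx", "*.ckpt"]

found = []
for root in SEARCH_ROOTS:
    if not root.is_dir():
        continue  # skip roots that don't exist on this machine
    for pattern in ARTIFACT_PATTERNS:
        found.extend(root.rglob(pattern))

for artifact in found:
    print(f"Deleting prior version: {artifact}")
    artifact.unlink()  # an ordinary delete; more on why that may not suffice below
```

Even a diligent sweep like this only catches the copies that the firm knows about and can reach.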
One twist is that suppose the AI contained a significant portion of widely available open-source code. The developers had originally decided that to build the AI they would not start from scratch. Instead, they grabbed up a ton of open-source code and used it as the backbone for their AI. They do not own the open-source code. They do not control the open-source code. They only copied it into their AI creation.
Now we have a bit of a problem.
The company complies with the order to destroy their AI. They delete their copy of the code and all versions of it that they possess. They delete all of their internal documentation. Meanwhile, they are not able to get rid of the open-source code that comprises (let’s say) the bulk of their AI system since it is not something they legally own and have no direct control over. The firm seems to have done what it could do.
Would you say that the offending AI was in fact destroyed or deleted?
The firm would likely insist that they did so. The governing authority would seem to have a hard time contending otherwise.
They might be able to quickly resurrect the AI by just going out to grab the widely available open-source code and adding back the remaining pieces by doing some programming based on their knowledge of what those portions consisted of. They don’t use any of the prior offending code that they had fully deleted. They don’t use any of the documentation that they had deleted. Voila, they have a “new” AI system that they would argue is not the AI that they had been ordered to disgorge.
I trust that you can see how these kinds of cat and mouse games can be readily played.
There are lots more twists.
Suppose the AI that is to be disgorged was based on the use of Machine Learning. The ML could be a program that the company developed on its own, but more likely these days the ML is an algorithm or model that the firm selected from an online library or collection (there are lots and lots of these readily available).
The firm deletes the instance of the ML that they downloaded and are using. The exact same ML algorithm or model is still sitting in a publicly available online library and is potentially accessible to all comers that want to use it. The governmental authority might have no means to restrict or cause a disgorgement of that online library.
That’s just the start of the difficulties involved in destroying or deleting AI, including for example the use of Machine Learning. As mentioned earlier, ML and DL typically entail feeding data into the ML/DL. If the firm still has the data that they previously used, they could download another copy of the ML/DL algorithm or model from the online library and reconstitute the AI via feeding the data once again into what is essentially the same ML/DL that they had used before.
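Here’s a sketch of how swiftly such a reconstitution might proceed (the model choice is my illustrative assumption, and a bundled sample dataset stands in for the retained or re-gathered training data):

```python
# Sketch of reconstituting a "deleted" ML model. The algorithm was never
# destroyed -- it still ships with the publicly available scikit-learn
# library. The bundled iris dataset stands in for retained or re-gathered
# training data; both choices are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Step 1: re-obtain the very same algorithm from the public library.
model = RandomForestClassifier(n_estimators=100, random_state=42)

# Step 2: reload the training data (retained in-house, or reassembled
# from external sources outside the disgorgement order's reach).
X, y = load_iris(return_X_y=True)

# Step 3: refit. Same algorithm, same data, same seed -- the result is
# functionally the disgorged AI all over again.
model.fit(X, y)
print(model.predict(X[:3]))  # the "destroyed" AI is back in business
```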
You might astutely clamor that the data the firm had been using needs to also be encompassed by the disgorgement order. Sure, let’s assume that this is so.
If the data is entirely within the confines of the firm, they presumably would be able to destroy or delete the data. Problem solved, one would say. But, suppose the data was based on various external sources, all of which are outside the scope of the destruction order since they are not owned by and not controlled by the offending firm.
The crux is that you could grab copies of the data from other external sources, grab a copy of the ML/DL algorithm, and reconstitute the AI system. In some cases, this might be expensive to undertake and could require gobs of time, while in other instances it might be doable in short order. It all depends on various factors such as how much the data needs to be modified or transformed, and the same goes for the parameter settings and training of the ML/DL.
We also need to consider what the meaning of destroying or deleting consists of.
You undoubtedly know that when you delete a file or app from your computer, the chances are that the electronically stored item is not yet fully deleted. Typically, the operating system updates a setting indicating that the file or app is to be construed as having been deleted. This is a convenience if you want to bring back the file or app. The operating system can merely flip the flag to indicate that the once seemingly deleted file or app is now active again.
Even if you have the operating system perform a more determined deletion, there is a likelihood that the file or app still sits somewhere. It might be on a backup storage device. It might be archived. If you are using a cloud-based online service, copies are likely residing there too. Not only would you need to find all of those shadow copies, but you would also need to perform various specialized cybersecurity erasure actions to try and ensure that the bits and bytes of those files and apps are completely written over and in a sense truly deleted or destroyed.
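To showcase the difference between an ordinary delete and a more determined erasure, here’s a best-effort sketch (the artifact filename is hypothetical, and note that journaling file systems, SSD wear-leveling, snapshots, and cloud replication can still defeat even this, which is rather the point):

```python
# Best-effort "true" deletion: overwrite the file's bytes before removing it.
# Even this can be defeated by journaling file systems, SSD wear-leveling,
# snapshots, backups, and cloud replication.
import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    """Overwrite a file's contents with random bytes, then remove it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # clobber the contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # push the overwrite down to the device
    os.remove(path)  # only now drop the directory entry

# Demo with a stand-in artifact (a hypothetical model file):
with open("model_final.pkl", "wb") as f:
    f.write(b"pretend these bytes are a trained AI model")
overwrite_and_delete("model_final.pkl")
```

And, of course, an erasure routine like this cannot reach the shadow copies residing elsewhere.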
Note that I just mentioned the notion of a shadow.
We have at least three types of shadows to be thinking about when it comes to AI disgorgement:
1) Shadow copies of the AI
2) Shadow algorithms associated with the AI
3) Shadow data associated with the AI
Imagine that an order for an AI disgorgement instructs a company to proceed with destroying or deleting the data associated with the AI, but the firm can keep around the algorithm (perhaps allowing this if the algorithm is seemingly nothing more than one that you can find in any online ML library anyway).
It turns out that the algorithm itself can essentially be said to have its own kind of data, such as the particular settings that underpin the algorithm. The effort to train the ML will usually entail having the ML figure out what parameter settings need to be calibrated. If you are only ordered to get rid of the training dataset per se, those other data-related parameter settings are likely still going to remain. This suggests that the AI can be somewhat readily reconstituted, or you could even argue that the AI wasn’t deleted at all and you simply got rid of the earlier-used training data that perhaps you no longer care about anyway. There is also a high chance that a form of imprint remains from the training data, which I’ve discussed at the link here.
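A tiny sketch can show why deleting only the training dataset leaves the learned essence of the AI intact (the data and the model choice are my illustrative assumptions):

```python
# Sketch: a trained model's parameters are themselves a kind of data.
# Deleting the training dataset does not delete what was learned from it.
# The data and model choice are invented for illustration.
from sklearn.linear_model import LinearRegression

X_train = [[1], [2], [3], [4]]
y_train = [2.0, 4.1, 5.9, 8.2]

model = LinearRegression().fit(X_train, y_train)

del X_train, y_train  # "disgorge" the training data

# The imprint of that data persists in the calibrated parameter settings:
print(model.coef_, model.intercept_)   # the learned settings remain
print(model.predict([[5]]))            # and the model still works just fine
```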
Getting rid of the training data might also be challenging if the data comes from a variety of third-party sources. Sure, you might be able to force the company to delete their in-house instance of the compiled data, but if the data exists at those other sources beyond their scope, the same data could likely be reassembled. This might be costly or might be inexpensive to do, depending upon the circumstances.
Throughout this discussion, we have focused on the notion of having a particular company be the target for undertaking an AI disgorgement. This might be satisfying and serve as an appropriate remedy associated with that company. On the other hand, this is not necessarily going to somehow eradicate or destroy the AI as it might exist or be reconstituted beyond the scope of the targeted company.
The AI might be copied to zillions of other online sites that the company has no means to access and cannot force a deletion to take place. The AI might be rebuilt from scratch by others that are aware of how the AI works. You could even have former employees that leave the company and opt to reuse their AI development skills to construct essentially the same AI elsewhere, which they would argue is based on their own knowledge and skills and thus not an infringing copy subject to the AI disgorgement order.
A perhaps apt analogy to the AI disgorgement troubles might be the advent of computer viruses.
The chances of hunting down and deleting all copies of a widely spread computer virus are generally slim, especially due to the legal questions of where the virus might be residing (such as across international borders) and the technological trickery of the computer virus trying to hide (I’ve discussed the emergence of AI-based polymorphic computer viruses that are electronic self-adapting shape-shifters).
Furthermore, compounding the challenges, there is always the presumed capability of constructing the same or roughly equivalent computer virus by those that are well-versed in the design and crafting of computer viruses all told.
AI Disgorgement is a seemingly handy idea and a potentially viable tool, but the devil is in the details.
At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase the applicability of AI Disgorgement in today’s world. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the applicability of AI Disgorgement, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.
Self-Driving Cars And AI Disgorgement
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and AI Disgorgement.
A company is developing an AI-based self-driving car and opts to make use of public roadways for doing tryouts of the autonomous vehicle. For a while, the fielded self-driving cars are doing fine. People are going for rides in the fleet. All seems good.
Unfortunately, one of the self-driving cars rams into a bicyclist. Local city leaders and the populace at large are quite upset. Two days later, another bicyclist gets rammed by another one of the self-driving cars from this same fleet. A vitriolic uproar ensues. For similar scenarios that will potentially someday confront city leaders and communities, see the study I co-authored with a Harvard faculty member at the link here.
A government authority declares that the company shall be compelled to enact an AI disgorgement. The AI driving system is to be utterly deleted or destroyed. As a side note, this is a somewhat artificial scenario since there would be other options of what might be done and those would presumably be first considered. The disgorging of AI would seem to be the last resort and also would undoubtedly be legally fought stridently by the self-driving car firm. Please accept the imagined setting for purposes of concentrating on how AI disgorgement might be enacted.
Assume that the company accedes to doing the AI disgorgement. They go ahead and delete the latest copy of the AI driving system. The deed is done.
Well, as earlier identified, the deed is not at all yet completed. The firm deletes all prior versions of the AI driving system. They scour the company servers and delete copies there. They scour the cloud-based servers and delete the copies there. They go to their backups and archives and delete those. They ask that the cloud provider do the same for any backups or archives on the cloud that contain the AI driving system code.
Are we done?
Nope.
There was a lot of data used to train the AI driving system. Vast stores of collected data that contained video and pictures of roadway scenes were used to train the AI driving system. Those are found and deleted. A slew of other data files were all integral to devising the AI driving system. Those are also uncovered and deleted. Backups and archives are dug up and likewise deleted.
The AI disgorgement included all of the AI programs, data, and documentation. So, the company tries to discover all of the online documentation and deletes those documents, including backups, archives, and the like. In addition, the AI developers and others involved in the AI driving system are asked to provide their handwritten notes. Those are then shredded and put into the trash.
Whew, the AI no longer exists.
Kind of.
The AI driving system was based on readily available open-source code. The company has no ability to delete or destroy the open-source code per se (they are only able to do so for the copies they held). There is also an ML/DL component of the AI driving system. This ML/DL is based on algorithms licensed from a cloud-based library. The company can only delete the instances that they created and has no means of contending with the cloud-based library. Training data that was used for the ML/DL consists of video and pictures that are readily found on other websites and in specialized openly available datasets.
In terms of the deleted documentation about the AI driving system, this went generally well, though there are likely still pockets of documents here or there that some of the AI developers have on their personal laptops and didn’t fess up to having. In any case, the AI developers in their heads know what the AI consisted of and are able to quickly go over to another self-driving car firm to leverage their expertise there.
Shortly after this AI disgorgement, a competing firm that hired those ex-employees has essentially reconstituted the AI driving system. They did not copy the disgorged AI. They instead independently rebuilt the AI by using the open-source code, the ML/DL libraries, and the openly available datasets.
I guess you could say, one down and more to go, at least from a potential AI Disgorgement perspective.
Conclusion
Though the example suggests that the AI disgorgement can be overcome, do not count out this potential means of a remedy for AI wrongs. Some would argue that the use of AI Disgorgement will be viewed as a visibly potent way to convince other companies that they should be more mindful in how they devise and deploy their AI. They could lose their pot of gold, as it were. This is a handy tool in the arsenal of getting compliance with AI Ethics and devising Ethical AI.
Others clamor that the AI Disgorgement could stifle innovation in the advancement and use of AI. Perhaps AI developers will be hesitant to stretch the envelope if they know that someday their AI could be essentially deleted out of existence. That would be a hard pill to swallow for those that had devoted years of blood, sweat, and tears to crafting the AI.
Numerous other tradeoffs are bandied around. Maybe a competing vendor would try to get a governmental authority to undercut the competition by goading the agency into using AI Disgorgement against other firms in their same industry. And, think too of the cost involved in trying to comply with such an order. There isn’t just a red button that you can push to perform all the deletion and destruction of the AI. Lots of expensive labor might be required and could take weeks or months to undertake.
Time will tell whether this newly emerging means of coping with AI is going to gain traction or fall by the wayside.
Does this meaty discussion about AI Disgorgement imply that AI is potentially indestructible?
You could contend that the phrasing of being destructible or indestructible is somewhat nebulous when referring to something that is principally electronic. A human being is not indestructible because we could obviously find a means to destroy the human physical form. The same could be said for just about anything that takes on a conventional physical manifestation. I won’t get into the metaphysical philosophizing theories that nothing is ever actually destroyed and that all matter is merely reconstituted into different forms and yet still exists (you are welcome to explore that lofty theorizing if you wish).
Given the ubiquitous nature of today’s computing and the likely ever-expanding use of computing, such as the vaunted Internet of Things (IoT), there are lots and lots of places for an electronically-based AI to hide. For the non-sentient AI of today, the AI might be hidden by those humans that wish to hide it, or programmed to hide by those humans that developed the AI. Heaven knows what the sentient AI might do.
If an AI system can easily make copies of itself and hide those copies and ultimately not be caught, would you be willing to say that this exemplifies a semblance of indestructibility?
Some would argue that despite a whack-a-mole possibility, in the end, you could destroy or delete the AI as long as you can successfully find and wipe out each such appearing or reappearing instance. Ergo, AI is not indestructible. Others would retort that a hiding AI is ostensibly indestructible because it can just keep spawning and the elusive cat and mouse gambit could last nearly forever. You are always going to be confronted with the next instance that you still haven’t squashed.
A famous quote comes to mind: “I was never born and I will never die; I do not hurt and cannot be hurt; I am invincible, immortal, indestructible” (by noted writer and journalist Aravind Adiga).
Could that be applicable to AI?
Those that are handwringing about sentient super-intelligent AI would seemingly say it could.
Source: https://www.forbes.com/sites/lanceeliot/2022/05/09/ai-ethics-and-the-law-are-dabbling-with-ai-disgorgement-or-all-out-destruction-of-ai-as-a-remedy-for-ai-wrongdoing-possibly-even-for-misbehaving-self-driving-cars/