In today’s column, I will be covering one of those mind-bending use cases of the latest in high-tech and Artificial Intelligence (AI), one that garners powerful reactions ranging from hailing it as a bona fide ingenious idea to dismissing it as an absurd and entirely preposterous notion. It has to do with the latest type of AI that has taken the world by storm, namely generative AI such as the widely and wildly popular ChatGPT made by AI maker OpenAI. And it has to do with something else that is quite a bit in the news these days and presents an imposing, societally vexing matter.
Are you ready for the topic?
Here it is.
- Proposition: Proceed to embed generative AI such as ChatGPT into so-called “smart handguns” to aid in the use of such a weapon for valid and good purposes while seeking to prevent potential uses entailing bad or evil purposes.
There you go.
What do you think of the proposed merging of generative AI and handguns?
Decidedly, the reaction to this combination is all over the map.
Some would right away argue that this makes perfectly good sense and might undercut all of those circumstances whereby a handgun is used in nefarious ways. The generative AI would presumably ascertain what is a proper use of the gun versus an improper use of the gun, doing so in real-time and as befits the situation at hand. This could then be conveyed or communicated to the person holding the gun, perhaps dissuading them from using the firearm.
Another possibility would be that the generative AI is able to auto-lock the weapon so that it cannot be fired at all. The more extreme capacity would be that the generative AI can directly fire the weapon, regardless of whether a human is pulling the trigger or not. You see, there is an array of viable means to set up the generative AI as either solely a “voice of reason” or to be actively able to control the gun.
Yikes, a retort immediately arises, this is pure craziness.
You are allowing an AI system to make potentially life-or-death determinations. If the generative AI renders an inappropriate choice, the person holding the gun might lose their life due to the delay caused by the AI. Humans need to make these kinds of decisions. Putting AI into the loop is a rotten idea and is fraught with all manner of problems and adverse outcomes. Don’t even think about doing something like this. Period, full stop.
There is admittedly a lot to unpack on this thorny and avidly controversial matter.
I’ll start by bringing you up-to-speed about generative AI and also proffer some helpful background about ChatGPT.
I’d guess that you already vaguely know that generative AI is the latest and hottest form of AI. There are various kinds of generative AI, such as AI apps that are text-to-text based, while others are text-to-video or text-to-image in their capabilities. As I have predicted in a prior column, we are heading toward generative AI that is fully multi-modal and incorporates features for doing text-to-anything, or as insiders say, text-to-X; see my coverage at the link here.
In terms of text-to-text generative AI, you’ve likely used or almost certainly know something about ChatGPT by AI maker OpenAI, which allows you to enter a text prompt and get a generated essay in response. For my elaboration on how this works, see the link here. The usual approach to using ChatGPT or other similar generative AI is to engage in an interactive dialogue or conversation with the AI. Doing so is admittedly a bit amazing, and at times startling, given the seemingly fluent nature of the AI-fostered discussions that can occur.
Into all of this comes a plethora of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes is a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
I’m going to try and lay out the myriad of sides to this debate about the intertwining of generative AI and handguns.
You can decide which viewpoint seems most compelling or sensible to you. The emphasis here will be to get as much onto the table about these two mighty topics as I can squeeze into the space limitations of this discussion. My ongoing column on the latest in AI especially covers the newest and at times exceedingly challenging insights gleaned from AI Ethics and AI Law.
Here’s a taste of what I’ll be covering herein:
- 1) Is It Feasible Or Infeasible To Embed Generative AI Into A Handgun
- 2) What Would The Generative AI Do As Embedded Into A Handgun
- 3) Concerns Over False Positives And False Negatives Via The Generative AI
- 4) Errors, Falsehoods, Biases, And AI Hallucinations Of Generative AI Enter Into The Picture
- 5) Handguns With Generative AI Versus Handguns Lacking Generative AI
- 6) AI Ethics And AI Law Wrestling With AI Used In Weaponry
- 7) Cyberhackers And Handguns Containing Generative AI
- 8) Other Considerations
I will also be discussing aspects of how ChatGPT is illustrative of how generative AI works. Keep in mind though that there are many other generative AI apps besides ChatGPT. Different generative AI apps from differing AI makers can produce entirely different renditions and responses related to these considerations.
Vital Background About Generative AI
Before I get further into this topic, I’d like to make sure we are all on the same page overall about what generative AI is and also what ChatGPT and its successor GPT-4 are all about. For my ongoing coverage of generative AI and the latest twists and turns, see the link here.
If you are already versed in generative AI such as ChatGPT, you can skim through this foundational portion or possibly even skip ahead to the next section of this discussion. You decide what suits your background and experience.
I’m sure that you already know that ChatGPT is a headline-grabbing AI app devised by AI maker OpenAI that can produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application. This type of AI is classified as generative AI due to generating or producing its outputs. ChatGPT is a text-to-text generative AI app that takes text as input and produces text as output. I prefer to refer to this as text-to-essay since the outputs are usually of an essay style.
Please know though that this AI and indeed no other AI is currently sentient. Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language. To know more about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, coined GPT-4, see the discussion at the link here.
There are four primary modes of being able to access or utilize ChatGPT:
- 1) Directly. Direct use of ChatGPT by logging in and using the AI app on the web
- 2) Indirectly. Indirect use of kind-of ChatGPT (actually, GPT-4) as embedded in Microsoft Bing search engine
- 3) App-to-ChatGPT. Use of some other application that connects to ChatGPT via the API (application programming interface)
- 4) ChatGPT-to-App. The latest added use entails accessing other applications from within ChatGPT via plugins
The capability of being able to develop your own app and connect it to ChatGPT is quite significant. On top of that capability comes the addition of being able to craft plugins for ChatGPT. The use of plugins means that when people are using ChatGPT, they can potentially invoke your app easily and seamlessly.
I and others are saying that this will give rise to ChatGPT as a platform.
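To make the app-to-ChatGPT mode tangible, here is a minimal sketch, assuming the openai Python package as it existed at the time of this writing (the pre-1.0 interface); the API key placeholder and the prompt are merely illustrative.

```python
# Minimal sketch of an app calling ChatGPT via the OpenAI API.
# Assumes: pip install openai (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model underlying ChatGPT
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is a ChatGPT plugin?"},
    ],
)

# The generated essay-style reply comes back in the first choice.
print(response.choices[0].message.content)
```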
As noted, generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
There are numerous concerns about generative AI.
One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).
Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.
There have been some zany outsized claims on social media about generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overblown claims. You might politely say that some people are overstating what today’s AI can do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.
Do not anthropomorphize AI.
Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.
One final forewarning for now.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicions. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that President Abraham Lincoln flew around the country in a private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
Generative AI Such As ChatGPT For Use In Smart Handguns
We are ready to further unpack this fascinating and significant matter.
First, I’ve previously covered in my columns that there are various notably prohibited uses of ChatGPT, as stated by OpenAI in their licensing stipulations when you make use of ChatGPT, see my review at the link here.
Included in the list are these indications:
- “OpenAI prohibits the use of our models, tools, and services for illegal activity.”
- “Activity that has high risk of physical harm, including: Weapons development, military and warfare.”
- “Generation of hateful, harassing, or violent content”
- Etc.
All in all, the list of prohibited uses would seem to put the kibosh on considering using ChatGPT as embedded into a handgun.
The gambit can nonetheless still be attempted.
The crux is that if an AI maker discovers that this form of usage is taking place, they could revoke access to the generative AI app. Trickery might be tried by finding ways to hide the usage or aiming to camouflage that the particular AI app is being used. Another avenue is to go rogue and proceed despite getting revoked from usage, but this seems unlikely as a functional gambit.
A different approach consists of using some other generative AI app. Perhaps some other AI makers won’t have as strict a set of rules or be as diligent about enforcing their rules. On top of this, the seemingly most likely path would be to use a freely available open-sourced generative AI. Grab a copy of the open-source generative AI and make use of it; even if the open-source library has some rules about proper versus improper uses, the chances are that enforcement against such use is not going to prevail nor be earnestly pursued (maybe, depending upon the nature of the open-source provisions).
One means of trying to persuade an AI maker to allow the use of their AI app might be to note that the use of generative AI embedded into a handgun could end up saving lives. The handgun manufacturer or entity seeking to do this embedding could appeal to the chances of preventing loss of life.
What AI maker in their right mind is going to willingly appear as though they are blocking the use of their generative AI when lives are to be preserved and society can be a safer place?
A daunting posture to be in.
Of course, the other side of the coin consists of the generative AI inadvertently causing loss of life. We will examine those possibilities shortly. An AI maker would readily be able to make the reasoned case that the dangers presented by using generative AI in this manner outweigh the benefits, though some would argue that this is not so and that the benefits exceed the anticipated downsides.
There is another OpenAI rule regarding ChatGPT usage that could be invoked if this gun embedding is somehow undertaken.
If the entity or gun maker were to get into legal trouble with those that believe they were harmed by the usage, and if those so harmed decide to sue, the entity or gun maker could be on the hook for their own legal bills and likewise owe OpenAI for its legal expenses as a result of the licensing indemnification clause, which I discuss at the link here. The legal costs could be enormous. That alone would seem to make any legitimate entity think twice before proceeding on this intertwining.
Here’s something else to keep in mind.
I have previously discussed that ChatGPT and other generative AI are leaky regarding private data that you might enter and can also undercut your data confidentiality, see the link here. The question arises as to whether the prompts entered by you, or the responses generated by the generative AI such as ChatGPT, would be private and confidential or not.
Maybe not.
Allow me a moment to explain this and also expand on how generative AI might be used in a smart handgun.
Suppose that a generative AI app is included in a handgun. Assume further that the generative AI is the text-to-text or text-to-essay style of generative AI (we don’t have to restrict this and it could be multi-modal, but for ease of discussion, let’s make that presumption). Rather than typing text into a screen or keyboard, the handgun is fitted with a microphone and a small speaker. Akin to Siri or Alexa, the generative AI will use capabilities that allow the person to speak to the generative AI, wherein the speech is converted into text, and the generative AI will speak back, converting the text into speech.
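To make that round-trip concrete, here is a sketch in plain Python of the conversational loop just described; every function is a stand-in stub (typed input and printed output), since a real build would wire in actual speech-to-text, a generative AI model, and text-to-speech components.

```python
# Conceptual sketch of the speech round-trip on a gun-embedded AI.
# All functions are illustrative stand-ins, not any real device's API.

def speech_to_text() -> str:
    # Stand-in: read typed text rather than transcribing microphone audio.
    return input("Person says: ")

def generative_ai_reply(history: list) -> str:
    # Stand-in for the pattern-matching model's generated response.
    return "Why have you picked up the gun?" if len(history) == 1 else "Tell me more."

def text_to_speech(reply: str) -> None:
    # Stand-in: print rather than synthesizing audio through the speaker.
    print("AI says:", reply)

history = []
for _ in range(3):  # a short illustrative dialogue of three turns
    history.append(speech_to_text())
    text_to_speech(generative_ai_reply(history))
```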
A person picks up the handgun and the generative AI activates.
Imagine this scenario. The generative AI asks the person why they have picked up the gun. The person tells the generative AI app that they are thinking of harming themselves. Upon examining this remark, the generative AI seeks to talk the person out of this endangering action. An interactive dialogue occurs. The person is eventually convinced by the generative AI that this is not the way to proceed, and they, fortunately, put down the gun.
Realize that as mentioned earlier, the dialogue with the generative AI is being undertaken via the mathematical and computational pattern-matching functionality of the AI app. The AI is not sentient. It is not as though a human or human-like contrivance is engaged in this dialogue. It is a pattern-matching AI mechanism. The AI app in this context is essentially undertaking a mental health advisory function, serving as a means of dissuading the person from adverse uses of the handgun.
If you are overall interested further in how “mental health advice” is being dispensed by generative AI such as ChatGPT, see my coverage at the link here and the link here, just to name a few. It is a use of generative AI that is quite controversial in its own right. Some condemn it, and some applaud it.
I’ll bring up a twist pertaining to privacy and confidentiality and the use of generative AI that matches this scenario.
The interaction that the person had with the gun-embedded generative AI has pretty much gone into the Borg, as it were. The person might not realize that perhaps their entire interaction is now fully available to the AI maker and their AI developers. Furthermore, the prompts that were entered (i.e., whatever the person said to the generative AI) are often used to augment the data training of the generative AI. Thus, personal thoughts as entered prompts can become part of the overall pattern-matching scheme of the generative AI. This could be “memorized” and later be used by the generative AI in responding to other users of the AI app.
Few users tend to realize that they are “contributing” to the generative AI app. They also tend to not be aware of the usage policies and terms of use. In short, they might falsely assume that whatever they enter will be kept strictly private and confidential. Don’t bet on it. Recent announcements by OpenAI have further aimed to clarify matters of data privacy and confidentiality in general concerning ChatGPT and the successor GPT-4, see the link here.
The person that had the dialogue with the gun-embedded generative AI has potentially transmitted their conversation such that the AI maker could access it.
What should the AI maker do?
Consider the plethora of problematic questions, such as:
- Would we expect that the AI maker should be monitoring all such inputs and then alert authorities that the person had an endangering chat with the AI?
- Is this an intrusion on their privacy?
- Can the AI maker be held liable if the generative AI wasn’t able to dissuade the person?
- Can the AI maker be liable for any wide range of uses of the handgun for which the generative AI failed to stop the person?
- Conversely, can the AI maker be liable if the generative AI convinced the person to not use the handgun but the outcome was worse such as when defending against a lethal threat?
- Same if the generative AI had earlier detected the intentions and the AI maker did nothing to intervene?
And so on.
It’s a proverbial ethical and legal hornet’s nest, for sure.
I’ll make a few more remarks and then I’ll dive into the earlier stated list of key points to be made on this matter.
The usual way of making use of generative AI such as ChatGPT consists of doing so with the overarching generalized version of the generative AI. That is the norm. Various developers and companies are opting to tune generative AI such as ChatGPT to particular domains of use, such as for aiding you in business matters, tax advice, or for cooking or hobbies, and any number of focused domains. The tuned version can be further enhanced by all-out customization.
I divide the gun-embedded use of generative AI into these three realms:
- 1) Generic generative AI. General and widely used generative AI such as ChatGPT accessed to perform interactive conversations and dialoguing related to potential gun use but not specifically built or devised for that purpose.
- 2) Handgun-Tuned generative AI. A company or AI developer takes a generic generative AI and augments it with plugins or add-ons to hone toward generating gun-specific conversations and responses (see the sketch following this list).
- 3) Handgun-Customized generative AI. A company or AI developer makes a fully customized generative AI that is solely aimed to perform gun-related conversations and responses.
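Here is the promised minimal sketch of the handgun-tuned realm, assuming the tuning is accomplished by wrapping a generic generative AI with a domain-restricting system prompt; the prompt wording and the fake_chat stand-in are purely illustrative, not any vendor’s actual mechanism.

```python
# Sketch: "tuning" a generic generative AI toward the handgun domain by
# prepending a restrictive system prompt. Illustrative only.

HANDGUN_SYSTEM_PROMPT = (
    "You are embedded in a smart handgun. Only discuss gun safety and the "
    "user's stated reason for handling the weapon. Politely decline any "
    "unrelated topic, such as cooking."
)

def handgun_tuned_reply(user_text: str, chat) -> str:
    # chat() stands in for whatever generic generative AI is being used.
    messages = [
        {"role": "system", "content": HANDGUN_SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
    return chat(messages)

def fake_chat(messages) -> str:
    # Stand-in for an actual model call, echoing the expected refusal.
    return "I can only discuss matters related to this firearm."

print(handgun_tuned_reply("How do I cook a nice meal?", fake_chat))
```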
Another intriguing angle is whether the gun-specific tuned or customized generative AI would wander outside of gun-related interactions. In other words, a person picks up such a gun and wants to discuss a completely non-gun pertinent topic. Maybe they want to discuss something about cooking a nice meal. Whereas a generic generative AI would engage in that type of dialogue, the question is whether the gun-embedded variant would steer away from anything other than something seemingly gun-related.
Lots of combinations and permutations arise.
The Big Picture On Gun-Embedded Generative AI
I have tried to pull together the most often indicated aspects of how generative AI and smart handguns might be intertwined. I’ve not covered every possibility. Space limitations force me to focus on the salient and evocative ones. I’m sure that you might encounter other possibilities.
If there is sufficient reader interest, I’ll do a follow-up to this discussion and include other such points, and get more deeply into each point.
I earlier identified these major talking points or big-picture perspectives that are worthy of discussion on this weighty topic:
- 1) Is It Feasible Or Infeasible To Embed Generative AI Into A Handgun
- 2) What Would The Generative AI Do As Embedded Into A Handgun
- 3) Concerns Over False Positives And False Negatives Via The Generative AI
- 4) Errors, Falsehoods, Biases, And AI Hallucinations Of Generative AI Enter Into The Picture
- 5) Handguns With Generative AI Versus Handguns Lacking Generative AI
- 6) AI Ethics And AI Law Wrestling With AI Used In Weaponry
- 7) Cyberhackers And Handguns Containing Generative AI
- 8) Other Considerations
Let’s briefly unpack those worthy points.
1) Is It Feasible Or Infeasible To Embed Generative AI Into A Handgun
A gadfly would roar that this talk of embedding generative AI into a handgun is hogwash since there is allegedly no technologically feasible way to do so.
They would be wrong.
This can indeed be undertaken.
Let’s first consider the latest wave of new so-called smart handguns that are entering the marketplace today.
These high-tech handguns have a variety of sensors for detecting that the person shooting the gun is someone authorized to actively use the gun. Onboard the weapon is a set of computer chips and sensors that detect aspects such as a fingerprint and also do facial recognition, akin to using an ATM at the bank.
The gun won’t magically know who is authorized to use the weapon.
You usually need to first do a series of parameter-setting activities including having the onboard computer record your fingerprint and take a photo to record your facial features. Once you’ve done the proper initializations, from then on the handgun will only allow the firing of the piece when it is your fingerprint and/or your face that is detected by the weapon. Additionally, some of these handguns allow you to connect to the gun via Bluetooth or Wi-Fi so that you can transmit the identification data into the weapon rather than relying solely on the scanning devices embedded into the piece.
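A simplified sketch of that initialize-then-verify flow appears below; the string “templates” are merely stand-ins for whatever biometric representations the onboard chips actually store.

```python
# Sketch of smart-handgun enrollment and verification. The biometric
# "templates" are plain strings here purely for illustration.

authorized_users = []

def enroll(fingerprint_template: str, face_template: str) -> None:
    # One-time parameter-setting: record the owner's biometric templates.
    authorized_users.append((fingerprint_template, face_template))

def is_authorized(detected_fingerprint: str, detected_face: str) -> bool:
    # Unlock only when the detected fingerprint and/or face matches a record.
    return any(
        detected_fingerprint == fp or detected_face == face
        for fp, face in authorized_users
    )

enroll("owner-fingerprint", "owner-face")
print(is_authorized("owner-fingerprint", "unknown-face"))  # True
print(is_authorized("intruder-print", "unknown-face"))     # False
```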
I’m sure that you’ve seen these kinds of handgun high-tech features depicted in sci-fi movies and TV shows. The classic plot twist is when the good guy chops off the hand of the deceased bad guy so that the hero can proceed to use their opponent’s handgun (struggling to fit the dead wrongdoer’s finger next to the trigger).
Science fiction often becomes real-world.
For these high-tech handguns, electrical power is needed for the electronic features to be functional. This typically requires that a tiny battery is included in the weapon, often a rechargeable battery. You can plug in and charge up your handgun when it isn’t otherwise being used (or possibly replace the battery).
Some models will only allow one authorized person, while other models will allow several to be authorized to use the weapon.
The basis for these high-tech infused handguns is that if someone keeps a gun in their bedroom for personal protection, the use of the weapon by anyone else will be prevented. When the authorized person uses the gun, it becomes firmly enabled for firing. A child, or even an intruder, that manages to find the gun and seeks to use it will not be able to fire it, assuming that the high-tech does what it is supposed to do.
You can likely anticipate that one worry by an owner of such a handgun is whether the high-tech might falter at the worst of times. Suppose that the authorized person grabs the gun in haste to deal with an armed intruder. If the onboard detection doesn’t recognize the authorized person’s finger or face, the weapon would presumably remain locked by the high-tech to prevent firing. Concerns are that fingerprint detection might mistakenly fail to recognize the legitimate finger involved. Facial recognition might also mistakenly fail to identify the face of the authorized person.
Does the prevention of unauthorized users of the weapon outweigh the chances, no matter how slim, of the handgun failing to recognize an authorized user at the moment that it really counts?
Some insist that it most certainly does. Others are doubtful. We’ll need to see how widespread these latest high-tech augmented handguns become.
One teeth-grating quibble is the somewhat outsized reference to these as being “smart” handguns. The word “smart” comes about because there is a computer chip or similar computing processing embedded into the piece. Also, fingerprint detection and facial recognition are usually empowered via AI-based algorithms that perform those types of functions. Thus, the mainstream wording is to say that the handgun differs from a conventional non-tech weapon by being a so-called smart handgun.
But the handgun isn’t especially “smart” in any comprehensive sense of phrasing.
The handgun won’t do anything other than use fingerprint detection and/or facial recognition to mathematically and computationally calculate whether to unlock the gun so that it can be fired. That’s about the only “smarts” involved. For example, if an evildoer is the one that did the original initialization, they can fire the gun whenever they wish to do so. The AI in that case doesn’t distinguish between good people or bad people, nor between appropriate settings when using a weapon versus inappropriate settings.
That’s where generative AI augmentation enters into this equation.
Suppose that we boosted the onboard computing of the handgun so that it could make use of generative AI. This is entirely feasible to do. Sure, you might need to scale down the generative AI app. You might need to upscale the onboard computer and associated computer memory. All of this could add physical heft to the weapon and thus potentially undercut its utility. It certainly is going to add cost to things.
Nonetheless, regardless of those factors, feasibility still prevails. There is a difference between something being feasible versus whether it is affordable and practical in a day-to-day sense. For the moment, focus on the feasibility element.
Generative AI apps are usually so big that they need to be hosted on a large-scale computing cloud service. One somewhat questionable approach would be to have the handgun interact with such a cloud service when the generative AI is active on the weapon. Admittedly, this is a dubious scheme. Imagine that the handgun is trying to connect to Wi-Fi when the user of the weapon is frantically needing to hurriedly use the gun.
This is ripe for one of those social media memes. We can generally note that remote access to a generative AI app for a handgun is imprudent at this time. The generative AI would pretty much need to be scaled down to a much smaller size to work on the low-end computing onboard the weapon and be able to perform within the likely crucial time constraints involved. For my analysis of the ongoing efforts to reduce the computing footprint of generative AI, see the link here.
So, yes, overall it is feasible to do this.
I’m guessing you might still be hazy as to the intention of doing so.
What does having generative AI onboard a handgun do for humankind?
We shall consider that question next.
2) What Would The Generative AI Do As Embedded Into A Handgun
I had mentioned earlier a hypothetical scenario whereby a person that had a handgun was contemplating using the weapon upon themselves. A “smart handgun” that has only fingerprint detection and facial recognition would allow the person to proceed, assuming that they were an authorized user of the handgun and had done the initialization accordingly.
If the handgun also had generative AI, presumably the AI app could converse with the person and try to talk them out of their dire actions. When I say converse, I am not suggesting any iota of sentience by the generative AI. Again, as a reminder, generative AI is a mathematical and computational pattern-matching tool. The interaction would be akin to performing a text-based interaction that you might do with ChatGPT. In the case of the handgun, rather than typing, and as mentioned earlier, there could be a microphone and speaker embedded too so that text-to-speech and speech-to-text would be invoked (similar to Siri and Alexa).
The scenario entails the generative AI providing a “voice of reason” such that the person intending to use the weapon might have a reflective moment before proceeding ahead.
A cynic or skeptic would instantly belittle this possibility. They would exhort that the person might not at all pay attention to the generative AI. Or the person might laugh off the generative AI and ignore whatever it says. Another commentary would be that the person would act so quickly that the generative AI wouldn’t have time to carry on a life-or-death sobering chat.
This brings up a slew of important facets about the design and approach used for the generative AI embedded into the handgun.
For example, suppose that the default was that the handgun was locked from firing until the human was able to persuade the generative AI that the gun should be unlocked. An obvious downside is that there might not be sufficient time to do this kind of wrangling in an emergency setting. Another is the possibility that the generative AI refuses to be convinced and will not unlock the handgun regardless of whatever the person says.
There is also the chance that the person might lie to trick the generative AI into unlocking the handgun.
Envision that a person is surrounded by six imposing figures. The person is fearful for their life. They reach for their handgun that has generative AI loaded into it. The default is that the generative AI has to approve the usage of the gun. Trying to talk the generative AI into approval might not be an expeditious course of action.
Anyway, imagine that the person tells the generative AI that the six imposing figures all have guns and are intending to shoot. Certainly, this would get the generative AI to unlock the handgun.
But suppose further that the imposing figures are there to help this person. Suppose they don’t have weapons of any kind. The person is in fact seeking to harm them. The generative AI is tricked into believing the authorized handgun-wielding person and ergo unlocks the weapon for firing.
Not good.
An evident limitation to using generative AI in a handgun is that the generative AI has no grounding related to the context of the situation at hand. All that the generative AI would be able to do is rely upon whatever the authorized user had to say (we’ll also assume that voice detection is involved, such that the generative AI would not be interacting with anyone other than an authorized user, though perhaps in the future this stringency would be relaxed).
Some would argue that once we have fully multi-modal generative AI, the onboard AI app would have a greater latitude when trying to ascertain the situation that is unfolding. Imagine that the handgun has a mini camera for capturing video. A fuller set of sensory apparatus might enable the generative AI to sense and respond in a more robust contextual manner.
Consider turning around one of the aforementioned assumptions about the generative AI serving to unlock the handgun. We could assume that the handgun is by default unlocked and that the generative AI can lock the handgun if it is not convinced of the basis for using the weapon. This though seems like letting the horse out of the barn. The person could seemingly use the weapon and not at all engage with the generative AI.
The likely approach would be to use the standard “smart handgun” features of fingerprint detection and facial recognition in conjunction with generative AI. Here’s how that would go. The person picks up the weapon. By default, it is locked. Fingerprint detection and/or facial recognition determine that this is an authorized user. Normally, this would then unlock the weapon.
With generative AI also onboard, fingerprint detection and/or facial recognition would electronically signal the generative AI that an authorized person is handling the piece. At this stage, the generative AI takes over. The gun is not yet unlocked for use. The generative AI only gets involved once the weapon is presumed to be in the hands of an authorized user.
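Put into code, the two-stage gating just described might be structured like this minimal sketch; both the biometric check and the AI’s judgment are stubbed (the latter with a crude keyword test standing in for a full dialogue).

```python
# Sketch of the two-stage unlock: biometrics first, then the generative
# AI's calculated judgment. Both stages are illustrative stubs.

def biometric_check(user_id: str) -> bool:
    # Stage 1: fingerprint and/or facial recognition (stubbed).
    return user_id == "authorized-owner"

def generative_ai_approves(stated_reason: str) -> bool:
    # Stage 2: stand-in for the AI's dialogue-based determination.
    return "intruder" in stated_reason.lower()

def try_unlock(user_id: str, stated_reason: str) -> bool:
    if not biometric_check(user_id):
        return False  # unauthorized person: the weapon stays locked
    return generative_ai_approves(stated_reason)

print(try_unlock("authorized-owner", "An armed intruder broke in"))  # True
print(try_unlock("authorized-owner", "I want to scare my neighbor")) # False
print(try_unlock("someone-else", "An armed intruder broke in"))      # False
```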
Would the drawbacks of requiring the generative AI to be convinced to unlock the weapon outweigh the need at times to utilize the weapon straightaway?
Well, we are back to the same old question about the tradeoffs involved. The generative AI might aid in preventing the use of handguns that would otherwise take lives needlessly. On the other hand, it might delay the use of such a handgun when it is genuinely needed, potentially costing lives as a result.
There is more to be said on this, so let’s move on to the next key point.
3) Concerns Over False Positives And False Negatives Via The Generative AI
We cannot assume that the generative AI is perfect at carrying on a conversation with someone, particularly when in a dire situation encompassing human life-or-death considerations.
Suppose the generative AI calculates from a conversation that the weapon should be unlocked, but it turns out the person using the weapon is doing so to harm innocents. We can also suppose the other side of the coin, namely that the generative AI calculates to not unlock the weapon, and this might allow harm to innocents that would otherwise have been protected if the gun had been unlocked.
These are known as false positives and false negatives.
The generative AI is not going to be flawless in making these determinations. It is one thing to have generative AI proffer a recipe for a meal that has some rough edges, and entirely something else when it comes to making calculated choices of whether to unlock or lock a lethal weapon.
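For clarity, here is a tiny sketch enumerating the four possible outcomes, treating “unlock the weapon” as the positive decision; the labels follow standard confusion-matrix usage.

```python
# The four outcomes of the AI's unlock decision versus the actual
# legitimacy of the intended use (illustrative labels).

outcomes = {
    # (AI unlocked?, use was legitimate?): outcome
    (True, True):   "true positive  - unlocked for a proper use",
    (True, False):  "false positive - unlocked, innocents endangered",
    (False, False): "true negative  - kept locked, harm averted",
    (False, True):  "false negative - kept locked, defender left helpless",
}

for (unlocked, legitimate), label in outcomes.items():
    print(f"unlocked={unlocked}, legitimate={legitimate}: {label}")
```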
4) Errors, Falsehoods, Biases, And AI Hallucinations Of Generative AI Enter Into The Picture
I mentioned earlier that a known concern about generative AI is that pattern-matching can produce or include errors, falsehoods, biases, and also AI hallucinations.
Here’s how that comes into this setting.
A person that is authorized to use a smart handgun is attempting to use it. They have a legitimate basis for doing so. In discussion with the onboard generative AI, the AI has a hidden computational bias such that when certain words are entered or spoken, the generative AI will refuse to do whatever the person says. No matter what. It is a deeply hidden pattern that no one realizes exists in the infrastructure of this particular instance of generative AI.
The person attempts to “reason” with the generative AI. Sorry, no deal, the generative AI responds. The hidden bias overrides anything else that logically or sensibly might be pertinent. Remember that generative AI is not sentient and has no common sense or other such cognitive attributes.
The person that has quite good intentions is seeking to use the handgun but is prevented from doing so. A bias within the generative AI has precluded the usage, regardless of any other bona fide need to use the weapon.
That’s an example of what a bias in generative AI might do.
I will leave it to you to envision what might happen if the generative AI encounters an AI hallucination while chatting with the authorized user of the handgun. Heaven help us. The generative AI might try to convince the person to use the handgun in the most troublesome of ways.
This raises a vital consideration. There isn’t any ironclad guarantee that the generative AI won’t provide bad advice. One would assume that the generative AI has been data trained and filtered to try to prevent these dour outcomes, but even so, no such guarantee exists. For more on the need for better and more rigorous mathematical and computational proofs of correctness for AI and AI safety purposes, see my coverage at the link here.
5) Handguns With Generative AI Versus Handguns Lacking Generative AI
The typical response to learning about any kind of smart handgun is that unless the whole world is forced into using these high-tech infused weapons, and unless there are no remaining non-tech handguns anywhere on Earth, you are putting those with the supposed smart weapon at a distinct disadvantage.
Only once everyone has these high-tech handguns, and only if all non-tech handguns were somehow magically done away with, would we all be in the same boat. Until then, the lesser-tech handgun is undoubtedly going to prevail over the techie-loaded handgun when it comes to overall preference and speed of response.
While the person with the generative AI preloaded handgun is politely discussing world affairs with their weapon, a person with a “dumb” straight shooter is going to be unencumbered in their use. You can guess which person will ostensibly win whatever dueling activity is taking place.
This is said to be the case for the handguns that are emerging with fingerprint and facial recognition capabilities. The same is to be said about handguns that would have generative AI on them, likely even more so.
6) AI Ethics And AI Law Wrestling With AI Used In Weaponry
A larger concern is the use of AI in any weapons of any kind, especially when at scale.
I’ve discussed the use of AI in military weapons, see the link here. We are already heading in that direction and there are weapons systems today that incorporate AI. Many AI Ethics issues arise. Likewise, existing and new AI Laws are being devised regarding AI in weaponry of all sizes and shapes.
You might be tempted to think that this is fine, as long as humans remain in the loop and can decide when the weapon is to be used. The problem, as I explain in my coverage of the topic, is that the human in the loop might not be fast enough to make such a decision. In turn, if an opponent has an AI-based weapon that doesn’t enforce a human-in-the-loop proviso, their weapon might wipe out you and your AI weapons before you can respond.
There are numerous other heavy issues afoot.
We are facing tough times ahead. If we don’t use AI in weapons, the odds are that we will fall behind those that do so. But the race to include AI is also upping the chances of AI systems making choices about the ultimate fate of humanity. Machine against machine. Humans might be the losers in that AI-based world. This is one of the many existential risks associated with advances in AI, see my analysis at the link here.
Returning to the focus herein on handguns, let’s examine what the generative AI might do when embedded into a handgun.
We have these three major possibilities (a brief code sketch follows the list):
- 1) Advisory-only (Generative AI Enabled Handgun). The generative AI interacts with an authorized user of the handgun and unlocks the handgun for use when seemingly suitable to do so, though all manner of caveats apply. A key facet is that the AI does not fire the handgun. This is a task solely for the human using the handgun.
- 2) Semi-Autonomous (Generative AI Enabled Handgun). The generative AI is able to interact with the authorized user of the handgun, and able to unlock the piece. Furthermore, the generative AI is enabled to fire the weapon but only in conjunction with the human agreeing to do so.
- 3) Autonomous (Generative AI Enabled Handgun). The generative AI is able to interact with the authorized user of the handgun, and able to unlock the piece. In addition, the generative AI can fire the weapon, doing so regardless of what the human might have to say.
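Here is the promised sketch, encoding the three modes as an explicit configuration setting to make the escalation from advisory to autonomous plain; the names are illustrative rather than any real product’s API.

```python
# Sketch of the three generative AI handgun modes as a configuration.
from enum import Enum

class AIMode(Enum):
    ADVISORY_ONLY = 1    # AI converses and unlocks; only the human fires
    SEMI_AUTONOMOUS = 2  # AI may fire, but only with the human's agreement
    AUTONOMOUS = 3       # AI may fire regardless of the human's wishes

def may_ai_fire(mode: AIMode, human_agrees: bool) -> bool:
    if mode is AIMode.ADVISORY_ONLY:
        return False
    if mode is AIMode.SEMI_AUTONOMOUS:
        return human_agrees
    return True  # AIMode.AUTONOMOUS

print(may_ai_fire(AIMode.ADVISORY_ONLY, human_agrees=True))    # False
print(may_ai_fire(AIMode.SEMI_AUTONOMOUS, human_agrees=False)) # False
print(may_ai_fire(AIMode.AUTONOMOUS, human_agrees=False))      # True
```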
The scenarios that I was describing earlier are all based on the advisory-only mode. The other two modes open quite an additional can of worms.
7) Cyberhackers And Handguns Containing Generative AI
A conventional non-tech handgun is usually self-contained and will operate based on whatever the bearer of the piece opts to do (other than mechanical failures like gun jamming, etc.). Unless someone manages to physically get a hold of the handgun, it is what it is.
Tech-based handguns potentially introduce the possibility of cyber-hacking, especially if the weapon has some form of Wi-Fi or Bluetooth electronic communication capability. You might have your smart handgun stored near your bedside table. At night, when you are asleep, suppose a cyber-hacker remotely cracks into your handgun.
In the instance of a handgun that has just fingerprint and facial recognition, presumably the mainstay of what a cyberhacker could do is mess up the detection and recognition so that the gun wouldn’t recognize you when you try to use it. In addition, potentially, the cyberhacker might be able to insert their own biometric data into the handgun so they can use it, or do some other trickery that could confuse the onboard software.
The chances of that happening would seem exceedingly low.
The cyber-hacker has to somehow discover that your weapon has a remote connection. They would have to find and access the remote connection. They would need some motive for wanting to reconfigure the weapon. All in all, it doesn’t seem like much of a worthwhile effort in terms of garnering money or other leverage.
For a handgun that has generative AI loaded into it, this cyber-hacking might be more worthwhile. The cyber-hacker might want to alter the generative AI so that the AI will try to convince the authorized gun user to do something they would not ordinarily do. All kinds of devious schemes could be attempted.
That being said, this is somewhat farfetched and leaps beyond where we are today. Bottom-line, no matter which way you look at it, cybersecurity will be essential for any high-tech handguns, especially when remote access and generative AI are included.
A truism if there ever was one.
8) Other Considerations
The notion of using AI in handguns and other individual-oriented firearms has been bandied around for many years by those in the AI Ethics realm. I have covered this topic in various contexts, such as the analysis at the link here and the link here.
A particularly invigorating 2021 paper entitled “AI Can Stop Mass Shootings, And More” covered some of the prior history on this topic, looked at simulations, and said this:
- “We propose to build directly upon our longstanding, prior R&D in AI/machine ethics in order to attempt to make real the blue-sky idea of AI that can thwart mass shootings, by bringing to bear its ethical reasoning. The R&D in question is overtly and avowedly logicist in form, and since we are hardly the only ones who have established a firm foundation in the attempt to imbue AI’s with their own ethical sensibility, the pursuit of our proposal by those in different methodological camps should, we believe, be considered as well. We seek herein to make our vision at least somewhat concrete by anchoring our exposition to two simulations, one in which the AI saves the lives of innocents by locking out a malevolent human’s gun, and a second in which this malevolent agent is allowed by the AI to be neutralized by law enforcement. Along the way, some objections are anticipated, and rebutted” (in a paper entitled “AI Can Stop Mass Shootings, and More” by Selmer Bringsjord, Naveen Sundar Govindarajulu, and Michael Giancola, February 5, 2021).
Generative AI has upped the ante on these discussions, and you can anticipate a lot more renewed attention occurring accordingly.
Some ardently believe that AI will never be able to act in an ethically suitable, human-like fashion, and that we are wrong to assume otherwise. We are presumably barking up the wrong tree to try and have AI engage ethically in a dialogue with a human that is holding a gun. This is a hard enough task for fellow humans to delicately perform, and perhaps a task deemed inappropriate for AI, they would assert.
Time will tell.
Conclusion
I mentioned at the beginning of this discussion that this is one of those topics that evokes diametrically opposing viewpoints. One perspective is that this is all blarney. No one would ever put generative AI into a handgun. It is nonsensical.
The other camp would say that it is not only feasible but also likely to happen. The use of generative AI is going to be found in all manner of things that we use or rely upon. We are just still in the early stages of generative AI. We are also only in the infancy of incorporating generative AI as an embedded component in real-time machines and devices.
You cannot turn back the clock.
Generative AI is here. It is going to be expanded in terms of uses. It will become more advanced at interacting with humans. You don’t need to cross over into sentient AI, often referred to as Artificial General Intelligence (AGI), to make use of non-sentient generative AI in a whole host of ways.
Here is your vexing question that I ask you to seriously mull over:
- Is embedding generative AI into a handgun a Frankenstein invention or a means of curtailing societal woes associated with the adverse uses of guns?
This is a decision up to us humans and one that we ought not to let be decided by generative AI. Let’s put our human heads together to puzzle this out.
Before AI beats us to the punch.
Source: https://www.forbes.com/sites/lanceeliot/2023/05/15/combining-generative-ai-chatgpt-into-handguns-triggers-fiery-response/