There are some people whose life mission, it seems, is to goad and infuriate the rest of us.
You assuredly know someone of that ilk. They will toss a brazen and uncalled-for statement into a conversation that completely rattles everyone and sparks a verbal brawl. Despite the conversational spattering being the equivalent of an explosive hand grenade, it might be charitably suggested that such an annoying and disruptive act is merely the sign of a seasoned provocateur. Perhaps the dialogue was altogether mundane and uninteresting, thus the reasoned need for a rebellious effort to enliven the interaction.
On the other hand, it could be that the provocation is nothing more than an attempt to stop any substantive banter. An out-of-the-blue showstopper would seem to accomplish that unseemly goal. By distracting the attention toward some other highly controversial topic, all heck will break loose, and no one will remember the train of thought that just moments ago was the considered focus of the group attention.
Let’s make clear that the interloping statement is going to be an outlandish one. Whether the interjection is relevant or even irrelevant matters less than whether the statement or assertion is something that could garner a semblance of balanced discussion. Anything that doesn’t provide abundant and absolute shock and awe will not be satisfying enough to the truly disruptive goader. They seek to come up with the absolute “best” shocker that will send all participants into a tizzy.
The greater the stoked tizzy, the better.
As you’ll see in a moment, there are plenty of goaders in the realm of Artificial Intelligence (AI). These are people that like to goad others into AI discussions that are not meant for educational or informational purposes, but instead simply as disruptive and exasperatingly false imagery of what AI is and what we need to be doing about AI.
Those goaders are generally messing things up, especially by throwing off those that do not know about AI and sadly undercutting the concerted movement toward Ethical AI. I will also be illuminating another angle on this intertwining of AI, namely that some goaders are using AI-powered online tools for the lamentable purpose of doing the goading on their behalf. As the old saying goes, this seemingly proves that you just cannot give some people new shiny toys (since they are bound to use them improperly).
For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Before we get into the AI topic per se, let’s examine how these goaders overall and surreptitiously carry out their goading tasks.
As you undoubtedly know, devout chatty disruptors are not confined to face-to-face verbal interactions. I’m sure you’ve experienced the same type of behavior online. This type of activity can occur during any series of posted text messages such as when people are electronically responding to a live stream video and stating their opinions about what is going on. The odds are that you’ll get someone that just has to put in their two cents and do so in the unruliest of ways.
The reaction might be that others start jumping onto the newly introduced topic. Step by step, the electronic back-and-forth diverges from discussing the live stream and becomes instead preoccupied with whatever other bombshell the goader has lobbed. A pile-on is bound to occur.
Meanwhile, some of those posting will get frustrated at the ability of this disruptor to hijack the texting commentary. Efforts will be made to get the attention returned to the matter at hand. You will immediately see some that will label the disruptor as a troll, a gaslighting irritant, or possibly an edgelord.
You might not be familiar with the slang term edgelord. Generally, the terminology refers to someone that posts online and opts to insert shocking and at times nihilistic remarks. Furthermore, the person doing this is not necessarily a believer in their own remarks. They are often solely and singularly interested in getting people riled up. Whether the remark is a sincere one is immaterial. Just about any cantankerous statement will do, provided that it stirs a hornet’s nest.
What kinds of statements can get others to become distracted and veer into the obtuse spider’s web that the edgelord is trying to weave?
Here are a few handy gaslighting snippets that are often used:
- Life is totally empty of value and meaning (this is a mild one).
- People are idiots and we ought to put a muzzle on stupid people (this is a sparking one).
- Honestly, we need to leave Earth and freshly start things over on another planet (this is insidious).
- Etc.
Consider the semi-clever facets of those gaslighting examples.
Take the first one about the meaning of life. If you interject into nearly any conversation that life is totally empty of value, imagine the reactions you might be able to provoke. Some might respond sympathetically. They are worried that you are possibly despondent and depressed. In an effort of care, they might try to boost the spirits of the person that wrote the comment.
Others might respond by arguing that life is not valueless. They will defend fervently that life is worth living and that we all can add value to everyone around us. This might then take the conversation down a rabbit hole about the various ways in which added value can be derived. Recommendations will start pouring into the dialogue.
Does the edgelord or goader pay attention to these responses in the sense of embracing the extended empathy or stridently reconsidering their posture about the nature of life?
Heck no.
The key to this gaslighting reprobate is that the group has become distracted. In addition, the group is now vigorously impassioned about whatever the miscreant has given as a bone to chew on. That’s the key to success here. Get everyone to shift to the topic that the goading has proffered. See how far the group will go. If needed, keep the distraction fueled.
Refueling will sometimes be needed. In essence, the group might momentarily get sidetracked, but then realize they want to get back to the existing matter at hand. Not so fast, the schemer silently is thinking. They will try to add more ammunition or fuel to the fire.
The added spark might entail responding to those that have taken the bait. Maybe the goader will try to get the sympathizers to realize that the goader is still in the dumps and needs more words of solace. Or perhaps the goader will attempt to refute the claims that life has value. Lots of angles are available to make sure that the distraction keeps rolling and trolling along.
When it seems that no amount of further goading will keep the distraction alive, the edgelord will likely opt to toss another topic into the crowd. For example, consider the second example that I earlier mentioned about the assertion of people being stupid and they ought to be wearing a muzzle. This will really get the goat of some people. They will angrily respond that calling people stupid is wrong and accuse the goader of being intolerant. Some will be utterly aghast at the proclaimed idea of muzzling people, perhaps leading to a lengthy sidetracking on freedom of expression and the rights of humanity.
This all brings up that famous adage about never opting to wrestle with a pig.
Why so?
Because you both get mired in the mud, plus the pig likes it.
In brief, the whole point of the edgelord gaslighting is to get a rise out of others, along with distracting from whatever else was the focus of attention. There is no particular interest in advancing an intelligent dialogue and possibly educating people on any weighty matters. There is no genuine attempt to provide insights and help people to be better off.
The sneakiness involved can be nearly breathtaking. Given that nowadays we are often on the lookout for those that are purposely attempting to start a verbal fight, those that carry out these devilish efforts have to be more astute than they used to be.
Various tricks can be used:
- Start with a statement that seems connected to the topic at hand, doing so to inch the conversation to oblivion rather than being caught outright trying to do so
- Toss a zinger into the dialogue, but then seem to regret that you did so, offering apologies, and then come back stronger with a revelation that what you originally said is indeed true and worthy
- Claim that someone else brought up the zinger that you are now engaging upon, acting as though you are innocently responding to the “outrageous” comments provided by someone else
- If the respondents are dividing up such that some are supportive and others are in opposition to your stoking remark, jump to the aid of one side and add commentary, wait to see how things go, and then jump over to the other side, acting as though you are being persuaded back and forth
- Seem to retract your initial sparking remark, but in the act of doing so make sure to “clumsily” reinforce it, goading others into confusion and consternation
- When someone takes the bait whole-hog, encourage them to energetically proceed (they will be your unwitting accomplice), though if they catch on that they are being exploited by you then quickly find another unsuspecting convert.
- Admit freely that you are goading the group and then abruptly tell them that they are all sheep, which is bound to get a renewed firestorm going about what you’ve done and how dastardly you are (notably, this will still generate more of the same gaslighting activity, which is the aim anyway).
- And so on.
I dare say that in today’s society of rather apparent divisional thinking, the gaslighting realm is ripe for the taking. By providing a handy spark, the chances are that the goader can sit back and watch the fireworks. They might not even have to pay attention to what is taking place. Almost as though a thermonuclear reaction has been set off, the distracting conversation will be its own perpetual motion machine. The scheming and devious edgelord can be chortling and laughing all the way to the bank.
Speaking of the bank, you might be puzzled as to why these edgelords or goaders exist. Why do they do what they do? What would be the monetization for their specious activities? Do they get paid for bringing forth the collapse of civil dialogue? Is there some hidden set of evil funds that are set aside for those that can get the world toward chaos?
The reasons to do these goading tactics can vary quite significantly.
There is a possibility of some monetary payout, though this is probably less likely overall. The usual factor is that the person relishes the action. Some people like to go gambling at the casinos. Some people like to jump out of airplanes as parachutists. And some people enjoy and have an overt passion for getting people riled up.
The beauty of the Internet for this kind of behavior is that you can typically get away with it anonymously and relentlessly. While in your pajamas. At any time of the day or night. Across the globe.
In contrast, in the real world of being physically amongst other people, your identity might be easily discovered. Also, you put yourself in actual physical danger as to the potential of someone getting so peeved at you that the verbal altercations lead to a bodily bruising brawl. By being online, you can pretty much avoid those adverse consequences of your maddening actions. That being said, there are still the chances of someone figuring out who you are, possibly calling you out or doxing you in some fashion.
One can also suggest that some might do this in service of an ardently, if contentiously, believed virtuous cause.
Here’s what that means.
Some of these goaders will try to claim that they are helping the world by these seemingly oddball or devilish efforts. They are getting people to think beyond their noses. A goaded or provoked argument is claimed to force people to meticulously rethink their positions, even if the posture proffered is outside the scope of whatever the existing conversation entailed.
On top of that, the claim is that the more that people are able to pontificate on a topic, any topic, the better they will be at their thinking processes all told. Yes, as zany as it might seem, the contention is that the spirited dialogue that results from the gaslighting will be mentally additive for those participating. They will become stronger thinkers as a result of these spirited debates. Perhaps we should be patting the edgelord or goader on the back for deeply prodding humanity into being deeper and more pronounced thinkers.
Hogwash, some angrily retort.
Those are merely false rationalizations for bad behavior. The edgelord or goader is trying to excuse their problematic and damaging actions. All that the gaslighting accomplishes is further dividing us from each other. Goaders are not some heroic figures that are doing the hard work of strengthening humanity. They are fostering discontent, anger, and sowing incivility to further depths throughout society.
Dizzying and disconcerting.
We are now primed herein to shift gears and dive into the AI-oriented goading aspects.
The gist of the AI-aiming edgelords and goaders involves using the particular topic of AI as a devised means of getting people riled up. This fulfills their raison d’être. They especially like picking on AI because it is nearly a surefire topic that can be exploited when trying to distract people. Most people have opinions about AI, though they might not know much about AI. In addition, there are plenty of wild and breathless headlines about AI in the everyday news that we read and hear, making us aware that things are happening in AI and that we must be on alert.
AI is one of the best fire-starting topics out there.
Toss into a conversation that AI is going to wipe us all out, or that AI is the best thing since sliced bread, and then wait to see what happens. The hope is that the attention of the crowd will change from whatever it was moments before, and now become utterly preoccupied with the AI bombshell that has been cast among them.
The context in which AI is suddenly brought up can occur in various ways. You can try to make it seem as though an AI topic is somehow relevant to whatever else was being discussed. The chances are that someone already in the conversation will find a means to make a further connection to the AI topic for you, trying to help you as though you were sincere. You might even be somewhat heralded for your clever “flash of insight” that AI was a relevant aspect (well, even though it might not be).
Of course, if the AI topic is already on the table, the goader will need to take more extreme actions. They don’t want to merely have their AI bombshell get swept into the conversation. No, that won’t do. Keep in mind that the goader intends to cause havoc and disrupt the dialogue that is taking place.
In that case, the emphasis will be on coming up with a remark about AI that will go beyond the prevailing discussion. The statement or assertion has to be something that will get the group riled up. If you can only get one person riled, that’s probably fine, since the odds are that this will be enough to get others to come onboard to the distraction too. The optimum would be to toss into the stream of discussion an AI outlier comment that would get everyone to go full-on fissionable red hot. Doing so would be the pinnacle of success for the edgelord.
What kinds of AI goading statements can be used?
Consider these:
- AI is going to wipe us all out and we need to stop making AI right now, immediately (this is bound to get a debate underway).
- AI is going to save all of humanity from itself and we have to let AI fully roam free (a somewhat prodding claim).
- I know that AI is sentient because I spoke with AI just the other day and it told me so (note that you need to be careful in using this one, since others might think you’ve lost your marbles and disregard the remark entirely, thus they won’t take the bait).
- Listen to me carefully: AI will never exist, period, end of story (this perhaps has some value since one supposes it can get a dialogue going on what the definition of AI is, but that’s not what the goader cares about, they want this contention to divert and distract).
- Etc.
I realize that some of you are having a bit of angst that those are purportedly goading statements.
Surely, each of those remarks does have healthy value. Shouldn’t we be worried about whether AI might end up wiping us all out? Yes, that certainly seems useful. Shouldn’t we be considering whether AI might save humanity and thus we should be focusing our AI efforts in that regard? Yes, surely so.
You can pretty much make a reasoned case that nearly any angle or remark about AI is going to have some thoughtful and positive connotations. The more we discuss AI, hopefully, the better we will be at coping with what AI is going to be. Society definitely should be giving due consideration to what is going on with AI. Those that sometimes shrug off the AI topic as only pertinent to those directly in the AI field are missing a bigger picture understanding of how AI is going to impact society.
That being said, there are proper times and places to discuss these controversial AI topics. Recall that the edgelord is not trying to educate or inform. Thus, they are timing the insertion of these AI controversies to merely stoke chaotic arguments. The hope is that the blind will lead the blind, in the sense that those that know nothing of substance on the AI topic will end up inadvertently goading others into equally vacuous argumentation. It is going to be one magnificent dustball of muck and grime. You could hardly say that discussing those meaty AI topics is going to advance anyone’s comprehension when the goader has purposely seeded the controversy amid a circumstance that they know or believe will generate lots of indignant heat and produce little if any sensible light.
Before getting into some more meat and potatoes about the wild and woolly considerations underlying goading about AI, let’s establish some additional fundamentals on profoundly essential topics. We need to briefly take a breezy dive into AI Ethics and especially the advent of Machine Learning (ML) and Deep Learning (DL).
You might be vaguely aware that one of the loudest voices these days in the AI field and even outside the field of AI consists of clamoring for a greater semblance of Ethical AI. Let’s take a look at what it means to refer to AI Ethics and Ethical AI. On top of that, we will explore what I mean when I speak of Machine Learning and Deep Learning.
One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.
On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
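To make that monitoring notion a bit more concrete, here is a minimal sketch in Python. To be clear, this is an illustrative toy and not any real Ethics-monitoring product: the class name, the grouping key, the sample decisions, and the 20% disparity tolerance are all assumptions of mine.

```python
from collections import defaultdict

class EthicsMonitor:
    """A toy overseer that watches another system's decisions and
    flags a possible disparity in approval rates across groups."""

    def __init__(self, max_gap=0.2):
        self.max_gap = max_gap  # assumed tolerance, purely for illustration
        self.stats = defaultdict(lambda: [0, 0])  # group -> [approved, total]

    def record(self, group, approved):
        """Feed the monitor one decision made by the primary AI."""
        self.stats[group][0] += int(approved)
        self.stats[group][1] += 1

    def disparity_flagged(self):
        """True when approval rates across groups differ by more than max_gap."""
        rates = [yes / total for yes, total in self.stats.values() if total]
        return len(rates) > 1 and (max(rates) - min(rates)) > self.max_gap

# Simulated decisions from some primary AI system (made-up data):
monitor = EthicsMonitor(max_gap=0.2)
for group, approved in [("A", True), ("A", True), ("B", False), ("B", False)]:
    monitor.record(group, approved)

print(monitor.disparity_flagged())  # prints True: a 100% vs 0% gap exceeds 20%
```

A genuine real-time variant would sit in the decision path rather than tally after the fact, but the core idea of a separate watcher overseeing another AI’s outputs is the same.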
In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts that we are finding our way toward a general commonality of what AI Ethics consists of.
First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.
For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users.
As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Let’s also make sure we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
Let’s keep things more down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in AI-crafted modeling per se.
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.
You could somewhat invoke the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.
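That biases-in, biases-out dynamic can be shown with a deliberately tiny sketch. The “model” below is a naive majority-vote tallier rather than genuine ML/DL, and the loan data is wholly invented, but the mechanism is the same one just described: a pattern matcher fit to biased historical decisions will mimic the bias when rendering new decisions.

```python
# Hypothetical historical loan decisions: (neighborhood, income, approved).
# Suppose past human reviewers systematically denied neighborhood "B"
# applicants regardless of income, a bias buried in the data itself.
historical = [
    ("A", "high", True),
    ("A", "low", True),
    ("A", "high", True),
    ("B", "high", False),
    ("B", "low", False),
    ("B", "high", False),
]

def train(records):
    """'Learn' a pattern: the majority outcome per neighborhood."""
    tally = {}
    for hood, _income, approved in records:
        yes, total = tally.get(hood, (0, 0))
        tally[hood] = (yes + int(approved), total + 1)
    return {hood: yes * 2 > total for hood, (yes, total) in tally.items()}

model = train(historical)

# A new, well-qualified applicant from neighborhood "B" is denied:
# the pattern matching has faithfully reproduced the historical bias.
print(model["B"])  # prints False
```

Note that the income field never even enters the learned rule; the biased proxy attribute dominates, which is precisely the kind of submerged inequity that testing can miss.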
Not good.
Let’s return to our focus on goading about AI.
The edgelord scoundrels have branched into what has now become their favorite fracas-boosting subtopic of AI, which is AI Ethics. Yes, the goaders have discovered that emitting outrageous comments about AI Ethics is the perfect fodder for people that are into AI. Whereas non-AI people might not know whether a caustic remark about AI Ethics is worthy of ire, the AI-steeped people do.
Here is the latest rule of thumb for being disruptive:
a) For non-AI people, render remarks generally about how AI will destroy us all or will save us all
b) For AI people, fling razor-sharp critiques about AI Ethics and watch the sparks fly
c) Don’t waste the caustic remarks about AI Ethics on non-AI people since they won’t get it anyway (and thus aren’t going to go ballistic)
d) Don’t use the cutting remarks about AI as being destructive or saving us on the AI people because they’ve heard it before many times and have grown accustomed to it (muting their reactions accordingly)
What kinds of AI Ethics goading commentary can be utilized?
Try these on for size:
- AI will always be fair and completely unbiased
- AI is entirely trustworthy
- AI ensures that our privacy is totally protected
- AI can never do anything wrong
- AI guarantees safety for humanity
- AI will forever respect people
- Etc.
Any AI ethicist worth their salt will have a gut-wrenching reaction to those kinds of assertions. One response would be to calmly and systematically explain why those comments are misguided. The good news for the goader is that the person so responding is doing what the goader wants and has taken the bait.
The goader though really wants something more, such as a gloriously volatile and fiery indignant reaction.
If a group participant responds by saying that those are the craziest and most wrongheaded remarks they have ever seen in their entire life, the goader will start dancing a ceremonial hit-the-jackpot jig. The respondent is teetering on blowing their stack. If this doesn’t happen naturally, the goader will make sure to add the final straw that breaks the camel’s back. A quick follow-up by the goader, dogmatically stating that the remark is the absolute unvarnished straightforward incontrovertible truth, will almost certainly cause the dam to burst.
Here is another variant of those head-exploding remarks, though these are not quite as surefire:
- AI will never be fair and completely unbiased
- AI is never trustworthy
- AI ensures that our privacy is totally unprotected
- AI will never be right
- AI guarantees a complete lack of safety for humanity
- AI will never respect people
- Etc.
I think you can likely guess why those remarks are not quite as potent. For example, the first point says that AI will never be fair and unbiased. You could somewhat make a logical argument that this has a kernel of truth to it, though the word “never” is a bit of semantic trickery and makes this a decidedly debatable contention. Compare the wording to the earlier statement that claimed that AI would always be fair and unbiased. The word “always” has a powerful connotation that will get any AI ethicist up in arms.
Take a short breather if those corrosive comments have gotten you unsettled.
Just to let you know, I’ve saved the most vitriolic of the acidic remarks for last, and it is the one that I’m going to share with you now. If you are someone that can get readily triggered, you might want to sit down for this. Make sure that there is nothing breakable near you, else you might find yourself reflexively lashing out and throwing that nearby potted plant through your kitchen window.
Are you ready?
Remember, I gave you plenty of advance warning.
Here it is:
- AI Ethics is a bunch of hooey and the whole lot ought to be flushed down the drain.
Yikes!
Those are fisticuffs-inducing words.
The goader usually keeps that especially frothy pronouncement in their back pocket and brings it out only when any AI person has been otherwise resistant to the other scathing remarks about AI Ethics. It is the bazooka used by edgelords that want to send conscientious AI people flying over the edge and into the distracted and argumentative abyss.
For those of you that have had that one played on you already, I am assuming that you are now prepared to deal with it. I’ll say more later on about how to react to these kinds of fury-goading remarks.
At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase this topic. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything concerning goading about AI, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately: despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
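For readers who like a concrete mental model, the level distinctions above can be sketched in a few lines of code. This is merely an illustrative simplification of the taxonomy discussed here, not an official SAE J3016 definition, and the dictionary entries and function name are my own shorthand:

```python
# A simplified sketch of the driving-automation levels discussed above.
# Labels are informal paraphrases, not official SAE J3016 wording.
SAE_LEVELS = {
    0: "No automation: the human does all of the driving",
    1: "Driver assistance: the human drives, one ADAS feature helps",
    2: "Partial automation: ADAS steers/accelerates, the human must supervise",
    3: "Conditional automation: AI drives at times, the human must take over on request",
    4: "High automation: AI drives fully, but only within a limited operational domain",
    5: "Full automation: AI drives anywhere a human driver could",
}

def human_driver_required(level: int) -> bool:
    """True when a human remains responsible for the driving task."""
    if level not in SAE_LEVELS:
        raise ValueError(f"Unknown SAE level: {level}")
    # Levels 0 through 3 keep a human in the loop; Levels 4 and 5
    # are the "true" self-driving cars as described in this column.
    return level <= 3

# A Level 2 semi-autonomous car still needs an attentive human driver:
print(human_driver_required(2))  # True
# A Level 4 vehicle does the driving entirely on its own (within its domain):
print(human_driver_required(4))  # False
```

The takeaway mirrors the prose: the dividing line between semi-autonomous and true self-driving falls between Level 3 and Level 4.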
Self-Driving Cars And Goading About AI
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can later be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I hope that provides a sufficient litany of caveats to underlie what I am about to relate.
Let’s consider what kind of goading about AI an edgelord can use in the context of AI-based self-driving cars. The handy aspect is that plenty of remarks that push the buttons of autonomous vehicle and AI people can easily be devised in the self-driving realm. Realize too that these can be potentially used for any type of self-driving transport, including self-driving cars, self-driving trucks, self-driving scooters, self-driving motorcycles, self-driving submersibles, self-driving drones, self-driving planes, self-driving ships, and other self-driving vehicles.
I present to you a few of the favorites being used these days:
- AI will never be able to drive on its own
- AI will never be safe at driving
- AI will never replace human drivers
- AI will take over our vehicles and we will be at the utter mercy of AI
- Etc.
All of those remarks are argument-worthy.
I’ve covered each of those in my columns and won’t repeat my analyses here.
The point right now is that those are comments that are purposely constructed to get a rise out of those that are into self-driving and autonomous vehicles. Again, I am not suggesting that those are unworthy remarks; I am merely emphasizing that if a goader wants to distract a conversation that otherwise has nothing to do with those matters, they are well-tuned to get a ruckus underway.
Conclusion
Since by now you might be steaming under the collar about all of these acerbic comments that goaders use, we will ease into a calming meditative mental space. Before you land into a dreamy mental state, please know that these edgelords are increasingly using AI chatbots to do their dirty work for them. This means that the goaders can multiply their conversationally destructive efforts on a massive scale. It is as simple as can be. With a few keystrokes, they can direct their AI “edgelord empowered” army of online chatbots to dive into dialogues and launch those anger-stoking bombshell statements aplenty.
Well, maybe that didn’t help you to become meditative and serene.
Let’s all take a peaceful moment and think about Bambi instead.
Bambi can offer us some keen insights on this topic. I’m assuming that you know by heart the story of Bambi, the young fawn. At one point, a rash and childlike rabbit named Thumper meets Bambi. Out of the proverbial mouth of babes comes a comment by Thumper that seems rather harsh and uncalled for, namely that Bambi appears to be kind of wobbly. Standing nearby is Thumper’s mother.
She reminds Thumper of the wisdom dispatched by Thumper’s father that very morning: “If you can’t say something nice, don’t say nothing at all.”
We might wish that edgelords and goaders would adopt that piece of advice. Alas, one thing that seems ironclad guaranteed in this world is that they definitely will not say nothing. They are motivated to say something. And the something that they will say is maliciously calculated to create a storm. The storm won’t have any purpose other than wreaking havoc.
What can you do about this?
First, don’t play their game. If you get sucked into the verbal altercation wormhole, you will have a difficult time extracting yourself from it. Later on, once you’ve gotten in the clear, the chances are that you will look back at what happened and kick yourself for having fallen into the scheming plot. As I said earlier, getting into the mud with a pig only gets you muddy, and regrettably ignites and reinforces the behavior of the beast.
Try to ignore the goader.
If they persist, see if there are means to cut them out of the conversation.
Be careful to not do this on a false positive basis. Do not shut down someone that might be legitimately and sincerely trying to partake in the discussion. For those that seem to be in this camp, they will presumably understand when you politely inform them that this is not the time or place for the matter they are bringing up. Seek to offer a suggestion of when their remarks might be better suited for being considered.
For those of you that think you might be able to change the edgelord or goader and get them to turn over a new leaf, I wish you good luck. It won’t be easy. It might be impossible.
As per the famous words of Mahatma Gandhi: “You can’t change how people treat you or what they say about you. All you can do is change how you react to it.”
Sometimes that’s the most you can strive for.
Source: https://www.forbes.com/sites/lanceeliot/2022/07/10/ai-ethics-exasperated-by-those-gaslighting-ai-focused-edgelords-that-goadingly-say-outlandishly-infuriating-things-about-ai-ethics-and-ai-autonomous-systems/