Here’s Why Businesses Are Having A Tumultuous Love-Hate Relationship With AI Ethics Boards

Should a business establish an AI Ethics advisory board?

You might be surprised to learn that the question does not have an easy yes-or-no answer.

Before I get into the complexities underlying the pros and cons of putting in place an AI Ethics advisory board, let’s make sure we are all on the same page as to what an AI Ethics advisory board consists of and why it has risen to headline-level prominence.

As everyone knows, Artificial Intelligence (AI) and the practical use of AI for business activities have gone through the roof as a must-have for modern-day companies. You would be hard-pressed to argue otherwise. To some degree, the infusion of AI has made products and services better, plus at times led to lower costs associated with providing said products and services. A nifty list of efficiency and effectiveness boosts can potentially be attributed to the sensible and appropriate application of AI. In short, adding or augmenting what you do by incorporating AI can be a quite profitable proposition.

There is also the, shall we say, big splash that comes with adding AI to your corporate endeavors.

Businesses are loud and proud about their use of AI. If the AI just so happens to also improve your wares, that’s great. Meanwhile, claims of using AI are sufficiently attention-grabbing that you can pretty much be doing the same things you did before, yet garner a lot more bucks or eyeballs by tossing around the banner of AI as being part of your business strategy and out-the-door goods.

That last point about sometimes fudging a bit about whether AI is really being used gets us edging into the arena of AI Ethics. There is all manner of outright false claims being made about AI by businesses. Worse still, perhaps, is the use of AI that turns out to be the so-called AI For Bad.

For example, you’ve undoubtedly read about the many instances of AI systems using Machine Learning (ML) or Deep Learning (DL) that have ingrained racial biases, gender biases, and other improper discriminatory practices. For my ongoing and extensive coverage of these matters relating to adverse AI and the emergence of clamoring calls for AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
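
To make that concrete, here is a minimal, hypothetical sketch (not drawn from this article) of one common way a team might screen a trained model for the kind of group bias described above: a disparate impact check that compares favorable-outcome rates across a protected attribute. The data, column names, and the 0.8 ("four-fifths") threshold are purely illustrative assumptions.

```python
# Illustrative sketch: flag potential group bias in model predictions.
# The dataset, column names, and threshold are hypothetical.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's favorable-outcome rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical predictions from a loan-approval model (1 = approved).
preds = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

ratio = disparate_impact(preds, "group", "approved")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # widely cited four-fifths rule of thumb
    print("Potential bias flagged; escalate for an Ethical AI review.")
```

A check like this is merely a starting point; the thornier question of what to do once bias is flagged is exactly where AI Ethics oversight comes into play.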

So, we have these sour drivers hidden within the seemingly all-rosy use of AI by businesses:

  • Hollow claims of using AI when in fact there is no AI or insignificant AI infusion
  • False claims about AI use that are intentionally devised to mislead
  • Inadvertent inclusion of AI that turns out to imbue improper biases and is discriminatory
  • Purposefully shaped AI to promulgate bad biases and despicable discriminatory actions
  • Other

How do these kinds of thoughtless or disgraceful practices arise in companies?

One notable piece of the puzzle is a lack of AI Ethics awareness.

Top executives might be unaware of the very notion of devising AI that abides by a set of Ethical AI precepts. The AI developers in such a firm might have some awareness of the matter, though perhaps they are only familiar with AI Ethics theories and do not know how to bridge the gap in day-to-day AI development endeavors. There is also the circumstance of AI developers who want to embrace AI Ethics but then get strong pushback from managers and executives who believe that doing so will slow down their AI projects and bump up the costs of devising AI.

A lot of top executives do not realize that a lack of adherence to AI Ethics is likely to end up kicking them and the company in the posterior upon the release of AI that is replete with thorny and altogether ugly issues. A firm can get caught with bad AI in its midst that then woefully undermines the otherwise long-time built-up reputation of the firm (reputational risk). Customers might choose to no longer use the company’s products and services (customer loss risk). Competitors might capitalize on this failure (competitive risk). And there are lots of attorneys ready to aid those who have been wronged, aiming to file hefty lawsuits against firms that have allowed rotten AI into their company wares (legal risk).

In brief, the ROI (return on investment) for making suitable use of AI Ethics almost certainly outweighs the downstream costs associated with sitting atop a stench of bad AI that should not have been devised nor released.

Turns out that not everyone has gotten that memo, so to speak.

AI Ethics is only gradually gaining traction.

Some believe that inevitably the long arm of the law might be needed to further inspire the adoption of Ethical AI approaches.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have distinctive laws to govern various development and uses of AI. New laws are indeed being bandied around at the international, federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a measured one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here and the link here.

Let’s make sure we are all on the same page about what the basics of AI Ethics contain.

In my column coverage, I’ve previously discussed various collective analyses of AI Ethics principles, such as this assessment at the link here, which proffers a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those who manage AI development efforts, and even those who ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the norms of Ethical AI that are being established. This is an important highlight since the usual assumption is that “only coders” or those who program the AI are subject to adhering to the AI Ethics notions. It takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
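
To make this a bit more tangible, here is a minimal, hypothetical sketch of how a team might encode the keystone criteria above as a pre-release review checklist that travels with each AI project. The principle names come from the list earlier in this piece; the sign-off structure and gating logic are illustrative assumptions, not an established standard.

```python
# Hypothetical sketch: the keystone Ethical AI criteria as a per-project
# review checklist. The gating logic is illustrative, not a standard.
from dataclasses import dataclass, field

PRINCIPLES = [
    "Transparency", "Justice & Fairness", "Non-Maleficence", "Responsibility",
    "Privacy", "Beneficence", "Freedom & Autonomy", "Trust",
    "Sustainability", "Dignity", "Solidarity",
]

@dataclass
class EthicsReview:
    project: str
    # Each principle must be explicitly signed off (True) before release.
    signoffs: dict[str, bool] = field(
        default_factory=lambda: {p: False for p in PRINCIPLES}
    )

    def outstanding(self) -> list[str]:
        """Principles that still lack an explicit sign-off."""
        return [p for p, ok in self.signoffs.items() if not ok]

    def ready_for_release(self) -> bool:
        return not self.outstanding()

# Usage: the checklist belongs to the whole project team, not just the coders.
review = EthicsReview(project="loan-approval-model-v2")
review.signoffs["Transparency"] = True
print(review.ready_for_release())  # False: ten precepts still lack sign-off
print(review.outstanding()[:3])    # e.g., ['Justice & Fairness', 'Non-Maleficence', ...]
```

The point of such a sketch is simply that every stakeholder, not just the programmers, can see at a glance which precepts still lack an explicit sign-off.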

One means of introducing and keeping sustained attention on the use of AI Ethics precepts is to establish an AI Ethics advisory board.

We will unpack the AI Ethics advisory board facets next.

AI Ethics Boards And How To Do Them Right

Companies can be at various stages of AI adoption, and likewise at differing stages of embracing AI Ethics.

Envision a company that wants to get going on embracing AI Ethics but isn’t sure how to do so. Another scenario might be a firm that has already dabbled with AI Ethics but seems unsure of what needs to be done in furtherance of the effort. A third scenario could be a firm that has been actively devising and using AI and internally has done a lot to embody AI Ethics, though it realizes there is a chance it is missing out on other insights, perhaps due to internal groupthink.

For any of those scenarios, putting in place an AI Ethics advisory board might be prudent.

The notion is rather straightforward (well, to clarify, the overall notion is the proverbial tip of the iceberg and the devil is most certainly in the details, as we will momentarily cover).

An AI Ethics advisory board typically consists primarily of external advisors that are asked to serve on a special advisory board or committee for the firm. There might also be some internal participants included on the board, though usually the idea is to garner advisors from outside the firm who can bring a semi-independent perspective to what the company is doing.

I say semi-independent since there are undoubtedly going to be some potential independence conflicts that can arise with the chosen members of the AI Ethics advisory board. If the firm is paying the advisors, the obvious question arises of whether the paid members feel reliant on the firm for a paycheck or might be uneasy about criticizing the hand that feeds them. On the other hand, businesses are used to making use of outside paid advisors for all manner of considered independent opinions, so this is somewhat customary and expected anyway.

The AI Ethics advisory board is usually asked to meet periodically, either in person or on a virtual remote basis. They are used as a sounding board by the firm. The odds are, too, that the members are being provided with various internal documents, reports, and memos about the efforts afoot related to AI at the firm. Particular members of the AI Ethics advisory board might be asked to attend internal meetings as befitting their specific expertise. Etc.

Besides being able to see what is going on with AI within the firm and provide fresh eyes, the AI Ethics advisory board usually has a dual role of being an outside-to-inside purveyor of the latest in AI and Ethical AI. Internal resources might not have the time to dig into what is happening outside of the firm, and ergo the firm can get keenly focused and tailored state-of-the-art viewpoints from the AI Ethics advisory board members.

There are also the inside-to-outside uses of an AI Ethics advisory board.

This can be tricky.

The concept is that the AI Ethics advisory board is utilized to let the outside world know what the firm is doing when it comes to AI and AI Ethics. This can be handy as a means of bolstering the reputation of the firm. The AI-infused products and services might be perceived as more trustworthy due to the golden seal of approval from the AI Ethics advisory board. In addition, calls for the firm to be doing more about Ethical AI can be somewhat blunted by pointing out that an AI Ethics advisory board is already being utilized by the company.

Questions that a firm utilizing such a mechanism typically brings to its AI Ethics advisory board include:

  • Should the firm be using AI for a particular product or service, or does that seem overly troubling?
  • Is the firm taking into account the full range of AI Ethics considerations in their AI efforts?
  • Has the firm fallen into groupthink and become unwilling or unable to see potentially disturbing AI Ethics downfalls awaiting these efforts?
  • What kinds of latest approaches to AI Ethics ought the firm be seeking to adopt?
  • Would it be feasible to proffer external acclaim for our AI Ethics efforts and the commitment thereto?
  • Other

Tapping into an AI Ethics advisory board assuredly makes sense and firms have been increasingly marching down this path.

Please be aware that there is another side to this coin.

On one side of the coin, AI Ethics advisory boards can be the best thing since sliced bread. Do not neglect the other side of the coin, namely that they can also be a monumental headache and you might regret that you veered into this dicey territory (as you’ll see in this discussion, the downsides can be managed, if you know what you are doing).

Companies are beginning to realize that they can find themselves in a bit of a pickle when opting to go the AI Ethics advisory board route. You could assert that this machination is somewhat akin to playing with fire. You see, fire is a very powerful element that you can use to cook meals, protect you from predators whilst in the wilderness, keep you warm, bring forth light, and provide a slew of handy and vital benefits.

Fire can also get you burned if you aren’t able to handle it well.

There have been various news headlines of recent note that vividly demonstrate the potential perils of having an AI Ethics advisory board. If a member summarily decides that they no longer believe the firm is doing the right Ethical AI activities, the disgruntled member might quit in a huge huff. Assuming the person is well-known in the AI field or industry all told, their jumping ship is bound to catch widespread media attention.

A firm then has to go on the defense.

Why did the member leave?

What is the company nefariously up to?

Some firms require that the members of the AI Ethics advisory board sign NDAs (non-disclosure agreements), which seemingly will protect the firm if the member decides to go “rogue” and trash the company. The problem though is that even if the person remains relatively silent, there is nonetheless a likely acknowledgment that they no longer serve on the AI Ethics advisory board. This, by itself, will raise all kinds of eyebrow-raising questions.

Furthermore, even if an NDA exists, sometimes the member will try to skirt around the provisions. This might include referring to unnamed wink-wink generic “case studies” to highlight AI Ethics anomalies that they believe the firm was insidiously performing.

The fallen member might be fully brazen and come out directly naming their concerns about the company. Whether this is a clear-cut violation of the NDA is perhaps less crucial than the fact that word is being spread of Ethical AI qualms. A firm that tries to sue the member for breach of the NDA can land itself in brutally hot water, stoking added attention to the dispute and setting up the classic David versus Goliath duel (the firm being cast as the large “monster”).

Some top execs assume that they can simply reach a financial settlement with any member of the AI Ethics advisory board who feels the firm is doing the wrong things, including ignoring or downplaying voiced concerns.

This might not be as easy as one assumes.

Oftentimes, the members are devoutly ethically minded and will not readily back down from what they perceive to be an ethical right-versus-wrong fight. They might also be financially stable and unwilling to compromise their ethical precepts, or they might have other employment that remains untouched by their having left the AI Ethics advisory board.

As might be evident, some later realize that an AI Ethics advisory board is a double-edged sword. There is tremendous value and important insight that such a group can convey. At the same time, you are playing with fire. It could be that a member or members decide they no longer believe that the firm is doing credible Ethical AI work. The news has at times reported an entire AI Ethics advisory board quitting all at once, or a preponderance of the members announcing they are leaving.

Be ready for the good and the problems that can arise with AI Ethics advisory boards.

Of course, there are times that companies are in fact not doing the proper things when it comes to AI Ethics.

Therefore, we would hope and expect that an AI Ethics advisory board at that firm would step up to make this known, presumably internally within the firm first. If the firm continues on the perceived bad path, the members would certainly seem ethically bound (possibly legally too) to take other action as they believe is appropriate (members should consult their personal attorney for any such legal advice). It could be that this is the only way to get the company to change its ways. A drastic action by a member or set of members might seem to be the last resort that the members hope will turn the tide. In addition, those members likely do not want to be part of something that they ardently believe has gone astray from AI Ethics.

A useful way to consider these possibilities is this:

  • The firm is straying, member opts to exit due to a perceived lack of firm compliance
  • The firm is not straying, but member believes the firm is and thus exits due to a perceived lack of compliance

The outside world won’t necessarily know whether the member that exits has a bona fide basis for concern about the firm or whether it might be some idiosyncratic concern or misimpression on the part of the member. There is also the rather straightforward possibility of a member leaving the group due to other commitments or for personal reasons that have nothing to do with what the firm is doing.

The gist is that it is important for any firm adopting an AI Ethics advisory board to mindfully think through the entire range of life cycle phases associated with the group.

With all that talk of problematic aspects, I don’t want to convey the impression that you should steer clear of having an AI Ethics advisory board. That is not the message. The real gist is to have an AI Ethics advisory board and make sure you do so the right way. Make that into your cherished mantra.

Here are some of the oft-mentioned benefits of an AI Ethics advisory board:

  • Have at hand a means to bounce AI projects and ideas off of a semi-independent private group
  • Leverage expertise in AI Ethics that is from outside of the firm
  • Aim to avoid AI Ethics gaffes and outright disasters by the firm
  • Be a public relations booster for the firm and its AI systems
  • Break out of internal groupthink on AI and AI Ethics
  • Garner a fresh look at AI innovations and their practicality
  • Enhance the standing and stature of the firm
  • Serve as an unbridled voice for when the firm’s AI efforts are off-kilter
  • Other

Here are common ways that firms mess up and undercut their AI Ethics advisory board (don’t do this!):

  • Provide vague and confusing direction as to mission and purpose
  • Only sparingly consulted and often ill-timed after the horse is already out of the barn
  • Kept in the dark
  • Fed heavily filtered info that provides a misleading portrayal of things
  • Used solely as a showcase and for no other value-producing aim
  • Not permitted to do any semblance of exploration about internal matters
  • Bereft of sufficient resources to perform their work adequately
  • Lack of explicit leadership within the group
  • Lack of attention by the leadership of the firm regarding the group
  • Expected to give blind approval to whatever is presented
  • Haphazard as to members chosen
  • Treated with little respect and seemingly a mere checkmark
  • Other

Another frequently confounding issue involves the nature and demeanor of the various members serving on an AI Ethics advisory board, which can sometimes be problematic in these ways:

  • Some members might be only AI Ethics conceptualizers rather than versed in AI Ethics as a practice, and as such provide minimal business-savvy insight
  • Some can be bombastic when it comes to AI Ethics and are extraordinarily difficult to deal with throughout their participation
  • Infighting, often clashes of large egos, can become a significant distraction and cause the group to devolve into dysfunction
  • Some might be overly busy and overcommitted such that they are aloof from the AI Ethics advisory effort
  • Some have a deeply held unwavering opinion about AI Ethics that is inflexible and unrealistic
  • Some are prone to emotional rather than analytical and systematic consideration of the issues underlying AI Ethics
  • The group can be akin to the famous adage about herding cats, unable to focus and wandering aimlessly
  • Other

Some firms just seem to toss together an AI Ethics advisory board on a somewhat willy-nilly basis. No thought goes toward the members to be selected. No thought goes toward what they each bring to the table. No thought goes toward the frequency of meetings and how the meetings are to be conducted. No thought goes toward running the AI Ethics advisory board, all told. Etc.

In a sense, by your own lack of forethought, you are likely putting a train wreck in motion.

Don’t do that.

Perhaps this list of the right things to do is now ostensibly obvious to you based on the discourse so far, but you might be shocked to know that few firms seem to get this right:

  • Explicitly identify the mission and purpose of the AI Ethics advisory board
  • Ensure that the group will be given appropriate top exec level attention
  • Identify the type of members that would best be suited for the group
  • Approach the desired members and ascertain the fit for the group
  • Make suitable arrangements with the chosen members
  • Establish the logistics of meetings, frequency, etc.
  • Determine duties of the members, scope, and depth
  • Anticipate the internal resources needed for aiding the group
  • Allocate sufficient resources for the group itself
  • Keep the AI Ethics advisory board active and looped in
  • Have escalations preplanned for when concerns arise
  • Indicate how emergency or crisis-oriented occurrences will be handled
  • Rotate members out or in as needed for keeping the mix suitable
  • Beforehand have anticipated exit paths for members
  • Other

Conclusion

A few years ago, many of the automakers and self-driving tech firms embarking upon devising AI-based self-driving cars were suddenly prompted into action to adopt AI Ethics advisory boards. Until that point in time, there had seemed to be little awareness of having such a group. It was assumed that the internal focus on Ethical AI would be sufficient.

I’ve discussed at length in my column the various unfortunate AI Ethics lapses or oversights that have at times led to self-driving car issues such as minor vehicular mishaps, overt car collisions, and other calamities, see my coverage at the link here. The importance of AI safety and like protections has to be the topmost consideration for those making autonomous vehicles. AI Ethics advisory boards in this niche are helping to keep AI safety a vital top-of-mind priority.

My favorite way to express this kind of revelation about AI Ethics is to liken the matter to earthquakes.

Californians are subject to earthquakes from time to time, sometimes rather hefty ones. You might think that being earthquake prepared would be an ever-present consideration. Not so. The cycle works this way. A substantive earthquake happens and people get reminded of being earthquake prepared. For a short while, there is a rush to undertake such preparations. After a while, the attention to this wanes. The preparations fall by the wayside or are otherwise neglected. Boom, another earthquake hits, and all those that should have been prepared are caught “unawares” as though they hadn’t realized that an earthquake could someday occur.

Firms often do somewhat the same about AI Ethics advisory boards.

They don’t start one, and then suddenly, upon some catastrophe involving their AI, they are reactively spurred into action. They flimsily start an AI Ethics advisory board. It has many of the troubles I’ve earlier cited herein. The AI Ethics advisory board falls apart. Oops, a new AI calamity within the firm reawakens the need for the AI Ethics advisory board.

Wash, rinse, and repeat.

Businesses definitely find that they sometimes have a love-hate relationship with their AI Ethics advisory board efforts. When it comes to doing things the right way, love is in the air. When it comes to doing things the wrong way, hate ferociously springs forth. Make sure you do what is necessary to keep the love going and avert the hate when it comes to establishing and maintaining an AI Ethics advisory board.

Let’s turn this into a love-love relationship.

Source: https://www.forbes.com/sites/lanceeliot/2022/08/08/heres-why-businesses-are-having-a-tumultuous-love-hate-relationship-with-ai-ethics-boards/