Responsible AI Relishes Preeminent Boost Via AI Ethics Proclamation By Top Professional Society The ACM

Did you see or hear the news?

Another set of AI Ethics precepts has been newly proclaimed.

Raucous applause, if you please.

Then again, you might not have noticed it because so many other AI Ethics decrees have been floating around for a while now. Some are saying that the seemingly non-stop percolation of Ethical AI proclamations is becoming a bit numbing. How many do we need? Can anyone keep up with them all? Which one is the best? Are we perhaps going overboard on AI Ethics principles? And so on.

Well, in this particular case, I say that we ought to especially welcome this latest addition to the club.

I will insightfully explain why in a moment.

First, as clarification, I am referring to the AI Ethics precept set now known officially as “Statement On Principles For Responsible Algorithmic Systems” which was recently published by the ACM Technology Policy Council on October 26, 2022. Kudos go to the teams of experts that put this prized document together, including co-lead authors Jeanna Matthews (Clarkson University) and Ricardo Baeza-Yates (Universitat Pompeu Fabra).

Those of you in the know might upon close inspection realize that this document seems faintly familiar.

Good eye!

This latest incarnation is essentially an updated and expanded variant of the earlier joint “Statement On Algorithmic Transparency And Accountability” that was promulgated by the ACM US Technology Policy Committee and the ACM Europe Technology Policy Committee in 2017. Faithful readers of my columns might recall that I’ve from time to time mentioned the 2017 decree in my column coverage of key facets underlying AI Ethics and AI Law.

For my extensive and ongoing assessment and trending analyses of AI Ethics and AI Law, see the link here and the link here, just to name a few.

This latest statement by the ACM is notably important for several vital reasons.

Here’s why.

The ACM, which is a handy acronym for the Association for Computing Machinery, is considered the world’s largest computing-focused association. Comprising an estimated 110,000 members, the ACM is a longtime pioneer in the computing field. The ACM produces some of the topmost scholarly research in the computing field, and likewise provides professional networking that appeals to computing practitioners. As such, the ACM is an important voice representing the high-tech community and has strived enduringly to advance the computing field (the ACM was founded in 1947).

I might add a bit of a personal note on this too. When I first got into computers in high school, I joined the ACM and participated in their educational programs, especially the exciting chance to compete in their annual computer programming competition (such competitions are widely commonplace nowadays and typically labeled as hackathons). I remained involved in the ACM while in college via my local university chapter and got an opportunity to learn about leadership by becoming a student chapter officer. Upon entering industry, I joined a professional chapter and once again took on a leadership role. Later, when I became a professor, I served on ACM committees and editorial boards, along with sponsoring the campus student chapter. Even today, I am active in the ACM, including serving on the ACM US Technology Policy Committee.

I relish the ACM’s endearing and enduring vision of life-long learning and career development.

In any case, in terms of the latest AI Ethics statement, the fact that this has been issued by the ACM carries some hefty weight. You might reasonably assert that the Ethical AI precepts are the collective voice of a worldwide group of computing professionals. That says something right there.

There is also the aspect that others in the computing field will be inspired to perk up and take notice, in the sense of giving due consideration to what their fellow computing colleagues declare in the statement. Thus, even for those that aren’t in the ACM or do not know anything whatsoever about the revered group, there will hopefully be keen interest in discovering what the statement is about.

Meanwhile, those that are outside of the computing field might be drawn to the statement as a kind of behind-the-scenes insider look at what those into computers are saying about Ethical AI. I want to emphasize though that the statement is intended for everyone, not just those in the computer community, and therefore keep in mind that the AI Ethics precepts are across the board, as it were.

Finally, there is an added twist that few would consider.

Sometimes, outsiders perceive computing associations as being knee-deep in technology and not especially cognizant of the societal impacts of computers and AI. You might be tempted to assume that such professional entities only care about the latest and hottest breakthroughs in hardware or software. They are perceived by the public, to put it bluntly, as being techie nerds.

To set the record straight, I’ve been immersed in the social impacts of computing since I first got into computers, and the ACM has likewise been deeply engaged on such topics.

Anyone surprised that the ACM put together and released this statement about AI Ethics precepts hasn’t been paying attention to the longstanding research and work taking place on these matters. I would also urge those interested to take a good look at the ACM Code of Ethics, a stringent professional ethics code that has evolved over the years and emphasizes that systems developers need to be aware of, abide by, and be vigilant about the ethical ramifications of their endeavors and wares.

AI has been stoking the fires of interest in computing ethics.

The visibility of ethical and legal considerations in the computing field has risen tremendously with the emergence of today’s AI. Those within the profession are being informed, and at times drummed, about giving proper attention to AI Ethics and AI Law issues. Lawmakers are increasingly becoming aware of AI Ethics and AI Law aspects. Companies are wising up to the notion that the AI they are devising or using is advantageous and yet at times also opens up enormous risks and potential downsides.

Let’s unpack what has been taking place in the last several years so that an appropriate context can be established before we jump into this latest set of AI Ethics precepts.

The Rising Awareness Of Ethical AI

The recent era of AI was initially viewed as being AI For Good, meaning that we could use AI for the betterment of humanity. On the heels of AI For Good came the realization that we are also immersed in AI For Bad. This includes AI that is devised or self-altered into being discriminatory and makes computational choices imbued with undue biases. Sometimes the AI is built that way, while in other instances it veers into that untoward territory.

I want to make abundantly sure that we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).

The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).

I’d strongly suggest that we keep things down to earth and consider today’s computational non-sentient AI.

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor any of the cognitive wonderment of robust human thinking.

Be very careful of anthropomorphizing today’s AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
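To make the pattern-matching notion a bit more concrete, here is a minimal sketch in Python. To be clear, the loan-approval framing, the feature names, the made-up data, and the use of the scikit-learn library are purely my illustrative assumptions, not anything drawn from the ACM statement:

```python
# Minimal sketch: an ML model finds mathematical patterns in historical
# human decisions and then applies those patterns to new cases.
# Requires scikit-learn; all data below is made up for illustration.
from sklearn.linear_model import LogisticRegression

# Historical cases: [income_in_thousands, years_at_job] per applicant,
# paired with past human decisions (1 = approved, 0 = denied).
historical_features = [[45, 2], [80, 10], [30, 1], [95, 7], [50, 4], [25, 1]]
historical_decisions = [0, 1, 0, 1, 1, 0]

model = LogisticRegression()
model.fit(historical_features, historical_decisions)  # the pattern matching happens here

# New applicants get judged by the "old" patterns, not by fresh reasoning.
new_applicants = [[60, 5], [28, 1]]
print(model.predict(new_applicants))        # e.g., [1 0]
print(model.predict_proba(new_applicants))  # probability-like scores
```

Notice that nothing in the sketch reasons about fairness or common sense; the model simply echoes whatever regularities sat in the historical decisions.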

I think you can guess where this is heading. If the humans that have been making the patterned-upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing, there will be biases still embedded within the pattern-matching models of the ML/DL.
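As a taste of what such testing can look like, here is a rudimentary sketch that compares a model’s favorable-outcome rates across two groups. The group labels, the data, and the four-fifths threshold heuristic are my illustrative assumptions; real bias audits are considerably more involved:

```python
# Rudimentary bias probe: compare favorable-outcome rates across groups.
# Data and group labels are made up; a real audit would go much deeper.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of favorable (1) outcomes per group."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        favorable[grp] += pred
    return {g: favorable[g] / totals[g] for g in totals}

predictions = [1, 0, 1, 1, 0, 0, 1, 0]   # model outputs on held-out test data
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = selection_rates(predictions, groups)
ratio = min(rates.values()) / max(rates.values())
print(rates, ratio)

# The "four-fifths rule" is one commonly cited heuristic: flag ratios below 0.8.
if ratio < 0.8:
    print("Warning: possible disparate impact; investigate further.")
```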

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems.

All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized earlier herein, it takes a village to devise and field AI, for which the entire village has to be versed in and abide by AI Ethics precepts.

I also recently examined the AI Bill of Rights, which is the official title of the U.S. government document entitled “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” that was the result of a year-long effort by the Office of Science and Technology Policy (OSTP). The OSTP is a federal entity that serves to advise the American President and the US Executive Office on various technological, scientific, and engineering aspects of national importance. In that sense, you can say that this AI Bill of Rights is a document approved by and endorsed by the existing U.S. White House.

In the AI Bill of Rights, there are five keystone categories:

  • Safe and effective systems
  • Algorithmic discrimination protections
  • Data privacy
  • Notice and explanation
  • Human alternatives, consideration, and fallback

I’ve carefully reviewed those precepts, see the link here.

Now that I’ve laid a helpful foundation on these related AI Ethics and AI Law topics, we are ready to jump into the recently released ACM “Statement On Principles For Responsible Algorithmic Systems” (by the way, since the document title refers to responsible algorithmic systems, you might want to take a look at my assessment of what it means to speak of Trustworthy AI, see the link here).

Get yourself ready for a journey into this latest set of AI Ethics principles.

Digging Intently Into The ACM Declared AI Ethics Precepts

The ACM pronouncement about Ethical AI consists of these nine keystones:

  • Legitimacy and competency
  • Minimizing harm
  • Security and privacy
  • Transparency
  • Interpretability and explainability
  • Maintainability
  • Contestability and auditability
  • Accountability and responsibility
  • Limiting environmental impacts

If you compare this latest set to other notably available sets, there is a great deal of similarity and correspondence amongst them.

On the one hand, you can take that as a good sign.

We might generally hope that the slew of AI Ethics principles hovering around is all coalescing toward the same overall coverage. Seeing that one set is somewhat comparable to another set gives you a semblance of confidence that these sets are within the same ballpark and not somehow out in a puzzling left field.

A potential complaint by some is that these various sets appear to be roughly the same, which then possibly creates confusion or at least consternation due to the qualm that we ought to not have numerous seemingly duplicative lists. Can’t there be just one list? The problem of course is that there is no simple way to get all such lists to uniformly be precisely the same. Different groups and different entities have approached this in differing ways. The good news is that they pretty much have all reached the same overarching conclusion. We can be relieved that the sets don’t have huge differences, which would perhaps make us uneasy if there wasn’t an overall consensus.

A contrarian might exhort that the commonality of these lists is disconcerting, arguing that maybe there is a groupthink going on. Perhaps all these disparate groups are thinking the same way and not able to look beyond the norm. All of us are falling into an identical trap. The lists are ostensibly anchoring our thinking and we aren’t able to see beyond our own noses.

Looking beyond our noses is undoubtedly a worthy cause.

I certainly am open to hearing what contrarians have to say. Sometimes they catch wind of something that has the Titanic heading toward a giant iceberg. We could use a few eagle-eye lookouts. But, in the matter of these AI Ethics precepts, there hasn’t been anything definitively articulated by contrarians that appears to patently undercut or raise worries about an undue commonality going on. I think we are doing okay.

In this ACM set, there are a few standout points that I think are particularly worthy of attention.

First, I like the top-level phrasing which is somewhat different than the norm.

For example, referring to legitimacy and competency (the first bulleted item) evokes a semblance of the importance of both designer and management competencies associated with AI. In addition, the legitimacy catchphrase ends up taking us into the AI Ethics and AI Law realm. I say this because many of the AI Ethics precepts concentrate almost entirely on the ethical implications but seem to omit or stray shy of noting the legal ramifications too. In the legal field, ethical considerations are often touted as being “soft law” while the laws on the books are construed as “hard laws” (meaning they carry the weight of the legal courts).

One of my favorite all-time sayings was uttered by the famous jurist Earl Warren: “In civilized life, law floats in a sea of ethics.”

We need to make sure that AI Ethics precepts also encompass and emphasize the hard-law side of things as in the drafting, enacting, and enforcement of AI Laws.

Secondly, I appreciate that the list includes contestability and auditability.

I’ve repeatedly written about the value of being able to contest or raise a red flag when you are subject to an AI system, see the link here. Furthermore, we are going to increasingly see new laws forcing AI systems to be audited; I’ve discussed at length the New York City (NYC) law on auditing biases of AI systems used for employee hiring and promotions, see the link here. Unfortunately, as per my open criticism of that new NYC law, if these auditability laws are flawed, they will probably create more problems than they solve.

Thirdly, there is a gradual awakening that AI can raise sustainability issues, and I am pleased to see that the environmental topic got top-level billing in these AI Ethics precepts (see the last bullet of the list).

The act of creating an AI system can alone consume a lot of computing resources. Those computing resources can directly or indirectly be sustainability usurpers. There is a tradeoff to be considered as to the benefits that an AI provides versus the costs that come along with the AI. The last of the ACM bulleted items makes note of the sustainability and environmental considerations that arise with AI. For my coverage of AI-related carbon footprint issues, see the link here.

Now that we’ve done a sky-high look at the ACM list of AI Ethics precepts, we next put our toes more deeply into the waters.

Here are the official descriptions for each of the high-level AI Ethics precepts (quoted from the formal statement), along with a few brief illustrative sketches of my own interspersed after selected precepts:

1. “Legitimacy and competency: Designers of algorithmic systems should have the management competence and explicit authorization to build and deploy such systems. They also need to have expertise in the application domain, a scientific basis for the systems’ intended use, and be widely regarded as socially legitimate by stakeholders impacted by the system. Legal and ethical assessments must be conducted to confirm that any risks introduced by the systems will be proportional to the problems being addressed, and that any benefit-harm trade-offs are understood by all relevant stakeholders.”

2. “Minimizing harm: Managers, designers, developers, users, and other stakeholders of algorithmic systems should be aware of the possible errors and biases involved in their design, implementation, and use, and the potential harm that a system can cause to individuals and society. Organizations should routinely perform impact assessments on systems they employ to determine whether the system could generate harm, especially discriminatory harm, and to apply appropriate mitigations. When possible, they should learn from measures of actual performance, not solely patterns of past decisions that may themselves have been discriminatory.”

3. “Security and privacy: Risk from malicious parties can be mitigated by introducing security and privacy best practices across every phase of the systems’ lifecycles, including robust controls to mitigate new vulnerabilities that arise in the context of algorithmic systems.”

4. “Transparency: System developers are encouraged to clearly document the way in which specific datasets, variables, and models were selected for development, training, validation, and testing, as well as the specific measures that were used to guarantee data and output quality. Systems should indicate their level of confidence in each output and humans should intervene when confidence is low. Developers also should document the approaches that were used to explore for potential biases. For systems with critical impact on life and well-being, independent verification and validation procedures should be required. Public scrutiny of the data and models provides maximum opportunity for correction. Developers thus should facilitate third-party testing in the public interest.”
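As an illustrative aside, the confidence-reporting idea in that transparency precept can be pictured with a tiny sketch. The 0.75 threshold and the routing scheme are my own assumptions for illustration, not anything prescribed by the ACM:

```python
# Minimal sketch of "indicate confidence in each output and let humans
# intervene when confidence is low." The threshold is an assumed value.
def decide_with_fallback(probabilities, threshold=0.75):
    """probabilities: list of class probabilities from any classifier."""
    confidence = max(probabilities)
    if confidence < threshold:
        # Low confidence: route the case to a human reviewer.
        return {"decision": None, "confidence": confidence, "route": "human_review"}
    return {"decision": probabilities.index(confidence),
            "confidence": confidence, "route": "automated"}

print(decide_with_fallback([0.55, 0.45]))  # low confidence -> human review
print(decide_with_fallback([0.10, 0.90]))  # high confidence -> automated
```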

5. “Interpretability and explainability: Managers of algorithmic systems are encouraged to produce information regarding both the procedures that the employed algorithms follow (interpretability) and the specific decisions that they make (explainability). Explainability may be just as important as accuracy, especially in public policy contexts or any environment in which there are concerns about how algorithms could be skewed to benefit one group over another without acknowledgement. It is important to distinguish between explanations and after-the-fact rationalizations that do not reflect the evidence or the decision-making process used to reach the conclusion being explained.”
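To picture that distinction, here is a tiny sketch using a linear model, where both the global workings and the per-decision breakdown are easy to read off. The weights, feature names, and applicant values are my illustrative assumptions:

```python
# Minimal sketch distinguishing interpretability (how the model works
# overall) from explainability (why this specific decision came out as
# it did). Weights and inputs are made up for illustration.
weights = {"income": 0.04, "tenure": 0.3}   # the model's learned parameters
bias = -3.0

applicant = {"income": 60, "tenure": 5}

# Interpretability: the global procedure is visible in the weights.
print("Global weights:", weights)

# Explainability: the per-decision breakdown shows each feature's share.
contributions = {k: weights[k] * applicant[k] for k in weights}
score = bias + sum(contributions.values())
print("Per-feature contributions:", contributions, "score:", score)
```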

6. “Maintainability: Evidence of all algorithmic systems’ soundness should be collected throughout their life cycles, including documentation of system requirements, the design or implementation of changes, test cases and results, and a log of errors found and fixed. Proper maintenance may require retraining systems with new training data and/or replacing the models employed.”

7. “Contestability and auditability: Regulators should encourage the adoption of mechanisms that enable individuals and groups to question outcomes and seek redress for adverse effects resulting from algorithmically informed decisions. Managers should ensure that data, models, algorithms, and decisions are recorded so that they can be audited and results replicated in cases where harm is suspected or alleged. Auditing strategies should be made public to enable individuals, public interest organizations, and researchers to review and recommend improvements.”
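One small sketch of the record-keeping idea in that precept follows; the JSON-lines format, the chosen fields, and the hashing step are my illustrative assumptions, not an ACM specification:

```python
# Minimal sketch of recording decisions so they can later be audited
# and replicated. Fields and format are assumptions, not an ACM spec.
import json, hashlib, datetime

def log_decision(log_path, model_version, inputs, output, confidence):
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "model_version": model_version,
        # A hash gives auditors a tamper-evident fingerprint of the inputs.
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decisions.log", "v1.3.0", {"income": 60, "tenure": 5}, 1, 0.82)
```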

8. “Accountability and responsibility: Public and private bodies should be held accountable for decisions made by algorithms they use, even if it is not feasible to explain in detail how those algorithms produced their results. Such bodies should be responsible for entire systems as deployed in their specific contexts, not just for the individual parts that make up a given system. When problems in automated systems are detected, organizations responsible for deploying those systems should document the specific actions that they will take to remediate the problem and under what circumstances the use of such technologies should be suspended or terminated.”

9. “Limiting environmental impacts: Algorithmic systems should be engineered to report estimates of environmental impacts, including carbon emissions from both training and operational computations. AI systems should be designed to ensure that their carbon emissions are reasonable given the degree of accuracy required by the context in which they are deployed.”
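For a sense of what “report estimates of environmental impacts” might mean in practice, here is a back-of-the-envelope sketch using the common energy-times-grid-intensity formula. The wattage, runtime, and grid intensity figures are my illustrative assumptions:

```python
# Back-of-the-envelope carbon estimate for a training run:
# energy (kWh) multiplied by grid carbon intensity (kg CO2 per kWh).
def estimate_training_emissions(avg_power_watts, hours, grid_kg_co2_per_kwh):
    energy_kwh = (avg_power_watts / 1000.0) * hours
    return energy_kwh * grid_kg_co2_per_kwh

# E.g., one GPU drawing ~300 W for 48 hours on a grid at ~0.4 kg CO2/kWh:
kg_co2 = estimate_training_emissions(300, 48, 0.4)
print(f"Estimated training emissions: {kg_co2:.1f} kg CO2")  # about 5.8 kg
```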

I trust that you will give each of those crucial AI Ethics precepts a careful and earnest reading. Please do take them to heart.

Conclusion

There is a subtle but equally crucial portion of the ACM pronouncement that I believe many might inadvertently overlook. Let me make sure to bring this to your attention.

I am alluding to a portion that discusses the agonizing conundrum of having to weigh tradeoffs associated with the AI Ethics precepts. You see, most people do a lot of mindless head nodding when reading Ethical AI principles, assuming that all of the precepts are equal in weight and that all of the precepts will always be given the same optimal semblance of deference and value.

Not in the real world.

When the rubber meets the road, any kind of AI that has even a modicum of complexity is going to nastily test the AI Ethics precepts as to whether some of the elements are sufficiently attainable relative to the other principles. I realize that you might be loudly exclaiming that all AI has to maximize all of the AI Ethics precepts, but this is not especially realistic. If that’s the stand that you want to take, I dare say that you would likely need to tell most or nearly all of the AI makers and users to close up shop and put away AI altogether.

Compromises need to be made to get AI out the door. That being said, I am not advocating cutting corners that violate AI Ethics precepts, nor implying that AI should violate AI Laws. A particular minimum has to be met, above which the goal is to strive for more. In the end, a balance needs to be carefully judged. This balancing act has to be done mindfully, explicitly, lawfully, and with AI Ethics as a bona fide and sincerely held belief (you might want to see how companies are utilizing AI Ethics Boards to try and garner this solemn approach, see the link here).

Here are some bulleted points that the ACM declaration mentions on the tradeoff complexities (quoted from the formal document), with a brief illustrative sketch of the second point following the list:

  • “Solutions should be proportionate to the problem being solved, even if that affects complexity or cost (e.g., rejecting the use of public video surveillance for a simple prediction task).”
  • “A wide variety of performance metrics should be considered and may be weighted differently based on the application domain. For example, in some healthcare applications the effects of false negatives can be much worse than false positives, while in criminal justice the consequences of false positives (e.g., imprisoning an innocent person) can be much worse than false negatives. The most desirable operational system setup is rarely the one with maximum accuracy.”
  • “Concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified individuals, but they should not be used to justify limiting third-party scrutiny or to excuse developers from the obligation to acknowledge and repair errors.”
  • “Transparency must be paired with processes for accountability that enable stakeholders impacted by an algorithmic system to seek meaningful redress for harms done. Transparency should not be used to legitimize a system or to transfer responsibility to other parties.”
  • “When a system’s impact is high, a more explainable system may be preferable. In many cases, there is no trade-off between explainability and accuracy. In some contexts, however, incorrect explanations may be even worse than no explanation (e.g., in health systems, a symptom may correspond to many possible illnesses, not just one).”
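To make the second bullet tangible, here is a minimal sketch of weighting false positives and false negatives differently when choosing a decision threshold. The scores, labels, and cost figures are my illustrative assumptions; real deployments would calibrate such costs with domain stakeholders:

```python
# Minimal sketch: choose a decision threshold by minimizing a weighted
# cost of false positives vs. false negatives, not raw accuracy alone.
def weighted_cost(scores, labels, threshold, fp_cost, fn_cost):
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp * fp_cost + fn * fn_cost

scores = [0.2, 0.4, 0.55, 0.7, 0.9, 0.35]  # model scores on validation data
labels = [0, 0, 1, 1, 1, 0]                # ground-truth outcomes

# Criminal-justice-like weighting: a false positive costs far more.
best = min((t / 100 for t in range(1, 100)),
           key=lambda t: weighted_cost(scores, labels, t, fp_cost=10, fn_cost=1))
print(f"Chosen threshold: {best:.2f}")
```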

Those that are developing or using AI might not overtly realize the tradeoffs they face. Top leaders of a firm might naively assume that the AI meets the maximums on all of the AI Ethics principles. They either believe this because they are clueless about the AI, or they want to believe this and are perhaps doing a wink-wink in order to readily adopt AI.

The odds are that failing to substantively and openly confront the tradeoffs will end up with an AI that is going to produce harm. Those harms will in turn likely open a firm to potentially large-scale liabilities. On top of that, conventional laws can come to bear for possible criminal acts associated with the AI, along with the newer AI-focused laws hammering on this too. A ton of bricks is waiting above the heads of those that think they can finagle their way around the tradeoffs or that are profoundly unaware that the tradeoffs exist (a crushing realization will inevitably fall upon them).

I’ll give the last word for now on this topic to the concluding aspect of the ACM pronouncement since I think it does a robust job of explaining what these Ethical AI precepts are macroscopically aiming to bring forth:

  • “The foregoing recommendations focus on the responsible design, development, and use of algorithmic systems; liability must be determined by law and public policy. The increasing power of algorithmic systems and their use in life-critical and consequential applications means that great care must be exercised in using them. These nine instrumental principles are meant to be inspirational in launching discussions, initiating research, and developing governance methods to bring benefits to a wide range of users, while promoting reliability, safety, and responsibility. In the end, it is the specific context that defines the correct design and use of an algorithmic system in collaboration with representatives of all impacted stakeholders” (quoted from the formal document).

As words of wisdom astutely tell us, a journey of a thousand miles begins with a first step.

I implore you to become familiar with AI Ethics and AI Law, taking whatever first step will get you underway, and then aid in carrying forward on these vital endeavors. The beauty is that we are still in the infancy of gleaning how to manage and societally cope with AI, thus, you are getting in on the ground floor and your efforts can demonstrably shape your future and the future for us all.

The AI journey has just begun and vital first steps are still underway.

Source: https://www.forbes.com/sites/lanceeliot/2022/11/27/responsible-ai-relishes-mighty-boost-via-ai-ethics-proclamation-rolled-out-by-esteemed-computing-profession-association-the-acm/