AI Ethics And The Looming Debacle When That New York City Law Requiring Audits For AI Biases Kicks Into Gear

Sometimes the best of intentions are lamentably dashed by a severe lack of attention to detail.

A prime example of this sage wisdom is worth exploring.

Specifically, let’s take a close look at a new law in New York City regarding Artificial Intelligence (AI) that will take effect on January 1, 2023. You could easily win a sizable bet that all manner of confusion, consternation, and troubles will arise once the law comes into force. Though the troubles are not by design, they will indubitably occur as a result of a poor design or at least an insufficient stipulation of necessary details that should and could have easily been devised and explicitly stated.

I’m referring to a local law passed last year on December 11, 2021, in the revered city of New York that is scheduled to go into action at the start of 2023. We are currently only a few months away from the grand awakening that this new law is going to stir. I wish that I could say that the ambitious law is going to seamlessly do what it is supposed to do, namely deal with potential AI biases in the realm of making employment decisions. Alas, though the intention is laudable, I will walk you through the gaping loopholes, omissions, and lack of specificity that will undercut this law and drive employers crazy as they seek to cope with the unintended yet quite adverse repercussions thereof.

You might say that this is the classic issue of pushing ahead with a half-baked plan. A revered maxim attributed to Dwight Eisenhower is that plans are nothing while planning is everything. In short, this particular law is going to provide a vivid example of how lawmakers can sometimes fall short by failing to think through beforehand the necessary particulars so that the law meets its commendable goals and can be adopted in assuredly reasonable and prudent ways.

A debacle awaits.

Excuses are already being lined up.

Some pundits have said that you can never fully specify a law and have to see it in action to know what aspects of the law need to be tweaked (a general truism that is being twisted out of proportion in this instance). Furthermore, they heatedly argue that this is notably the case when it comes to the emerging newness of AI-related laws. Heck, they exhort, AI is high-tech wizardry that we lawmakers don’t know much about, thusly, the logic goes, having something put into the legal pages is better than having nothing there at all.

On the surface, that certainly sounds persuasive. Dig deeper though and you realize it is potentially hooey, including and particularly in the case of this specific law. This law could readily be more adroitly and judiciously stipulated. We don’t need magic potions. We don’t need to wait until shambles arise. At the time the law was crafted, the right kind of wording and details could have been established.

Let’s also be clear that the unseemly, floated idea that the adoption aspects could not be divined beforehand is painfully preposterous. It is legal mumbo-jumbo handwaving of the most vacuous kind. There are plenty of already-known considerations about dealing with AI biases and conducting AI audits that could readily have been cooked into this law. The same can be said for any other jurisdiction contemplating establishing such a law. Do not be duped into believing that we must resort to blindly throwing a legal dart into the wild winds and suffering the anguish. A dollop of legal-minded thinking combined with a suitable understanding of AI is already feasible and there is no need to grasp solely at straws.

I might add, there is still time to get this righted. The clock is still ticking. It might be possible to awaken before the alarm bells start ringing. The needed advisement can be derived and made known. Time is short so this has to be given due priority.

In any case, please make sure that you are grasping the emphasis here.

Allow me to fervently clarify that such a law concerning AI biases does have merit. I’ll explain why momentarily. I will also describe what problems there are with this new law that many would say is the first ever to be put onto the legal books (other variations exist, perhaps not quite like this one though).

Indeed, you can expect that similar laws will be gradually coming into existence all across the country. One notable concern is that if this New York City first-mover attempt goes badly, it could cause the rest of the country to be wary of enacting such laws. That isn’t the right lesson to be learned. The correct lesson is that if you are going to write such a law, do so sensibly and with due consideration.

Laws tossed onto the books without adequate vetting can be quite upsetting and create all manner of downstream difficulties. In that sense of things, please do not toss the baby out with the bathwater (an old saying, probably ought to be retired). The gist is that such laws can be genuinely productive and protective when rightly composed.

This particular one is unfortunately not going to do so out of the gate.

All kinds of panicky guidance are bound to come from the enactors and enforcers of the law. Mark your calendars for late January and into February of 2023 to watch as the scramble ensues. Finger-pointing is going to be immensely intense.

No one is especially squawking right now because the law hasn’t yet landed on the heads of the employers that will be getting zonked by it. Imagine that this is, metaphorically speaking, an earthquake of sorts that is set to take place in the opening weeks of 2023. Few are preparing for the earthquake. Many don’t even know that the earthquake is already plopped onto the calendar. All of that being said, once the earthquake happens, a lot of very astonished and shocked businesses will wonder what happened and why the mess had to occur.

All of this has notably significant AI Ethics implications and offers a handy window into lessons learned (even before all the lessons happen) when it comes to trying to legislate AI. My ongoing and extensive coverage of AI Ethics and Ethical AI, along with AI Law and the legal facets of AI governance, can be found at the link here and the link here, just to name a few.

This legal tale of woe relates to emerging concerns about today’s AI and especially the use of Machine Learning (ML) and Deep Learning (DL) as a form of technology and how it is being utilized. You see, there are uses of ML/DL that tend to involve having the AI be anthropomorphized by the public at large, believing or choosing to assume that the ML/DL is either sentient AI or near to it (it is not). In addition, ML/DL can contain aspects of computational pattern matching that are undesirable or outright improper, or illegal from ethics or legal perspectives.

It might be useful to first clarify what I mean when referring to AI overall and also provide a brief overview of Machine Learning and Deep Learning. There is a great deal of confusion as to what Artificial Intelligence connotes. I would also like to introduce the precepts of AI Ethics to you, which will be especially integral to the remainder of this discourse.

Setting the Record Straight About AI

Let’s make sure we are on the same page about the nature of today’s AI.

There isn’t any AI today that is sentient.

We don’t have this.

We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as The Singularity, see my coverage at the link here).

Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning and Deep Learning, which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.

Part of the issue is our tendency to anthropomorphize computers and especially AI. When a computer system or AI seems to act in ways that we associate with human behavior, there is a nearly overwhelming urge to ascribe human qualities to the system. It is a common mental trap that can grab hold of even the most intransigent skeptic about the chances of reaching sentience.

To some degree, that is why AI Ethics and Ethical AI is such a crucial topic.

The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech. They aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset, and applying it integrally to AI development and fielding, is vital for producing appropriate AI, including assessing how AI Ethics gets adopted by firms.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws are being bandied around at the federal, state, and local levels that concern the range and nature of how AI should be devised. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. They forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages. See for example my coverage at the link here.

In prior columns, I’ve covered the various national and international efforts to craft and enact laws regulating AI, see the link here, for example. I have also covered the various AI Ethics principles and guidelines that various nations have identified and adopted, including for example the United Nations effort such as the UNESCO set of AI Ethics that nearly 200 countries adopted, see the link here.

Here’s a helpful keystone list of Ethical AI criteria or characteristics regarding AI systems that I’ve previously closely explored:

  • Transparency
  • Justice & Fairness
  • Non-Maleficence
  • Responsibility
  • Privacy
  • Beneficence
  • Freedom & Autonomy
  • Trust
  • Sustainability
  • Dignity
  • Solidarity

Those AI Ethics principles are earnestly supposed to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As emphasized previously herein, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.

Let’s keep things down to earth and focus on today’s computational non-sentient AI.

ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. If such patterns are found, the AI system will then use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.

I think you can guess where this is heading. If the humans whose decisions are being patterned upon have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects in the AI-crafted modeling per se.

Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now-hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing, there will be biases still embedded within the pattern matching models of the ML/DL.

You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithmic decision-making (ADM) of AI axiomatically becomes laden with inequities.

Not good.
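
To make the pattern-mimicry point concrete, here is a minimal sketch in Python (using the widely available scikit-learn library) of how a model trained on biased historical hiring decisions will dutifully reproduce those biases. The data, feature names, and bias rate are wholly synthetic, invented purely for illustration.

```python
# Illustrative sketch: a model trained on biased historical hiring data
# will mathematically mimic that bias. All data here is synthetic and the
# feature names are hypothetical, purely for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a qualification score plus a protected-group flag.
score = rng.normal(50, 10, n)    # stand-in for, say, a skills assessment
group = rng.integers(0, 2, n)    # 0 or 1, a hypothetical protected category

# Biased historical decisions: the same score threshold applies, but the
# (biased) human reviewers rejected 40% of qualified group-1 applicants.
hired = ((score > 50) & ~((group == 1) & (rng.random(n) < 0.4))).astype(int)

# The ML model "patterns upon" those historical decisions.
model = LogisticRegression().fit(np.column_stack([score, group]), hired)

# Two identically qualified candidates, differing only in group membership:
# the learned model now assigns group 1 a lower probability of being hired.
test = np.array([[55.0, 0.0], [55.0, 1.0]])
print(model.predict_proba(test)[:, 1])
```

Note that in real systems the protected attribute is rarely an explicit input; proxies lurking in the other features can carry the very same pattern, which is partly why ferreting out these biases is trickier than it might seem.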

I believe that I’ve now set the stage to sufficiently discuss the role of AI within the rubric of AI biases in employment decision-making.

AI That Is Used In Employment Decision Making

The New York City law focuses on the topic of employment decision-making.

If you’ve lately tried to apply for a modern job nearly anywhere on this earth, you probably have encountered an AI-based element in the employment decision-making process. Of course, you might not know it is there since it could be hidden behind the scenes and you would have no ready way of discerning that an AI system had been involved.

A common catchphrase used to refer to these AI systems is that they are considered Automated Employment Decision Tools, abbreviated as AEDT.

Let’s see how the NYC law defined these tools or apps that entail employment decision-making:

  • “The term ‘automated employment decision tool’ means any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons. The term ‘automated employment decision tool’ does not include a tool that does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons, including, but not limited to, a junk email filter, firewall, antivirus software, calculator, spreadsheet, database, data set, or other compilation of data” (NYC, Int 1894-2020, Subchapter 25, Section 20-870).
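
To make that dense statutory wording a bit more tangible, here is a deliberately trivialized, hypothetical sketch of the sort of “simplified output” the definition contemplates, namely a score plus a recommendation used to assist an employment decision. The features, weights, and threshold are entirely invented for illustration; a real AEDT would typically rely on a trained ML model rather than hand-set weights.

```python
# Hypothetical sketch of an Automated Employment Decision Tool (AEDT): a
# computational process issuing "simplified output" -- a score plus a
# recommendation -- used to substantially assist an employment decision.
# The features, weights, and threshold are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    years_experience: float
    skills_match: float       # 0.0 to 1.0, e.g., from a resume-parsing step
    assessment_score: float   # 0.0 to 100.0, e.g., from an online test

def aedt_screen(c: Candidate) -> dict:
    # The "simplified output" the law speaks of: a score and a classification.
    score = (2.0 * c.years_experience
             + 40.0 * c.skills_match
             + 0.5 * c.assessment_score)
    return {"score": round(score, 1),
            "recommendation": "advance" if score >= 60.0 else "reject"}

print(aedt_screen(Candidate(5, 0.8, 70)))  # {'score': 77.0, 'recommendation': 'advance'}
```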

I’ll briefly examine this wording since it is vital to the entire nature and scope of the law.

First, as I’ve stated many times in my writings, one of the most difficult hurdles when writing laws about AI consists of trying to adequately define what AI means. There is no singular, all-agreed-upon, legally bulletproof standard that everyone has landed on. All manner of definitions exist. Some are helpful, some are not. See my analyses at the link here.

You might be tempted to think that it doesn’t especially matter how we might define AI. Sorry, but you’d be wrong about that.

The issue is that if the AI definition is vaguely specified in a given law, it allows those that develop AI to try and skirt around the law by seemingly claiming that their software or system is not AI-infused. They would argue with great boldness that the law does not apply to their software. Likewise, someone using the software could also claim that the law does not pertain to them because the software or system they are using falls outside of the AI definition stated in the law.

Humans are tricky like that.

One of the shrewdest ways to avoid getting clobbered by a law that you don’t favor is to assert that the law does not apply to you. In this case, you would seek to piecemeal take apart the definition of AEDT. Your goal, assuming you don’t want the law to be on your back, would be to legally argue that the definition given in the law does not match what your employment-related computer system is or does.

A law of this kind can be both helped and at times undercut by purposely including exclusionary stipulations in the definition.

Take a look again at the definition of AEDT as stated in this law. You hopefully observed that there is an exclusionary clause that says “…does not include a tool that does not automate, support, substantially assist or replace discretionary decision-making processes and that does not materially impact natural persons…”.

On the one hand, the basis for including such exclusion is decidedly helpful.

It seems to be suggesting (in my layman’s view) that an AEDT has to serve a specific purpose and be utilized in a substantive way. If the AEDT is, shall we say, cursory or peripheral, and if the employment decision is still largely fashioned by human hands, perhaps the software system being used should not be construed as an AEDT. Also, if the software or system is not “materially” impacting natural persons (humans), then it doesn’t seem worthwhile to hold its feet to the fire, as it were.

Sensibly, you don’t want a law to overstate its scope and engulf everything including the kitchen sink. Doing so is essentially unfair and burdensome to those that the law was not intended to encompass. They can get caught up in a morass that acts like one of those catch-all fishnets. Presumably, our laws should be careful to avoid dragging the innocent into the scope of the law.

All is well and good.

A savvy attorney is bound to realize that an exclusionary clause can be a kind of legal get-out-of-jail card (as an aside, this particular law stipulates civil penalties, not criminal penalties, so the get-out-of-jail remark is merely metaphorical and for flavorful punchiness). If someone were to contend that a company was using an AEDT in employment processing, one of the first ways to try and overcome that claim would be to argue that the so-called AEDT was actually in the exclusionary realm. You might attempt to show that the so-called AEDT doesn’t automate the employment decision, or it doesn’t support the employment decision, or it doesn’t substantially assist or replace discretionary decision-making processes.

You can then go down the tortuous path of identifying what the words “automate,” “support,” “substantially assist,” or “replace” mean in this context. It is quite a handy legal rabbit hole. A compelling case could be made that the software or system alleged to be an AEDT falls within the exclusionary indications. Therefore, no harm, no foul, regarding this particular law.

Obviously, licensed attorneys should be consulted for such matters (no semblance of legal advice is indicated herein and this is entirely a layman’s view).

My point here is that there is going to be wiggle room in this new law. The wiggle room will allow some employers that are genuinely using an AEDT to perhaps find a loophole to sidestep the law despite their AEDT usage. The other side of that coin is that there might be firms that aren’t genuinely using an AEDT that will get ensnared by this law. A claim might be made that whatever they were using was indeed an AEDT, and they will need to find a means to show that their software or systems fell outside of the AEDT definition and into the exclusionary provision.

We can make this bold prediction:

  • There will indubitably be employers that knowingly are using an AEDT that will potentially try to skate out of their legal responsibilities.
  • There will inevitably be employers that aren’t using an AEDT getting bogged down in claims that they are using one, forcing them to make an “extra” effort to showcase that they aren’t using an AEDT.

I’ll be further expounding on these numerous permutations and combinations when we get further along in this discussion. We’ve got a lot more ground to tread.

Using an AEDT per se is not the part of this issue that gives rise to demonstrable concerns; it is how the AEDT performs its actions that gets the legal ire flowing. The crux is that if the AEDT also perchance introduces biases related to employment decision-making, you are then in potentially hot water (well, kind of).

How are we to know whether an AEDT does in fact introduce AI-laden biases into an employment decision-making effort?

The answer according to this law is that an AI audit is to be carried out.

I’ve previously and often covered the nature of AI audits and what they are, along with noting existing downsides and ill-defined facets, such as at the link here and the link here, among many other akin postings. Simply stated, the notion is that just like you might perform a financial audit of a firm or do a technology audit related to a computer system, you can do an audit on an AI system. Using specialized auditing techniques, tools, and methods, you examine and assess what an AI system consists of, including for example trying to ascertain whether it contains biases of one kind or another.

This is a burgeoning area of attention.

You can expect that this subfield of auditing devoted to AI auditing will continue to grow. It is readily apparent that as more and more AI systems are unleashed into the marketplace, there will in turn be more and more clamoring for AI audits. New laws will aid in sparking this. Even without those laws, there are going to be AI audits aplenty as people and companies assert that they have been wronged by AI and seek to provide a tangible documented indication that the harm was present and tied to the AI being used.

AI auditors are going to be hot and in high demand.

It can be an exciting job. One perhaps thrilling element entails being immersed in the latest and greatest of AI. AI keeps advancing. As this happens, an astute AI auditor will have to keep on their toes. If you are an auditor that has gotten tired of doing everyday conventional audits, the eye-opening always-new AI auditing arena proffers promise (I say this to partially elevate the stature of auditors since they are often the unheralded heroes working in the trenches and tend to be neglected for their endeavors).

As an aside, I’ve been a certified computer systems auditor (one such designation is the CISA) and have done IT (Information Technology) audits many times over many years, including AI audits. Most of the time, you don’t get the recognition deserved for such efforts. You can probably guess why. By and large, auditors tend to find things that are wrong or broken. In that sense, they are being quite helpful, though this can be perceived by some as bad news, and the messenger of bad news is usually not especially placed on a pedestal.

Back to the matter at hand.

Regarding the NYC law, here’s what the law says about AI auditing and seeking to uncover AI biases:

  • “The term ‘bias audit’ means an impartial evaluation by an independent auditor. Such bias audit shall include but not be limited to the testing of an automated employment decision tool to assess the tool’s disparate impact on persons of any component 1 category required to be reported by employers pursuant to subsection (c) of section 2000e-8 of title 42 of the United States code as specified in part 1602.7 of title 29 of the code of federal regulations” (NYC, Int 1894-2020, Subchapter 25, Section 20-870).
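
Notably, the law invokes disparate impact on the federally reported categories yet does not spell out how that impact is to be computed. One common baseline, and I stress that it is merely one plausible approach rather than anything this law prescribes, is the EEOC’s four-fifths rule: compare the selection rate for each category against the highest category’s rate and flag ratios below 80%. A minimal sketch:

```python
# Minimal sketch of a disparate impact check in the spirit of the EEOC
# "four-fifths rule": a category whose selection rate falls below 80% of
# the highest category's rate is conventionally flagged. This is merely
# one common baseline, not a procedure the NYC law itself spells out.
def disparate_impact_ratios(outcomes: dict) -> dict:
    # outcomes maps category -> (number selected, number of applicants)
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

# Hypothetical AEDT screening results by (made-up) category:
results = {"group_a": (48, 100), "group_b": (30, 100)}
for cat, ratio in disparate_impact_ratios(results).items():
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{cat}: impact ratio {ratio:.2f} -> {flag}")
```

Two different bona fide auditors could just as defensibly pick entirely different statistical tests, which foreshadows the comparability problem discussed later herein.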

As a recap, here’s where we are so far on unpacking this law:

  • The law covers Automated Employment Decision Tools (AEDT)
  • A definition of sorts is included to identify what an AEDT is
  • The definition of AEDT also mentions exclusionary provisions
  • The gist is that the law wants to expose AI biases in AEDT
  • To figure out whether AI biases are present, an AI audit is to be done
  • The AI audit will presumably make known any AI biases

We can next dig a bit more into the law.

Here’s what an employment decision consists of:

  • “The term ‘employment decision’ means to screen candidates for employment or employees for promotion within the city” (NYC, Int 1894-2020, Subchapter 25, Section 20-870).

Note that the bounding aspect of “the city” suggests that the matter only deals with employment-related circumstances within NYC. Also, it is worth noting that an employment decision as defined entails screening of candidates, which is the usual connotation of what we think of as an employment decision, plus it includes promotions too.

This is a double whammy in the sense that firms will need to realize that they need to be on top of how their AEDT (if they are using one) is being used for initial employment settings and also when promoting within the firm. You can likely guess or assume that many firms won’t be quite cognizant of the promotions element being within this rubric too. They will inevitably overlook that additional construct at their own peril.

I am going to next provide an additional key excerpt of the law to illuminate the essence of what is being construed as unlawful by this law:

  • “Requirements for automated employment decision tools. a. In the city, it shall be unlawful for an employer or an employment agency to use an automated employment decision tool to screen a candidate or employee for an employment decision unless: 1. Such tool has been the subject of a bias audit conducted no more than one year prior to the use of such tool; and 2. A summary of the results of the most recent bias audit of such tool as well as the distribution date of the tool to which such audit applies has been made publicly available on the website of the employer or employment agency prior to the use of such tool…” (NYC, Int 1894-2020, Subchapter 25, Section 20-871). There are additional subclauses that you might want to take a look at, if you are keenly interested in the legal wording.

Skeptics and critics have argued that this seems somewhat tepid as to the unlawful activity being called out.

They say that the law only narrowly and minimally focuses on conducting an AI audit and publicizing the results, rather than on whether the AI audit discovered AI biases and what if any ramifications this has had in the making of employment decisions that come under the scope of this law. In essence, it is apparently unlawful to not opt to conduct such an AI audit (when applicable, as discussed earlier), plus it is also unlawful in the instance that you do conduct the AI audit but do not publicize it.

The law seems silent on the question of whether AI biases were detected and present or not. Likewise, there is silence about whether the AI biases impacted anyone related to a salient employment decision-making activity. The key, seemingly, is to “merely” conduct an AI audit and tell about it.

Does this law not go far enough?

Part of the counterargument for contending that this is seemingly satisfactory as to the range or scope of what this law encompasses is that if an AI audit does find AI biases, and if those AI biases are tied to particular employment decision-making instances, the person or persons so harmed would be able to pursue the employer under other laws. Thus, there is no need to include that aspect in this particular law.

Purportedly, this law is intended to bring such matters to light.

Once the light of day is cast upon these untoward practices, all manner of other legal avenues can be pursued if AI biases are existent and impactful to people. Without this law, the argument goes that those using AEDTs would be doing so while possibly running amok and have potentially tons of AI biases, for which those seeking employment or those seeking promotions would not know is taking place.

Bring them to the surface. Make them tell. Get under the hood. See what is inside that engine. That is the mantra in this instance. Out of this surfacing and telling, additional actions can be undertaken.

Besides seeking legal action as a result of illuminating that an AI audit has perhaps reported that AI biases were present, there is also the belief that the posting of these results will bring forth reputational repercussions. Employers that are being showcased as using AEDTs that have AI biases are going to likely suffer societal wrath, such as via social media and the like. They will become exposed for their wrongdoing and shamed into correcting their behavior, and might also find themselves bereft of people seeking to work there due to the qualms that AI biases are preventing hiring or usurping promotions.

The stated penalties associated with being unlawful are these:

  • “Penalties. a. Any person that violates any provision of this subchapter or any rule promulgated pursuant to this subchapter is liable for a civil penalty of not more than $500 for a first violation and each additional violation occurring on the same day as the first violation, and not less than $500 nor more than $1,500 for each subsequent violation” (NYC, Int 1894-2020, Subchapter 25, Section 20-872). There are additional subclauses that you might want to take a look at, if you are keenly interested in the legal wording.

Skeptics and critics contend that the penalties are not harsh enough. A large firm would supposedly scoff or laugh at the minuscule dollar fines involved. Others point out that the fine could end up being more than meets the eye, such that if a firm were to have a thousand dollars of violations each day (only one scenario, there are lots of other scenarios), a year’s worth would be around $365,000, assuming the firm simply ignored the law for an entire year and got away with doing so (seems hard to imagine, but could happen, and could even occur longer or with a higher accumulation of daily fines, in theory).
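
For those wanting to verify the arithmetic, here’s a back-of-envelope accrual under the statute’s schedule; the one-violation-per-day scenario is, of course, entirely hypothetical:

```python
# Back-of-envelope accrual under the statute's penalty schedule: $500 for
# a first violation, then $500 to $1,500 for each subsequent violation.
# The one-violation-per-day-for-a-year scenario is entirely hypothetical.
def accrued_fines(days: int, per_subsequent: int = 1_000) -> int:
    return 500 + (days - 1) * per_subsequent

print(accrued_fines(365))          # 364500, roughly the $365,000 noted above
print(accrued_fines(365, 1_500))   # 546500 at the statutory maximum
```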

Meanwhile, some are worried about smaller businesses and the associated fines. If a small business that is barely making ends meet gets hit with the fines, and supposedly did so not out of a deliberate motivation to circumvent the law, the fines could materially affect their teetering business.

The Keystone Problematic Considerations At Issue

I have a simple and straightforward question for you.

In the context of this law, what exactly constitutes an AI audit?

Problematically, there is no definitive indication within the narrative of the law. All that we seem to be told is that the “bias audit” is to be performed via “an impartial evaluation by an independent auditor” (as per the wording of the law).

You can drive a Mack truck through that gaping hole.

Here’s why.

Consider this rather disconcerting example. A scammer contacts a firm in NYC and explains that they provide a service such that they will do a so-called “bias audit” of their AEDT. They pledge they will do so “impartially” (whatever that means). They hold themselves out as an independent auditor, and they have anointed themselves as one. No need for any kind of accounting or auditing training, degrees, certifications, or anything of the sort. Maybe they go to the trouble to print some business cards or hastily put up a website touting their independent auditor standing.

They will charge the firm a modest fee of, say, $100. Their service consists of perhaps asking a few questions about the AEDT and then proclaiming that the AEDT is bias-free. They then send a one-page report declaring the “results” of the so-called audit. The firm dutifully posts this onto its website.

Has the firm complied with this law?

You tell me.

Seems like they have.

You might immediately be taken aback that the audit was done in a cursory fashion (that’s being polite and generous in this particular scenario). You might be disturbed that the bias detection (or lack thereof) was perhaps essentially predetermined (voila, you appear to be bias-free). You might be upset that the posted results could give an aura of having passed a rigorous audit by a bona fide seasoned, trained, experienced, certified auditor.

Yes, that does about size things up.

An employer might be relieved that they got this “silly” requirement completed and darned happy that it only cost them a measly $100. The employer might internally and quietly realize that the independent audit was a charade, but that’s not seemingly on their shoulders to decide. They were presented with a claimed independent auditor, the auditor did the work that the auditor said was compliant, the firm paid for it, they got the results, and they posted the results.

Some employers will do this and realize that they are doing wink-wink compliance with the law. Nonetheless, they will believe they are being fully compliant.

Other employers might get conned. All that they know is the need to comply with the law. Luckily for them (or so they assume), an “independent auditor” contacts them and promises that a compliant audit and result can be had for $100. To avoid getting that $500 or more daily fine, the firm thinks they have been handed a gift from the heavens. They pay the $100, the “audit” takes place, they get a clean bill of health as to their lack of AI biases, they post the results, and they forget about this until the next time they need to do another such audit.

How is every firm in NYC that is subject to this law supposed to know what is bona fide compliance with the law?

In case your stomach isn’t already somewhat churning, we can make things worse. I hope you haven’t had a meal in the last few hours since the next twist will be tough to stomach.

Are you ready?

This sham service provider turns out to be more of a shammer than you might have thought. They get the firm to sign up for the $100 service to do the impartial bias audit as an independent auditor. Lo and behold, they do the “audit” and discover that there are biases in every nook and cranny of the AEDT.

They have AI biases like a cockroach infestation.

Yikes, says the firm, what can we do about it?

No problem, they are told, we can fix those AI biases for you. It will cost you just $50 for each such bias that was found. Okay, the firm says, please fix them, thanks for doing so. The service provider does a bit of coding blarney and tells the firm that they fixed one hundred AI biases, and therefore will be charging them $5,000 (that’s $50 per AI bias to be fixed, multiplied by the 100 found).

Ouch, the firm feels pinched, but it still is better than facing the $500 or more per day violation, so they pay the “independent auditor” and then get a new report showcasing they are now bias-free. They post this proudly on their website.

Little do they know that this was a boondoggle, a swindle, a scam.

You might insist that this service provider should be punished for their trickery. Catching and stopping these tricksters is going to be a lot harder than you might imagine. Just as those foreign-based princes that have a fortune awaiting you are likely in some faraway land beyond the reach of United States law, the same might occur in this instance too.

Expect a cottage industry to emerge due to this new law.

There will be bona fide auditors that seek to provide these services. Good for them. There will be sketchy auditors that go after this work. There will be falsely proclaimed auditors that go after this work.

I mentioned that the service provider scenario involved asking for $100 to do the so-called AI audit. That was just a made-up placeholder. Maybe some will charge $10 (seems sketchy). Perhaps some $50 (still sketchy). Etc.

Suppose a service provider says it will cost $10,000 to do the work.

Or $100,000 to do it.

Possibly $1,000,000 to do so.

Some employers won’t have any clue as to how much this might or should cost. The marketing of these services is going to be a free-for-all. This is a money-making law for those that legitimately perform these services and a money maker for those that are being underhanded in doing so too. It will be hard to know which is which.

I’ll also ask you to contemplate another gaping hole.

In the context of this law, what exactly constitutes an AI bias?

Other than the mention of the United States code of federal regulations (this doesn’t particularly answer the question of AI biases and does not ergo serve as a stopgap or resolver on the matter), you would be hard-pressed to assert that this new law provides any substantive indication of what AI biases are. Once again, this will be entirely open to widely disparate interpretations and you will not especially know what was looked for, what was found, and so on. Also, the work performed by even bona fide AI auditors will quite likely be incomparable from one to another, such that each will tend to use their own proprietary definitions and approaches.

In short, we can watch with trepidation and concern for what employers will encounter as a result of this loosey-goosey phrased though well-intended law:

  • Some employers will know about the law and earnestly and fully comply to the best of their ability
  • Some employers will know about the law and marginally comply with the slimmest, cheapest, and possibly unsavory path that they can find or that comes to their doorstep
  • Some employers will know about the law and believe they aren’t within the scope of the law, so won’t do anything about it (though turns out, they might be in scope)
  • Some employers will know about the law and flatly decide to ignore it, perhaps believing that nobody will notice or that the law won’t be enforced, or the law will be found to be unenforceable, etc.
  • Some employers won’t know about the law and will get caught flatfooted, scrambling to comply
  • Some employers won’t know about the law and will miserably get fleeced by con artists
  • Some employers won’t know about the law, they aren’t within scope, but they still get fleeced anyway by con artists that convince them they are within the scope
  • Some employers won’t know about the law and won’t do anything about it, while miraculously never getting caught or being dinged for their oversight
  • Other

One crucial consideration to keep in mind is the magnitude or scaling associated with this new law.

According to various reported statistics regarding the number of businesses in New York City, the count is usually indicated as somewhere around 200,000 or so enterprises (let’s use that as an order of magnitude). Assuming that this is a reasonable approximation, presumably those businesses as employers are subject to this new law. Thus, take the above-mentioned several ways in which employers are going to react to this law and contemplate how many will be in each of the various buckets that I’ve just mentioned.

It is a rather staggering scaling issue.

Additionally, according to reported statistics, there are perhaps 4 million private sector jobs in New York City, plus an estimated count of 300,000 or so government workers employed by the NYC government (again, use those as orders of magnitude rather than precise counts). If you take into account that new hires are seemingly within the scope of this new law, along with promotions associated with all of those existing and future workers, the number of employees that will in one manner or another be touched by this law is frankly astounding.

The Big Apple has a new law that at first glance appears to be innocuous and ostensibly negligible or mundane, yet when you realize the scaling factors involved, well, it can make your head spin.

Conclusion

I mentioned at the beginning of this discussion that this is a well-intended new law.

Everything I’ve just described as potential loopholes, omissions, gaps, problems, and the like, could all be easily anticipated. This is not rocket science. I might add, there are even more inherent concerns and confounding aspects to this law that due to space constraints herein I haven’t called out.

You can find them as readily as you can shoot fish in a barrel.

Laws of this kind should be carefully crafted to try and prevent these kinds of sneaky end-arounds. I assume that the earnest composers sought to write a law that they believed was relatively ironclad and would maybe, in the worst case, have some teensy tiny drips here or there. Regrettably, it is a firehose of drips. A lot of duct tape is going to be needed.

Could the law have been written in a more elucidated way to close off these rather apparent loopholes and associated issues?

Yes, abundantly so.

Now, that being the case, you might indignantly exhort that such a law would undoubtedly be a lot longer. There is always a tradeoff of having a law that goes on and on, becoming unwieldy, versus being succinct and compact. You don’t though want to gain succinctness at the loss of what would be substantive and meritorious clarity and specificity. A short law that allows for shenanigans is ripe for trouble. A longer law, even if seemingly more complex, would usually be a worthy tradeoff if it avoids, averts, or at least minimizes downstream issues during the adoption stage.

Saint Augustine famously said: “It seems to me that an unjust law is no law at all.”

We might offer a corollary that a just law composed of problematic language is a law begging to produce dire problems. In this case, we seem to be left with the wise words of the great jurist Oliver Wendell Holmes Jr., namely that a page of history is worth a pound of logic.

Be watching as history is soon about to be made.

Source: https://www.forbes.com/sites/lanceeliot/2022/09/23/ai-ethics-and-the-looming-debacle-when-that-new-york-city-law-requiring-ai-biases-audits-kicks-into-gear/