AI Ethics Mulling Over The Merits Of Legally Mandating Atonement Funds To Ensure Accountability For AI Acting Badly

Who’s to blame?

That might seem like a straightforward question, though in the case of the famous comedy bit by the legendary duo of Abbott and Costello about a baseball team, the “who” can be confusing. You might be vaguely familiar with the Who’s On First routine that they made into one of the most enduring skits of all time (spoiler alert for those of you who haven’t heard it).

Abbott tells Costello that Who’s on first, What’s on second, and I Don’t Know is on third. The clever trickery is that the first baseman is named Who, the second baseman is named What, and the third baseman is named I Don’t Know. Of course, those phrases also have their conventional meaning, and thus trying to interpret what Abbott is saying can be entirely befuddling. Indeed, Costello asks the seemingly innocuous question of who is on first, for which the answer is a firmly stated yes. This doesn’t make sense to Costello, since he was expecting a name and instead received a perplexing yes as an answer.

Shifting gears, when it comes to the advent of Artificial Intelligence (AI), one of the most vexing questions being asked is who, or perhaps what, is going to be held accountable when AI goes astray.

I’ve previously discussed criminal accountability for when AI leads to or undertakes criminal actions; see my coverage at the link here. There is also the matter of civil accountability, such as who or what you might sue when AI has done you wrong, which is the topic I’ll be discussing herein. All of this has significant AI Ethics considerations. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.

The precepts of AI Ethics get us to remain vigilant. AI technologists can at times become preoccupied with technology, particularly the optimization of high-tech, and aren’t necessarily considering the larger societal ramifications. Having an AI Ethics mindset, and applying it integrally to AI development and fielding, is vital for producing appropriate AI, including (perhaps surprisingly or ironically) the assessment of how AI Ethics gets adopted by firms.

Besides employing AI Ethics precepts in general, there is a corresponding question of whether we should have laws to govern various uses of AI. New laws concerning the range and nature of how AI should be devised are being bandied around at the federal, state, and local levels. The effort to draft and enact such laws is a gradual one. AI Ethics serves as a considered stopgap, at the very least, and will almost certainly to some degree be directly incorporated into those new laws.

Be aware that some adamantly argue that we do not need new laws that cover AI and that our existing laws are sufficient. In fact, they forewarn that if we do enact some of these AI laws, we will be killing the golden goose by clamping down on advances in AI that proffer immense societal advantages.

A heated debate is taking place over whether existing laws can adequately address the emergence of AI systems throughout society. Legal liability typically requires that you can pin the tail on the donkey as to who is responsible for harmful conduct. In the case of AI, there might be a rather unclear path that ties a particular person or persons to the AI that performed some injurious action. The AI might be largely untraceable to the source or inventor that composed it.

Another consideration is that even if the roots of the AI can be traced to someone, the person or persons might not have been able to reasonably foresee the adverse outcome that the AI ultimately produced. Foreseeability is customarily a notable factor in assessing legal liability.

You might be tempted to think that you can simply go after the AI itself and name the AI as the legal party accountable or responsible for whatever harm has allegedly been incurred. By and large, the prevailing legal view is that AI has not yet reached a level of legal personhood. Thus, strictly speaking, you won’t be able to get the AI to pay up and will instead need to find the humans that were working the levers behind the scenes, as it were (for my analysis of legal personhood for AI, see the link here).

Into all of this potential legal morass steps an idea that is being floated as a possible remedy, either on a short-term basis or possibly for the long haul. The idea is that perhaps a special compensatory fund ought to be established to provide financial relief for those that have been harmed by AI. If you are otherwise unable to get the AI to compensate you, and you cannot nail down the persons that ought presumably to be held accountable, the next best option might be to tap into a compensatory fund that aims to aid those harmed by AI.

Such a fund would be akin to a form of insurance of sorts, as stated in a thought-provoking research paper: “This would essentially be an insurance mechanism against uncertainty: A clear and transparent framework for speedy compensation in cases where a liability suit has uncertain or no prospect of success owing to the unforeseeable nature of the damaging conduct, the (type of) damage itself, or the excessive costs and/or complexity of the procedure” (article by Olivia Erdelyi and Gabor Erdelyi, “The AI Liability Puzzle And A Fund-Based Work-Around”, Journal of Artificial Intelligence Research, 2021).

The compensatory fund would be part of an overarching AI Guarantee Scheme (AIGS) and be accompanied by some light-touch alterations to existing laws about legal liability. The light touch would presumably be easier to enact, avoiding the arduous legal and societal angst that a more gut-wrenching series of changes to the existing legal regimes would provoke. Per the researchers: “This reflects our belief that – despite the appeal of such a quick-fix solution – the unaltered application of existing liability rules to AI or a protectionistically motivated recourse to strict liability with a view to establish responsibility at any cost are not the correct answers for three reasons: First, ignoring that those rules have been tailored to different circumstances and may hence be inappropriate for AI, they contravene the delicately balanced objectives of the legal liability system. Second, they inhibit AI innovation by adopting an unduly punitive approach. Third, undue resort to strict liability merely circumvents foreseeability and fault problems in a dogmatically inconsistent manner rather than remedying them” (as per the paper cited above).

Arguments in favor of such AI compensatory funds include:

  • Reduces the need for lengthy and costly legal trials to cope with AI-inflicted harms
  • Reassures humans that they can make use of AI and be compensated if harmed
  • Promotes AI innovation by alleviating legal uncertainty facing AI innovators
  • Can be placed into use far faster than making massive changes to existing laws
  • Proffers a relatively clear-cut remedy that is reliable and readily available
  • Other

Meanwhile, those that oppose the AI compensatory funds approach say this:

  • Lets AI makers excessively off the hook and allows them to skirt accountability
  • Will embolden AI makers to craft AI that lacks dutiful safety and proper controls
  • Might spur people into falsely claiming AI harms so that they can tap into the funds
  • Sidesteps and undermines the true need to overhaul our laws to govern AI sufficiently
  • Could become a bureaucratic nightmare that bogs down and misuses the funds
  • Other

As might be evident, there are both proponents and opponents of this altogether controversial notion.

You would be hard-pressed to summarily rule out the AI compensatory fund as a potential approach to the rising concerns about AI that causes harm. Nor is the proposed solution a slam dunk.

One viewpoint is that AI makers would need to put monies into the fund as part of their efforts when devising and promulgating AI. This could be construed as a kind of fee or tax that they are required to bear as part of being able to release their AI to the world. But might this added cost suppress efforts by startups that are trying to push the boundaries of today’s AI? And how would payment of the fee or tax by AI makers be enforced?

A slew of questions arise and would need to be hammered out:

  • In which countries would an AI compensatory fund be most feasible?
  • Could a global semblance of interconnected AI compensatory funds be established?
  • What would be the detailed and workable mechanisms associated with such funds?
  • How are the AI compensatory funds to be funded (public, private, charitable)?
  • Would this be a no-fault insurance basis or would some other approach be taken?
  • Etc.

A realm that has already had the AI compensatory funds idea bandied around consists of autonomous systems such as autonomous vehicles and self-driving cars. For my coverage of self-driving cars and AI autonomous systems, see the link here.

Here’s a sketch of how this might work for AI-based self-driving cars.

Suppose a self-driving car crashes into a bike rider. The bike rider is harmed. The bike rider might seek legal redress by pursuing the automaker of the autonomous vehicle. Or they might aim at the self-driving tech firm that made the AI driving system. If the self-driving car is being operated as a fleet, another legal avenue would be to pursue the fleet operator. Trying to sue the AI is not an option at this juncture as the legal personhood of AI is not as yet established.

Rather than taking legal action against any of those parties, another recourse would be to file an application or claim with a suitable AI compensatory fund. The fund would have formalized processes for reviewing the claim and then determining what, if any, compensatory payment might be provided to the claimant. There might also be an appeals process to aid claimants who believe they were either wrongly denied by the fund or insufficiently compensated by it.
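To make those mechanics a bit more concrete, here is a minimal sketch, in Python, of how such an intake-review-award-appeal pipeline might be modeled. This is purely illustrative: no such fund exists today, and every name, rule, and dollar figure below is a hypothetical assumption of mine rather than anything drawn from the research paper or an actual scheme.

```python
# Purely illustrative sketch of a hypothetical AI compensatory fund's
# claim workflow. All names, rules, and amounts are assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class ClaimStatus(Enum):
    FILED = auto()
    UNDER_REVIEW = auto()
    APPROVED = auto()
    DENIED = auto()
    UNDER_APPEAL = auto()


@dataclass
class Claim:
    claimant: str
    description: str            # narrative of the alleged AI-inflicted harm
    amount_requested: float
    status: ClaimStatus = ClaimStatus.FILED
    amount_awarded: float = 0.0


class CompensatoryFund:
    """Hypothetical fund that reviews claims and pays awards from a pooled reserve."""

    def __init__(self, pool: float) -> None:
        self.pool = pool
        self.claims: list[Claim] = []

    def file_claim(self, claim: Claim) -> None:
        # Intake step: the claim enters the formalized review process.
        claim.status = ClaimStatus.UNDER_REVIEW
        self.claims.append(claim)

    def adjudicate(self, claim: Claim, approved: bool, award: float = 0.0) -> None:
        # A real scheme would apply formal eligibility criteria here; this
        # sketch only checks that the pool can cover the proposed award.
        if approved and 0 < award <= self.pool:
            claim.status = ClaimStatus.APPROVED
            claim.amount_awarded = award
            self.pool -= award
        else:
            claim.status = ClaimStatus.DENIED

    def appeal(self, claim: Claim) -> None:
        # Route a denied or under-compensated claim back for a second review.
        if claim.status in (ClaimStatus.DENIED, ClaimStatus.APPROVED):
            claim.status = ClaimStatus.UNDER_APPEAL


# Example: a harmed bike rider files a claim against the fund.
fund = CompensatoryFund(pool=1_000_000.0)
claim = Claim(claimant="bike rider", description="struck by a self-driving car",
              amount_requested=50_000.0)
fund.file_claim(claim)
fund.adjudicate(claim, approved=True, award=50_000.0)
print(claim.status.name, claim.amount_awarded, fund.pool)
```

The point of the sketch is simply that the fund’s appeal lies in a bounded administrative process, a fixed sequence of intake, review, award, and appeal, in place of the open-ended timeline of litigation.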

In theory, the AI compensatory fund would be a much speedier path toward getting compensated for the harm inflicted. You can imagine how laborious a lawsuit might be, whereby the firms being sued might try to drag out the case.

Attorneys, though, might emphasize that the AI compensatory fund could let those other parties, such as the AI makers, seemingly avoid any clear-cut penalty for having let loose on public roadways a self-driving car that ended up striking a bike rider. What else might those firms opt to “carelessly” do? Without the looming specter of the legal sword dangling over their heads, we could find ourselves daily confronting AI that is rife with endangering capacities.

Round and round the arguments go.

Conclusion

AI Ethics reminds us that we should always be considering the ethical and legal ramifications of AI. In the case of the AI compensatory funds, the proposed notion of an insurance-like pool of funds for compensating those that are harmed by AI does seem alluring. The funds would seemingly be waiting there, ready to be tapped into, providing the soonest possible compensation.

The tradeoff of whether this might open the floodgates toward making AI that has fewer and fewer safety controls is a daunting and all too real concern. We probably don’t need to add fuel to a fire that is perhaps already somewhat underway.

Can we somehow still hold AI makers to devising appropriate Ethical AI and simultaneously establish these AI compensatory funds?

Some would say that yes, we can. By rejiggering existing laws to align with the AI compensatory funds, those that are harmed by AI would potentially have a dual path to seek their just compensation.

Who is on first?

Yes, that’s who (as in all of us) is on first notice that we should be mulling over the potential use of AI compensatory funds and modifying existing laws, even if only lightly so, providing a means to deal with the onslaught of both good AI and bad AI.

There’s no confusion about that vital consideration.

Source: https://www.forbes.com/sites/lanceeliot/2022/08/10/ai-ethics-mulling-over-the-merits-of-legally-mandating-atonement-funds-to-ensure-ai-accountability/