Workshop On Soft Law Governance Of AI Applications Knocks It Out Of The Park And Dazzles With Keen Insights On AI Ethics And AI Legal Impacts

In today’s column, I am going to showcase and discuss the key findings of a Center for Law, Science, and Innovation (LSI) workshop undertaken by the Arizona State University (ASU) Sandra Day O’Connor College of Law, held in Washington D.C. and entitled “Soft Law Governance of AI Applications” (for the Center for LSI, see the link here, and for its Soft-Law Governance of Artificial Intelligence project, see the link here).

This was a special gathering of top experts and renowned scholars in the realm of AI law and associated disciplines. A set of nine research papers was presented over a two-day period (the papers were considered near-final drafts and will be subsequently refined by the authors and made available online, see the links mentioned above). Attendees consisted of esteemed speakers along with carefully chosen commentators who offered insightful remarks and spurred spirited, thoughtful discussion and debate on the topics being covered. I was honored to participate as a commentator and relished the lively format and spirited engagement.

Chairing the event was Professor Gary Marchant, a highly regarded and globally recognized scholar and expert in these matters who serves as a Regents Professor and Faculty Director at the Center for Law, Science, and Innovation (see his bio at the link here). He was joined collaboratively in leading the program by Eric Hitchcock, Executive Director of the Center for Law, Science, and Innovation, an experienced attorney and former Chief Hearing Officer, Administrative Law Judge, and Judge Pro Tempore (see his bio at the link here). The two of them teamed up to keep the workshop moving at a heady pace and to ensure that there was always active dialogue and energetic progress being made.

The papers were grouped into three overarching categories: (1) Automated Vehicles, (2) Neurotechnology and Global AI Governance, and (3) Medicine and Healthcare.

Avid readers of my column will recognize that those are unquestionably three of my favorite topics when it comes to examining the nature of AI and the law. Autonomous vehicles raise a wide variety of AI ethics and legal AI issues, see some of my analyses at the link here and the link here, just to name a few. Likewise, the same could be said for the emergence of neurotechnology, see my coverage at the link here for example, and for the increasing calls for AI global governance, see the link here and the link here. We also need to face up to the reality that AI in medicine and healthcare has a plethora of complex AI soft law and AI hard law conundrums, such as my coverage at the link here and the link here.

For my overall ongoing analyses of the latest in AI and the law, see the link here and the link here.

The notable beauty, as it were, of a workshop that includes all of the above-listed vital topics is that a crucial synergy arises amidst those intersecting areas. Much of the time, AI law events are devoted exclusively to one or another of those major categories. Though the single-topic approach is handy, it often fails to capture important overlaps and intersections. The bottom line is that getting a multitude of diverse viewpoints gathered into one place at one time can give rise to cross-disciplinary considerations that might not otherwise be considered or discussed.

Bravo for doing so.

In case you might be interested in a future event of a similar eclectic combination, mark your calendars for May 16 and May 17, 2024, when the annual Governance of Emerging Technologies and Science (GETS) Conference takes place (see upcoming details at the link here of the ASU Sandra Day O’Connor College of Law).

The Two-Pronged Nature Of AI Soft Law And AI Hard Law

Before I jump into unpacking the details of the nine papers, let’s make sure we are all on the same page when it comes to the nature of both AI soft laws and the likes of AI hard laws.

First, overall, any studious look at the law in total will tell you that our laws are roughly grouped into two camps: the on-the-books laws that are said to be hard laws, and a wide array of ethical codes, standards, and the like, collectively known as soft laws. The international OECD (Organisation for Economic Co-operation and Development) defines soft law in this handy way, namely “cooperation based on instruments that are not legally binding, or whose binding force is somewhat ‘weaker’ than that of traditional law, such as codes of conduct, guidelines, roadmaps, and peer reviews” (see the link here).

You can apply the same form of duality of codification to AI laws, namely that there are AI soft laws to be considered and there are also AI hard laws that need to be given their suitable due.

I’ve discussed in-depth for example the United Nations UNESCO set of AI ethics guidelines or AI soft laws that nearly 200 countries signed onto (see the link here). I have also examined numerous AI hard laws, including analyzing the not-yet-enacted EU Artificial Intelligence Act (AIA) that, when fully passed (assuming it is passed), will not only roil the EU but will undoubtedly have significant impacts on AI and the emergence of AI hard laws across the globe (see the link here).

Anyone involved in or caring about AI should have their eyes clearly aimed at what is happening and is likely to happen in the near future when it comes to AI soft laws and AI hard laws. If you have your head in the sand or are living in a cave, you are going to get blindsided by these laws. Some seem to erroneously think that these laws are only of importance to AI makers. Nope, that’s a narrow and mistaken view. AI soft laws and AI hard laws will in fact impact AI makers and will furthermore undeniably impact entities that opt to adopt AI (plus, impact consumers too). The odds are that anyone making, using, or otherwise coming into contact with AI will in one way or another be guided by, shaped by, or feel the impacts of AI soft laws and AI hard laws (see my elaborated discussion at the link here).

Professor Gary Marchant has noted that “soft law mechanisms include various types of instruments that set forth substantive expectations but are not directly enforceable by government” (see the link here). This revealing wording brings up a related matter that I’d like to briefly address. Indeed, he asked a pointed question at the workshop about why it is that at times the role of AI soft law does not get as much attention as it wholeheartedly deserves.

As a frequent speaker at AI law conferences, I often encounter attendees who seem to regrettably downplay AI soft laws and overplay AI hard laws. You might say that AI soft laws often don’t get their appropriate respect (this is reminiscent of the classic joke by comedian Rodney Dangerfield, in which he tongue-in-cheek says “With my dog, I don’t get no respect. He keeps barking at the front door. He doesn’t want to go out. He wants me to leave!”).

I have found that AI soft law seems to get underestimated for these four major reasons:

  • (1) Phrasing and connotations. People viscerally react to the phrase “soft law” as though this is mushy or flimsy law, whereas “hard law” sounds like rigorous or robust law. This is a naturally ingrained inclination due to the words being used. We aren’t likely to overcome this instinctive tendency unless a magic wand is waved and we wholesale change the nomenclature (a logistical nightmare and an untenable possibility).
  • (2) Scattered semblance. It is relatively easy to point to a set of hard laws and say those are the hard laws you need to know. You cannot as readily do the same for soft laws. Soft laws are essentially all over the place. Some are in this cupboard or that cabinet, others are here or there. Their very scattered existence can confound and confuse.
  • (3) Perceived lack of teeth. The mighty weight of the government can come down on those scofflaws who flout the hard laws. The same is not as readily said for soft laws. This gives rise to a perception that soft laws can be disobeyed or disregarded. Why care about something that doesn’t seem to be a dangling sword looming over your head? Of course, this overlooks the potential for legal means to go after those that sorely or flagrantly transgress the soft laws.
  • (4) Absence of visibility. When hard laws get passed, there is usually a lot of heat and light devoted to what those laws portend. People hear about it. Unfortunately, when soft laws get established, you can at times hear nothing but crickets. This is sad. After all that time and effort, and the value that the soft laws provide, the word doesn’t particularly spread about how vital the soft laws are. Maybe some solid public relations and vigorous marketing are needed.

Anyway, the good news in the case of this workshop is that soft law was given its proper place on the revered throne of AI laws. My opting to convey and amplify this valuable content is a hoped-for means of declaring that AI soft laws do matter. I urge all of today’s AI stakeholders including AI researchers, scholars, academics, consultants, practitioners, lawyers, judges, reporters, journalists, influencers, and the like to be cognizant of AI soft laws.

Think of AI laws all told and immediately grab hold of both the venerated AI hard laws and the vital AI soft laws. Don’t let AI soft laws be like the Maytag repairman who used to sit around and be all alone.

Embrace AI soft laws.

You’ll assuredly be blindsided by AI law if you don’t.

Workshop Papers And Overall Commentary

The workshop began with opening remarks by Professor Gary Marchant and then a succinct foundational overview of soft laws and hard laws associated with autonomous vehicles was provided by David Bonelli (partner at Venable LLP). There were then three papers presented encompassing the nature of AI soft laws related to autonomous vehicles.

I will discuss the papers in order of presentation.

Please know that due to space limitations herein, I aim merely to depict briefly what each paper contains (make sure to access the full papers, once posted online by the Center for LSI, for the full details, thanks). After introducing each successive paper, I will provide some remarks of my own that at times imbue selected aspects of the overall discussions during the workshop engagements.

  • 1. The first paper was jointly authored by Helen Gould, Intel (retired), and Jeff Gurney, Nelson Mullins, and was entitled “Use of Industry Consensus Standards as a Soft Law Mechanism to Safely Deploy Automated Driving Systems.”

Overall, this paper notably and rightfully asserts that soft laws are vital to address the safe deployment of autonomous vehicles (AVs). Furthermore, industry consensus standards are particularly crucial, and the paper identifies and explores many of the existing and underway standards (side note, I serve or have served on some of those committees). A helpful key-factors chart lays out both the advantages and limitations associated with soft law governance and hard law governance in the realm of AVs. Soft law is especially significant because it serves as a flexible and efficient tool for fostering trustworthy and reliable AVs.

Recommendations are given to regulators, industry, and academia that are involved with AV considerations. For example, in terms of regulators, the authors urge that those involved in regulation development either participate in or at least closely keep current on the latest efforts in industry consensus-making for AVs. The regulation process can be boosted by leveraging the best practices and consensus standards being formulated by the industry. Trying to reinvent the wheel (no pun intended) by devising regulations from scratch is shortsighted when already vetted and robust standards are readily at hand.

For those of you who are generally interested in AVs and policy-making thereof, you might find insightful a policy paper that I co-authored with a Harvard University think tank (the Harvard Kennedy School (HKS) via the Taubman Center for State and Local Government), see the link here. If you are curious about the levels of autonomy associated with AVs, I’ve extensively covered the topic, such as at the link here and the link here. We are gradually going to have AVs entering into prime-time use, especially in major cities, and there is a slew of soft law and hard law implications worthy of pressing attention.

  • 2. The second paper was authored by Carlos Gutierrez and entitled “Soft Law as A Means to Norm Communication Between Individuals and The Behavior of Autonomous Vehicles.”

In this paper, the role of communication is noted as essential for how autonomous vehicles will interact with humans that are on or near our roadways. Communication modes and means are absolutely essential for the overall acceptance and safety of AVs on our public streets and open highways.

I’ve discussed at length in my column coverage that human drivers and pedestrians already have pre-established culturally derived signaling and messaging schemes. A favorite example is that in some cities if a human driver and a pedestrian make direct eye contact, the human driver is construed as the winner and can proceed (the pedestrian cedes way to the human driver). Meanwhile, in other cities, the making of eye contact is understood as an acknowledgment that the pedestrian can proceed, and the human driver will bring their car to a waiting halt, see the link here.

What happens when a fully autonomous vehicle has an empty driver’s seat since there is no need for a human driver to be onboard? All kinds of ploys and schemes are being devised. One approach consists of mounting orbs on the hood of the AV and having them light up or shift direction to clue in pedestrians as to what the AV is going to do next. Another approach consists of making available V2P, vehicle-to-pedestrian, electronic communications that would send alerts to a nearby pedestrian’s smartphone, allowing either one-way or two-way communications with the AV. And so on, see my coverage at the link here.
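To make the V2P idea concrete, here is a minimal sketch of what a one-way pedestrian alert might look like in code. To be clear, this is purely illustrative: the message fields, intent categories, and names are my own assumptions, not any actual or proposed V2P standard.

```python
# Hypothetical sketch of a one-way V2P alert. The fields, intent
# categories, and rendering are illustrative assumptions only and do not
# reflect any actual or proposed V2P standard.
from dataclasses import dataclass
from enum import Enum

class AvIntent(Enum):
    YIELDING = "yielding"      # AV will stop and wait for the pedestrian
    STOPPED = "stopped"        # AV is at a standstill
    PROCEEDING = "proceeding"  # AV intends to continue through

@dataclass
class V2PAlert:
    vehicle_id: str    # identifier of the broadcasting AV
    intent: AvIntent   # what the AV plans to do next
    distance_m: float  # approximate distance to the pedestrian, in meters
    eta_seconds: float # estimated time until the AV reaches the crossing

def render_alert(alert: V2PAlert) -> str:
    """Produce the plain-language text a pedestrian's phone might display."""
    if alert.intent is AvIntent.YIELDING:
        return f"Vehicle {alert.vehicle_id} is yielding; safe to cross."
    if alert.intent is AvIntent.STOPPED:
        return f"Vehicle {alert.vehicle_id} is stopped {alert.distance_m:.0f} m away."
    return (f"Vehicle {alert.vehicle_id} is proceeding, arriving in about "
            f"{alert.eta_seconds:.0f} seconds; please wait.")

print(render_alert(V2PAlert("AV-042", AvIntent.YIELDING, 12.0, 4.0)))
```

A two-way variant would add a channel for the pedestrian to acknowledge or request passage, which is precisely where many of the soft law questions about consent, reliability, and liability kick in.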

The paper homes in on three major facets of communications regarding AVs, consisting of safety, infrastructure support, and profit motive. The profit motive is a topic that hasn’t been getting as much attention as it will one day receive, partially because we are still in the early stages of AV deployment. I was pleased to see that the profit motive factor was included and is getting renewed interest.

I have previously laid out what I refer to as the “roving eye” of AVs (see the link here).

Here’s what that means. Envision that self-driving cars are roaming constantly throughout your community. These AVs are video recording whatever they happen upon. If you stitched together this voluminous digital data, you could pretty much glean a lot of monetizable information. Real estate agents might be willing to pay to find out which homes in a given neighborhood might be ready to go onto the market. Going deeper, a sports gear company might be eager to pay a fee to discover that a father and son were tossing a baseball on their front lawn (i.e., by using the address, the sports gear maker could target them with mailers about buying a new baseball bat and mitt). On and on this goes.

The worry too is that this is going to be intrusive and launch us into the insidious era of Big Brother. The stitched-together AV-collected video data could show you leaving your house in the morning, driving to work, walking to lunch at a nearby diner, and driving home at night. Your every move has been recorded. The other side of this coin is that we might be able to trace a criminal backward from the scene of a criminal act, essentially reversing time to see where their hideout might be. For my analyses of these considerations, see the link here and the link here.
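To illustrate how little “stitching” is actually required, consider this toy sketch in which sighting records captured independently by several AVs, merged and sorted by time, reconstruct one person’s day. Every record here is fabricated for illustration purposes only.

```python
# Toy sketch of the "stitching" worry: sighting records captured
# independently by different AVs, merged and sorted by time, reconstruct
# one person's entire day. Every record here is fabricated.
from datetime import datetime

# (timestamp, reporting AV, observed activity) from separate vehicles
sightings = [
    (datetime(2023, 10, 2, 8, 5), "AV-17", "leaving home at 12 Elm St"),
    (datetime(2023, 10, 2, 12, 10), "AV-03", "walking into the Main St diner"),
    (datetime(2023, 10, 2, 8, 40), "AV-92", "parking at the office lot"),
    (datetime(2023, 10, 2, 18, 15), "AV-44", "arriving back at 12 Elm St"),
]

# Merging the feeds yields a complete movement timeline for one person.
for when, av, what in sorted(sightings):
    print(f"{when:%H:%M} ({av}): {what}")
```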

Kudos to the paper for emphasizing that soft law must continue to address these evolving communications issues. As the author notes, there is a substantial need to keep moving ahead on voluntary standards, pilot programs, safety certifications, and a cornucopia of communications dimensions that will be guardrails for trying to ensure that AVs operate safely and prudently.

  • 3. The third paper was authored by Tracy Pearl, University of Oklahoma, and entitled “Cooperative Control: Autonomous Vehicles, Industry Accreditation, And Soft Law Regulatory Regimes.”

This paper is an enriching and informative look at the AV industry via the clever analogous milieu of the amusement park industry. In a fascinating twist, the paper examines how soft law aided in shaping the amusement park industry. There are lessons to be learned that can be applied to the AV industry.

I am a big proponent of trying to find analogous settings that can inform and reveal underpinnings for gauging what and where the AV industry might be heading. A compelling case is made that indeed the soft law evolutionary impacts on amusement parks can spur ideas and conventions for coping with the emergence of AVs. I found this especially invigorating since I previously had done extensive consulting work for Walt Disney Imagineering (WDI), the arm of Disney that conceives of and designs Disney theme parks. Plus, I had done consulting work for other theme parks, such as Knott’s Berry Farm (before its buyout by Cedar Fair).

As an example of one of the several recommendations in the paper, the noteworthy point is made that during the 1970s the major amusement parks coalesced into several focused industry groups that sought to devise overarching industry safety standards. Could we today have the same occur in the AV industry? There have been some similar formulations, though none on par with the magnitude and intensity of what occurred in the amusement park industry. But, as they say, hope springs eternal.

The next portion of the workshop shifted to the second overall topic, neurotechnology and global AI governance. Let’s take a look at those papers.

  • 4. The fourth paper of the event was authored by Lucille Tournas, Arizona State University, and entitled “Mergers and Acquisitions Agreements as A Soft Law Tool for Neurotechnology.”

This paper takes a deep dive into the neurotechnology field, such as the ongoing and advancing arena of brain-computer interfaces (BCI). Readers of my column are likely aware that I’ve been covering the BCI realm, especially periodically examining the famous or some say infamous Elon Musk BCI-firm Neuralink, see the link here and the link here.

Providing yet another indication of the value of soft law, the author emphasizes that neurotechnology as a fast-advancing and ever-changing technology requires the readiness and adaptability of soft law. I fully agree. Furthermore, the paper notes that not enough is being done on the soft law front for neurotechnology. I again agree with this. We must push harder to ensure that soft law gets (as the author urges) a profoundly needed seat at the governance table in this space.

The mainstay of the paper is to explore the leveraging that occurs when neurotechnology companies undergo a merger and acquisition (M&A), such as the Google acquisition of DeepMind.

Here’s what this is about. There seems to be a common thread amongst neurotechnology startups that the founder or founders are often wedded to a mission or vision that tends to incorporate soft law-related values. The acquiring firm might not see eye-to-eye on those precepts. A question arises as to whether the founder or founders can retain their embrace of soft law or whether they will have to forgo or diminish their beliefs.

In my experience, the dynamics vary depending upon the perceptions of the buying firm and the being-acquired firm. Sometimes, the acquiring firm is exceedingly eager to buy the startup; thus, nearly any reasonable set of soft law-related commitments is willingly included in the deal. On the other hand, sometimes the acquiring firm is in the driver’s seat. This can be rough on the being-acquired firm. The startup and its founder or founders will need to make gut-wrenching decisions as to how much of their deeply held beliefs they are willing to sacrifice to make the deal happen.

There are essentially four main variations, as I see it:

  • (1) The acquiring firm has heightened leverage over soft law concerns.
  • (2) The being-acquired firm has heightened leverage over soft law concerns.
  • (3) Mutual interest in soft law concerns that can be agreeably resolved.
  • (4) No particular soft law concerns on either side of the table.

You might visualize this as a four-square. We have the acquirer on one side of the four-square and the being-acquired firm on the other side. Each of the two squares per side is labeled as either soft law engaged or not soft law engaged.

As noted, there are times when the acquiring firm has strongly held soft law precepts and is going to want to drive those fervently into the being-acquired firm. If the startup has no particular soft law interests, this probably works out suitably, and the deal proceeds. If the startup also has soft law beliefs, there is then a reconciliation process that takes place. Often, since both parties are already up-to-speed on soft law considerations, they can iron out whatever differences might exist.

If neither the acquiring firm nor the startup has much grounding in soft law beliefs, they both proceed into the deal and the soft law matter doesn’t especially come up. They are blind to the soft law ramifications. I would dare say that a reckoning will eventually arise, but not customarily during the M&A process (unless an outside agency starts to ask pointed questions).

The remaining square of the four entails the circumstance of the being-acquired firm having devout soft law beliefs that are presumably under the radar of the acquiring firm. The acquirer might care and openly welcome absorbing those beliefs, or might not care and will just let them silently enter into the deal. There is always a hidden trick potentially at play, whereby the acquiring firm pretends not to care or appears to acquiesce to the soft law demands, and then, after the acquisition is completed, finds a covert means to subvert or undercut those beliefs (often leading to the founder or founders opting to move on).
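To recap the four squares just walked through, here is a minimal sketch that encodes them as a simple lookup. The wording of each outcome is my own shorthand for the scenarios discussed, not terminology from the paper.

```python
# A minimal sketch of the four-square: mapping whether the acquirer and
# the being-acquired startup are each "soft law engaged" to the likely
# dynamic during the deal. The outcome wording is my own shorthand.
FOUR_SQUARE = {
    # (acquirer engaged, startup engaged): likely M&A dynamic
    (True, False): "Acquirer drives its soft law precepts into the startup.",
    (True, True): "Mutual interest; differences get reconciled during the deal.",
    (False, True): "Startup presses its beliefs; acquirer may absorb, ignore, or later subvert them.",
    (False, False): "Soft law never comes up; a reckoning may arrive later.",
}

def deal_dynamic(acquirer_engaged: bool, startup_engaged: bool) -> str:
    return FOUR_SQUARE[(acquirer_engaged, startup_engaged)]

print(deal_dynamic(acquirer_engaged=False, startup_engaged=True))
```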

As noted in the paper, the bargaining power of each party is a vital conditional element in how the soft law aspects play out. Nowadays, with many firms having established AI Ethics Boards, the respective boards or committees tasked with AI ethics and soft law considerations will frequently enter into the fray on these M&A confabulations. For my analysis of AI Ethics Boards as an organizational AI soft law mechanism, see the link here.

  • 5. The fifth paper of the event was jointly authored by Wendell Wallach (presenting), Carnegie Center, Ann-Katrin Reuel (presenting), Stanford University, and Anja Kaspersen, Director for Global Markets Development, New Frontiers and Emerging Spaces at IEEE, in a paper entitled “Given the Absence of Hard Law, the Roles for Soft Law Functions in the International Governance of AI.”

This paper highlights again the importance of soft law when it comes to the governance of AI. In addition, the authors tackle a now visibly discussed question of whether there ought to be some form of overarching global AI governance body that would engage in these matters on a multinational basis. I’ve been examining these considerations again and again in my column and can earnestly attest to the difficulties and rocky road toward such a fruitful ambition, see my coverage at the link here and the link here, plus the link here, for example.

Akin to some related propositions, the paper posits that a Global AI Observatory (GAIO) ought to be put together. This is but one step or puzzle piece in a set of five symbiotic components that the authors propose. Perhaps the most controversial of the five is the suggestion that limited enforcement powers be included, seeking to have a solid say in compliance with global AI standards for ethical and responsible AI.

Those debating the question of whether to have an overarching multinational entity governing AI would do well to take a close look at the points made in this paper. We have a lot of hand-waving by some who aren’t taking the time to get into the weeds. The details are going to be essential. The authors walk through tradeoffs and considerations that must be closely aired and deliberated. Nice job.

  • 6. The sixth paper of the event was authored by Amanda Pustilnik, University of Maryland, and entitled “Wiki-c-BCI: A Proto-Soft Law Proposal, or, Private Industry as a Source of Soft Law in Consumer-Facing Brain-Computer Interfaces.”

In this paper, the workshop shifts back to the neurotechnology topic and especially the BCI realm. There is an indication that consumer-facing BCI, referred to as c-BCI, deserves particular attention when it comes to soft law and hard law matters. Yes, indeed, this is a vital subcategory of BCI that merits specialized consideration on those fronts. An approach designated as private actor-led (PAL) soft law development is explored. A means of readily grasping the approach is provided by noting similarities to the MPA (Motion Picture Association) rating system, along with other parallels.

When discussing ethical, legal, and social issues (ELSI), there are a number of rather thought-provoking concerns that emerge with the advent of c-BCI. Here’s a doozy for you. If a c-BCI could potentially read your mind, including capturing your thinking processes, would this be a capability that we want our government to be able to undertake? Some might say that, well, this is nothing out of the ordinary and you can either consent to this or not. Others might be repelled by the possibility and insist that the government should never go that far, regardless of consent.

It is a hot potato.

There are lots of these hot potatoes when it comes to c-BCI. The paper proposes establishing a collaborative Wiki or living handbook that serves as a preliminary or proto form of soft law on c-BCI. Neurotechnology industry stakeholders would engage in devising the Wiki handbook, and the aim would be to surface and make readily plain the ELSI considerations. Keep your eyes open to see whether this bold proposal can take hold.

The final portion of the workshop moved into the third overall topic, medicine and healthcare. Benjamin Faveri, Research Fellow in AI Governance, Law, & Policy, ASU Center for LSI, provided a highly informative overview of AI soft law and AI hard law in the medical and healthcare domains.

  • 7. The seventh paper of the event was authored by Adam Thierer, R Street Institute, and entitled “Exploring Soft Law Governance Mechanisms’ Application to AI Governance in The United States Healthcare Industry.”

This paper provides a comprehensive look at AI in the digital health arena, particularly when it comes to medical devices. I’ve previously covered that we are faced with grand questions when it comes to the use of AI in medical devices, see the link here and the link here.

As a former top tech executive at a major home healthcare firm and a consultant to startups in the AI and medical devices niche, I know full well the problems that adding AI into medical devices entails. In short, AI is often devised to “learn” or undertake successive data training while operating in real-time, such that the results of those changes are not necessarily knowable or fully controllable in advance. The AI can veer beyond what the AI augmentation was expected to do. You might accept that variation in other circumstances, but not when life or death is literally on the line.
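To illustrate the drift concern in miniature, here is a toy sketch of a naive continuously learning decision threshold that wanders away from its validated setting once field data skews differently than the validation data. The numbers, update rule, and threshold are illustrative assumptions, not any real medical-device algorithm.

```python
# Toy sketch of continuous-learning drift. A decision threshold that is
# nudged by each new field reading wanders away from the setting that was
# validated before deployment. All numbers and the update rule are
# illustrative assumptions, not any real medical-device algorithm.
import random

random.seed(7)

VALIDATED_THRESHOLD = 0.50  # setting approved prior to deployment
LEARNING_RATE = 0.05

def update(threshold: float, reading: float) -> float:
    """Naive online update: move the threshold toward each new reading."""
    return threshold + LEARNING_RATE * (reading - threshold)

# Field data that skews higher than the data the device was validated on.
field_readings = [random.gauss(0.65, 0.05) for _ in range(200)]

threshold = VALIDATED_THRESHOLD
for reading in field_readings:
    threshold = update(threshold, reading)

# The device no longer behaves as it did at approval time.
print(f"Validated: {VALIDATED_THRESHOLD:.2f}, after deployment: {threshold:.2f}")
```

This is exactly why regulators wrestle with AI-infused devices: the product that was validated is not, strictly speaking, the product that is running in the field a year later.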

Some key recommendations for the FDA are that greater reliance on transparency ought to be observed, along with a need for heightened AI explainability. The paper further suggests that there be expanded enforcement discretion and reliance on best practices. Included in the set of recommendations is the vital nature of pilot programs and sandboxing. There is also a call for expanded educational and literacy efforts when it comes to healthcare, especially regarding medical devices and AI. Finally, the paper asks that a more distributed approach be considered when it comes to governance in these matters (this is detailed in the paper).

The paper overall is chock-full of valuable considerations and recommendations.

  • 8. The eighth paper of the event was authored by Toni Lorente, King’s College London, and entitled “Institutional Review Boards as Soft Governance Mechanisms of R&D: Governing the R&D of AI-based Medical Products.”

Institutional Review Boards (IRBs) have become a common staple of biomedical research that engages in the use of human subjects. Per the FDA website: “An Institutional Review Board is a group that has been formally designated to review and monitor biomedical research involving human subjects. In accordance with FDA regulations, an IRB has the authority to approve, require modifications in (to secure approval), or disapprove research. This group review serves an important role in the protection of the rights and welfare of human research subjects.”

The paper posits that IRBs are essential for soft law governance inclusion during the R&D stages of AI-based medical products. This is a logical postulation given that human subjects are ordinarily utilized when seeking to do research and development of AI-based medical devices. It would seem apparent that using IRBs would be advantageous.

Not everyone necessarily agrees. Some might argue that there is no need to involve IRBs in the R&D portion of the system development life cycle. This is too early, goes the argument. Wait until the AI-based medical product is forwarded into the productization stage. You are overburdening the R&D. The contrasting view and vocal retort is that if you wait until further along, the horse is already out of the barn. The time to get things figured out regarding soft law and human subjects is in the upfront moments of the endeavor. You will either have to undo prior effort to rectify things or you will try to skirt vital considerations by sticking with inadequate or insufficient decisions made at the earlier stages.

It is a conundrum.

The author examines the product development pipeline and addresses how AI and soft law enter into the picture. If you weren’t thinking about IRBs in this context, here’s your chance to do so.

  • 9. The ninth paper of the event was authored by Monica Lopez, Co-Founder and CEO of Cognitive Insights for Artificial Intelligence, and the paper was entitled “Revaluating Human Values for Patient Care in The Age Of AI.”

In prior days of AI advancement, there was said to have been a focus on being AI-centric, meaning that the attention by AI makers and AI developers was primarily on getting the AI to function. The human impacts were said to be secondary, or maybe not given any attention at all. Those pesky humans, who needs them anyway?

More recently, the AI field has shifted toward a realization that maybe things ought to be human-centric when it comes to AI (see my discussion at the link here). The idea is that we need to put people or human concerns at the center of any kind of AI design or development. Start with the humans. Keep humans in mind. Make sure to end with the humans still at the center.

I know it seems ridiculously simple, but many times a heads-down AI maker or AI developer can become so engaged in the AI technology that they lose sight of the human factor. This might be likened to seeing the trees but not the forest.

In this paper, a notable concern is raised that AI in healthcare at times has forgotten or forsaken the call for a human-centric mantra. AI can exhibit unsavory biases that adversely affect humans. AI can potentially violate privacy provisions such as HIPAA. As a former HIPAA officer for a major healthcare provider, and a tech executive developing and rolling out AI systems, I am at times sick to my stomach by what some healthcare providers are doing these days when it comes to devising and adopting AI.

Some AI healthcare systems attempt to consider human-centric considerations in only tiny portions of their overall system. This is the piecemeal approach. It won’t work. Most of these AI-infused systems have zillions of places where the AI can go awry or even do damage by design. A macroscopic viewpoint that sees the forest and not just the trees is needed.

The paper says as much by emphasizing that human values need to be delineated across the entire AI product development life cycle. Do not cherry-pick a snippet of the life cycle. A human-centered approach needs to run the entire gamut. The author dives into four crucial soft law precepts, namely the importance of fairness, integrity, resilience, and explainability.

Any firm developing AI for healthcare or adopting an AI-infused healthcare system had better get their head on straight and consider all the AI hard law and AI soft law considerations that arise. It is good for business. It is good for the quality of care for your patients. It might keep you out of the doghouse and avert lawsuits and possibly even criminal prosecutions. The point is that for lots of substantive and sensible reasons, being human-centric in your AI deliberations is a smart way to go.

Conclusion

I have hopefully whetted your appetite to seek out and learn more about AI soft law. If so, I have accomplished my heartfelt goal. Welcome to the club.

A final comment for now.

The famed English philosopher and jurist, Jeremy Bentham, made this poignant remark about lawyers and the law: “The power of the lawyer is in the uncertainty of the law.” I often cite this quote to emphasize that hard law is not necessarily as well-defined and definitive as might seemingly be the case. Those of us in the AI and law field are well aware that laws are semantically ambiguous, which means that laws are composed of words and that our words can be interpreted in a multitude of ways.

Soft laws are at times the missing glue or connectedness that can fill in the gaps of our hard laws. Soft laws can be quicker to market while hard laws are still being heatedly debated. Soft laws can provide greater context associated with hard laws. Hard laws are sometimes based on the insights and wordings arduously devised by those who have put together soft laws.

And so on.

AI soft laws and AI hard laws are like two peas in a pod. Do not ignore either one. They each have their respective place in this world.

Yes, I said it, AI soft laws must get their due respect, which is most certainly a worthwhile endeavor and aspirational ambition that we ought to all rally around.
