Generative AI Is Stoking Medical Malpractice Concerns For Medical Doctors In These Unexpected Ways, Says AI Ethics And AI Law

In today’s column, I will be examining how the latest in generative AI is stoking medical malpractice concerns for medical doctors, doing so in perhaps unexpected or surprising ways. We all pretty much realize that medical doctors need to know about medicine, and it turns out that they also need to be sufficiently aware of the intertwining of AI and the law during their medical careers.

Here’s why.

Over the course of a medical doctor’s career, they are abundantly likely to face at least one medical malpractice lawsuit. This is something that few doctors probably give much thought to when first pursuing a career in medicine. Yet, when a medical malpractice suit is brought against them, the occurrence can have a cataclysmic impact on their perspective on medicine and be a stupefying emotional roller coaster in their life and their livelihood.

A somewhat staggering statistic showcases the frequency and magnitude of medical malpractice lawsuits in the U.S.:

  • “Medical malpractice litigation is all too common in the United States, with an estimated 17,000 medical lawsuits filed annually, resulting in approximately $4 billion in yearly payments and expenditures” (source: “Hip & Knee Are the Most Litigated Orthopaedic Cases: A Nationwide 5-Year Analysis of Medical Malpractice Claims” by Nicholas Sauder, Ahmed Emara, Pedro Rullan, Robert Molloy, Viktor Krebs, and Nicolas Piuzzi, The Journal of Arthroplasty, November 2022).

The fact that 17,000 medical malpractice lawsuits are filed each year might not seem like a lot, given that there are approximately 1 million medical doctors in the USA and thus this amounts to just around 2% getting sued per year, but you need to consider that this happens year after year. It all adds up. Basically, over a ten-year period that would amount to roughly 20% of medical doctors getting sued (assuming we smooth out repeated instances), while over a 40-year medical career the odds would seemingly rise to around 80% (using the same rough assumptions). A quick back-of-the-envelope calculation follows below.
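To see how those career-long odds shake out, here is a back-of-the-envelope sketch in Python. It uses the unrounded 1.7% annual rate (the prose rounds up to 2%, which yields the ~20% and ~80% figures) and contrasts the simple additive view with compounding under an assumption of independent years; the compounded numbers come out lower, but the thrust of the argument holds either way.

```python
# Back-of-the-envelope check of career-long lawsuit odds, using the
# article's figures: ~17,000 suits per year across ~1,000,000 doctors.
annual_rate = 17_000 / 1_000_000                # ~1.7% of doctors sued per year

for years in (10, 40):
    naive = annual_rate * years                 # the simple "it all adds up" view
    compounded = 1 - (1 - annual_rate) ** years # assuming independent years
    print(f"{years} yrs: naive {naive:.0%}, compounded {compounded:.0%}")

# Output:
# 10 yrs: naive 17%, compounded 16%
# 40 yrs: naive 68%, compounded 50%
```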

A research study that widely examined medical malpractice lawsuits in the U.S. made these salient points about the chances of a medical doctor experiencing such a suit and also clarified what a medical malpractice lawsuit consists of:

  • “A study published in The New England Journal of Medicine estimated that by the age of 65 years, 75% of physicians in low-risk specialties would experience a malpractice claim, rising to 99% of physicians in high-risk specialties.”
  • “Medical malpractice claims are based on the legal theory of negligence. To be successful before a judge or jury in a malpractice case, the patient-plaintiff must show by a preponderance of the evidence (it is more likely than not, i.e., there is a >50% probability that professional negligence did occur based on the evidence presented) the physician-defendant had a duty to the patient to render non-negligent care; breached that duty by providing negligent care; this breach proximately caused the injury or damage; and the patient suffered injury or damages” (source: “Understanding Medical Malpractice Lawsuits” by Bryan Liang, James Maroulis, and Tim Mackey, American Heart Association, Stroke, March 2023).

If you were to group medical malpractice lawsuits by the claimed basis for the litigation, you would see something like the following breakdown (note that each case can be listed in more than one category):

  • Estimated 31% of medical malpractice cases: Delayed diagnosis and/or failure to properly diagnose.
  • Estimated 29% of medical malpractice cases: Devised treatment gives rise to adverse complications.
  • Estimated 26% of medical malpractice cases: Adverse outcomes arise that lead to worsening medical conditions.
  • Estimated 16% of medical malpractice cases: Delay in timely treatment and/or failure to sufficiently treat.
  • Estimated 13% of medical malpractice cases: Wrongful death.
  • Other Various Reasons: Medication errors, improper documentation, lack of suitable informed consent, etc.

We will explore how each of those categories relates to the use of generative AI by a medical doctor.

Before doing so, it might be worthwhile to consider the grueling gauntlet associated with a medical malpractice lawsuit.

Generally, a patient or others associated with the patient are likely to indicate to the medical doctor that they are considering a formal filing concerning the perceived adverse medical care provided by that medical doctor (in some instances, this might instead appear out of the blue). The hint or suggestion can then lead to a filing of legal pleadings and the official initiation of the medical malpractice lawsuit.

A medical doctor would then have a series of meetings with their legal counsel and likely their medical malpractice insurer, plus others in their medical care circle or sphere. At some point, assuming the case continues, a pleading judgment would be rendered by the court. If the case continues further, there would be a period of evidentiary discovery associated with the matter, a trial, and depending upon the outcome, a chance that an appeal might be undertaken too.

Throughout that lengthy process, a medical doctor is usually still fully underway in their medical endeavors. They need to simultaneously cope with their already overloaded medical workload and provide ongoing and decidedly disruptive attention and energy toward the medical malpractice lawsuit. Their every thought and action associated with the medical case in dispute will be closely scrutinized and meticulously questioned. This can be jarring for medical doctors who are not used to being openly challenged in an especially antagonistic, adversarial manner (versus a perhaps day-to-day normal collegial style).

Given the above background, let’s next take a look at how generative AI fits into this picture.

Generative AI In The Realm Of Medical Doctor Advisement

I’d guess that you already know that generative AI is the latest and hottest form of AI. There are various kinds of generative AI, such as AI apps that are text-to-text or text-to-essay in their generative capacity (meaning that you enter text, and the AI app generates text in response to your entry), while others are text-to-video or text-to-image in their capabilities. As I have predicted in prior columns, we are heading toward generative AI that is fully multi-modal and incorporates features for doing text-to-anything or as insiders proclaim text-to-X, see my coverage at the link here.

In terms of text-to-text generative AI, you’ve likely used or almost certainly heard about ChatGPT by AI maker OpenAI, which allows entry of a text prompt and generates an essay or interactive dialogue in response. For my elaboration on how this works, see the link here. The usual approach to using ChatGPT or other similar generative AI is to engage in an interactive dialogue or conversation with the AI. The seemingly fluent nature of those AI-fostered discussions is admittedly a bit amazing and at times startling.

Please know though that this AI, and indeed no other AI, is currently sentient. Generative AI is based on a complex computational algorithm that has been data-trained on text from the Internet and admittedly can do some quite impressive pattern-matching, performing a mathematical mimicry of human wording and natural language.
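To make that “mathematical mimicry” point concrete, here is a toy sketch of the next-token sampling at the heart of generative AI. The token scores are invented for illustration; real models compute them over enormous vocabularies.

```python
import math
import random

# Toy next-token scores a model might assign after the prompt
# "The patient presents with chest ..." (the numbers are invented).
logits = {"pain": 4.2, "tightness": 2.9, "discomfort": 2.5, "wall": 0.3}

def sample_next_token(logits, temperature=0.8):
    # Softmax over the scores, then draw a token in proportion to its
    # probability -- pattern-matched mimicry, not understanding.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {tok: math.exp(s) / total for tok, s in scaled.items()}
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    return choice, probs

token, probs = sample_next_token(logits)
print(probs)   # 'pain' dominates, but the others retain some probability
print(token)   # usually 'pain', yet not always -- hence non-repeatable outputs
```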

Into all of this comes a plethora of AI Ethics and AI Law considerations.

There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes is a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

A medical doctor is likely to be especially intrigued by generative AI.

A lot of publicity arose in the medical community when a study earlier this year reported that generative AI such as ChatGPT was able to perform at or near the passing threshold of the United States Medical Licensing Exam (USMLE), at roughly 60% accuracy. Here’s what the researchers said:

  • “Artificial intelligence (AI) systems hold great promise to improve medical care and health outcomes. As such, it is crucial to ensure that the development of clinical AI is guided by the principles of trust and explainability. Measuring AI medical knowledge in comparison to that of expert human clinicians is a critical first step in evaluating these qualities. To accomplish this, we evaluated the performance of ChatGPT, a language-based AI, on the United States Medical Licensing Exam (USMLE). The USMLE is a set of three standardized tests of expert-level knowledge, which are required for medical licensure in the United States. We found that ChatGPT performed at or near the passing threshold of 60% accuracy. Being the first to achieve this benchmark, this marks a notable milestone in AI maturation. Impressively, ChatGPT was able to achieve this result without specialized input from human trainers” (source: “Performance of ChatGPT On USMLE: Potential For AI-assisted Medical Education Using Large Language Models” by Tiffany Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, and Victor Tseng, PLOS Digital Health, February 9, 2023).

Medical doctors likely raised their eyebrows at the fact that generative AI can seemingly pass an arduous standardized medical exam.

Rather obvious questions immediately come to mind:

  • Does this suggest that generative AI might be coming for my job, some doctors undoubtedly asked, namely AI that performs medical analyses and dispenses medical advice?
  • Am I going to be replaced by generative AI or will I instead be acting in conjunction with generative AI on my medical diagnoses and medical advisement?
  • Should I start looking into using generative AI right away and not wait until I am career-wise disrupted or caught off-guard?
  • What is the most sensible or prudent use of generative AI for medical work as a medical doctor?
  • Etc.

The American Medical Association (AMA) has promulgated the position that this type of AI ought to be referred to as augmented intelligence:

  • “The AMA House of Delegates uses the term augmented intelligence (AI) as a conceptualization of artificial intelligence that focuses on AI’s assistive role, emphasizing that its design enhances human intelligence rather than replaces it” (source: AMA website).

Let’s for the moment set aside the notion of an autonomous version of generative AI that functions entirely without any human medical doctor involvement. I’m not suggesting that such autonomy isn’t in our future; I’m only seeking to conveniently narrow the discussion herein to when generative AI is used in an assistive mode.

I’ve put together an extensive list of the benefits associated with a medical doctor opting to use generative AI. In addition, and of great importance, I have also assembled a list of the problems associated with a medical doctor using generative AI. We need to weigh the problems or downsides against the benefits or upsides, and the same holds in the other direction: the benefits or upsides must be considered in light of the problems or downsides.

Life seems to always be that way, involving calculated tradeoffs and ROIs.

I’ll explore the benefits first, just because it seems a more cheerful way to proceed. The problems or downsides will be explored next. Finally, after examining those two counterbalancing perspectives, we will jump into the medical malpractice specifics about the use of generative AI by a medical doctor.

Hang onto your hats for a bumpy ride.

Touted Benefits Of Generative AI Usage By Medical Doctors

Think of generative AI as being much different than merely doing an online search for medical info via a conventional web browser (note that the newest browsers are starting to encompass generative AI capabilities, see my coverage at the link here). A traditional web browser will bring back tons of hits that you need to battle through. Some of the found instances will be useful, some will be useless. Worse still, some of the search engine findings might be rife with misleading medical info or outright wrong medical info.

Generative AI is supposed to be an interactive, dialogue-oriented experience. You can simply enter a prompt such as a patient profile and ask the generative AI to do a medical analysis as a one-time emitted essay, but that’s not the most productive way to use these AI apps. The full experience consists of going back and forth with the generative AI. For example, you enter a patient profile and ask for a diagnosis. The AI responds. You then question the diagnosis and ask further questions. It is supposed to be highly interactive.

Another angle for using generative AI would be for a medical doctor to enter a devised diagnosis and ask the AI app to critique or review the proposed advisement. This once again should proceed on an interactive basis. The generative AI might question whether you considered this or that medical facet. You respond. All in all, the aim is to have a kind of double-check or at least a means to bounce ideas around to see whether you have exhaustively considered multiple possibilities.
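To give a feel for that back-and-forth pattern, here is a minimal sketch using the openai Python package (the 0.27-era ChatCompletion interface). The model name, prompts, and de-identified vignette are placeholders of my own; this is an illustrative pattern, not a vetted clinical tool.

```python
import openai  # pip install openai (the 0.27-era interface is assumed here)

# The conversation is just a running list of messages; each new question
# is appended so the AI sees the full back-and-forth context.
messages = [
    {"role": "system",
     "content": "You are assisting a physician. Flag uncertainty explicitly."},
    {"role": "user",
     "content": "De-identified vignette: 58-year-old with exertional dyspnea, "
                "bilateral ankle edema, and elevated BNP. Critique this working "
                "diagnosis: community-acquired pneumonia."},
]

def ask(messages):
    # One turn of the dialogue; the reply is appended to preserve context.
    reply = openai.ChatCompletion.create(
        model="gpt-4", messages=messages,
    )["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask(messages))  # the AI's critique of the proposed diagnosis
messages.append({"role": "user",
                 "content": "What findings would argue against heart failure?"})
print(ask(messages))  # the follow-up keeps the interactive dialogue going
```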

Here are five major ways that I usually suggest medical doctors make use of generative AI, assuming they are interested in doing so:

  • 1) Medical brainstorming: Use generative AI to kick around medical ideas and get outside of your own medical mental in-the-box constraints
  • 2) Drafting medical content: Use generative AI to produce medical content for filling in forms or preparing needed medical documents
  • 3) Reviewing medical scenarios: Use generative AI to assess and comment on medical propositions or scenarios
  • 4) Summarizing medical narratives: Use generative AI to readily examine and summarize dense or lengthy medical content that you want to get the gist of
  • 5) Converting medical jargon into plain language: Use generative AI to convert hefty medical jargon into plain language that can be conveyed to patients or patient families (a one-shot sketch of this follows below)
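For instance, uses #4 and #5 can be as simple as a one-shot call (the same assumed openai interface as in the earlier sketch; the note text here is invented):

```python
import openai  # same 0.27-era interface assumed as in the earlier sketch

def plain_language(note: str) -> str:
    # Use #5 from the list: convert medical jargon into patient-friendly wording.
    return openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Rewrite this note for a patient at an 8th-grade reading "
                       "level, keeping every clinically relevant fact:\n" + note,
        }],
    )["choices"][0]["message"]["content"]

print(plain_language(
    "Echocardiogram demonstrates LVEF 35% with global hypokinesis; "
    "initiating GDMT with close outpatient follow-up."
))
```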

There are numerous other uses of generative AI for medical doctors. I’m merely noting the seemingly more common uses and ones that can be done with relative ease.

You are now primed for my list of beneficial uses of generative AI for medical doctors within the boundaries of medical decision-making and medical decision support:

  • Benefit: the generative AI can potentially focus on the particulars of a given patient and thus be far more applicable and specific than broader medical info available online.
  • Benefit: the generative AI might be more well-rounded across medical facets than a particular medical colleague with a narrow specialty.
  • Benefit: the generative AI might be more detailed and pinpointed on deep medical specifics than a medical colleague of broader capacity.
  • Benefit: the generative AI is available 24×7 with no delay in access, versus seeking advice from a busy or unavailable colleague.
  • Benefit: the generative AI might be updated with the latest medical content and be ahead of where a medical doctor presently stands on the state-of-the-art in medicine.
  • Benefit: the generated indications can be readily digitally stored and later retrieved when needed, versus verbal conversations with colleagues that are later subject to hindsight interpretation.
  • Benefit: the generative AI can bring together vast troves of disparate medical info and consolidate and select what fits the particular case at hand.
  • Benefit: the generative AI can aid in filling out needed medical forms and documentation, reducing the paperwork time and energy typically required of a medical doctor.
  • Benefit: the generative AI can serve as a sounding board for medical scenario analyses and aid in ascertaining the most advisable medical path.
  • Benefit: the generative AI can be a brainstorming tool that inspires out-of-the-box medical considerations a medical doctor might otherwise not have considered.
  • Benefit: the generative AI can do a first-pass review of a proposed medical diagnosis or tentative medical decision and provide valuable food for thought to the medical doctor.
  • Benefit: the generative AI can serve as a learning aid, enabling a medical doctor to get quickly up-to-speed on needed medical matters.
  • Benefit: the generative AI might detect and alert a medical doctor to their own potential medical errors and omissions.
  • Benefit: the generative AI might discern obscure or extraordinary medical circumstances, a kind of Dr. House-in-a-box, that otherwise might have been skipped or gone unnoticed.
  • Benefit: if called upon to explain a medical decision, a medical doctor might refer to the generative AI when discussing medical matters with patients and their families, as a means of reassuring them about the validity of the medical decisions made.
  • Benefit: patients and patient families will potentially use generative AI to try to understand the medical facets being undertaken by a medical doctor and thereby reduce demands on the doctor’s time to explain the medical underpinnings.
  • Benefit: the generative AI might do a better job of explaining medical matters than a medical doctor and provide a complementary secondary bedside function for the medical doctor.
  • Benefit: the generative AI might inspire a medical doctor to leverage the latest in high-tech when seeking the best medical care for their patients.
  • Benefit: if faced with a medical malpractice lawsuit, the medical doctor might be able to bolster their medical stance by pointing to the use of generative AI as an additional tool showcasing the extent and depth of their medical decision-making process.
  • Other benefits

I snuck into that foregoing list an indication about potentially using generative AI as a means of later bolstering your position during a medical malpractice lawsuit.

Let’s revisit my earlier indication about the categories associated with medical malpractice lawsuits and consider how generative AI might have been able to avoid or overcome the noted lamentable outcomes:

  • Delayed diagnosis and/or failure to properly diagnose: Use of generative AI might have sped up the time needed to do the diagnosis and/or might have guided or double-checked the medical doctor toward a proper diagnosis, thus averting the adverse outcome.
  • Devised treatment gives rise to adverse complications: Use of generative AI might have forewarned the medical doctor about adverse complications that could arise due to the treatment and that weren’t otherwise foreseen or failed to be conveyed to the patient.
  • Adverse outcomes arise that lead to worsening medical conditions: Use of generative AI might have identified or noted the worsening medical conditions on a trending basis that the medical doctor might otherwise not have readily ascertained.
  • Delay in timely treatment and/or failure to sufficiently treat: Use of generative AI might provide a sense of needed timing for treatment and/or might note that sufficient treatment is not seemingly taking place.
  • Other benefits

All in all, those benefits assuredly seem quite convincing.

How would any medical doctor not be using generative AI, given the litany of benefits listed?

We next turn toward the set of problems associated with using generative AI by medical doctors. This will aid us in weighing the upsides versus the downsides.

Touted Downsides Of Generative AI Usage By Medical Doctors

I am going to present to you a slew of potential downsides or problems associated with using generative AI by medical doctors.

Pundits who believe wholeheartedly in the use of generative AI by medical doctors will have a bit of heartburn when they see the list. They will almost certainly object that many of the listed downsides or problems can be overcome. To some extent, yes, that is true.

We also need to acknowledge that the benefits I just listed can readily be undermined or attacked. For each of the benefits listed, you can easily find ways to undercut the stated benefit. Some of those benefits might seem to be the proverbial pie-in-the-sky. They might happen, though the odds of a given benefit arising are as scarce as hen’s teeth, some would insist.

Fair is fair.

Moving into the potential downsides, let’s take a look at one notable use case, and then we’ll see the entire list. One of the biggest problems or downsides of today’s generative AI is that these AI apps are well-known to produce errors and falsehoods, exhibit biases, and even wildly make things up in what are considered AI hallucinations (a terminology that I disfavor, for the reasons stated at the link here).

Imagine then this scenario. A medical doctor is using generative AI for medical analysis purposes. A patient profile is entered. The medical doctor has done this many times before and has regularly found generative AI to be quite useful in this regard. The generative AI has provided helpful insights and been on-target with what the medical doctor had in mind.

So far, so good.

In this instance, the medical doctor is in a bit of a rush. Lots of activities are on their plate. The generative AI returns an analysis that looks pretty good at first glance. Given that the generative AI has been seemingly correct many times before and given that the analysis generally comports with what the medical doctor already had in mind, the generative AI interaction “convinces” the medical doctor to proceed accordingly.

It turns out that, unfortunately, the generative AI produced an error in the emitted analysis. Furthermore, the analysis was based on a bias associated with the prior data training of the AI app. The scanned medical studies and medical content that had been used for pattern-matching were shaped around a particular profile of patient demographics. This particular patient is outside of those demographics.

The upshot is that the generative AI might have incorrectly advised the medical doctor. The medical doctor might have been lulled into assuming that the generative AI was relatively infallible due to the prior repeated uses that all went well. And since the medical doctor was in a rush, it was easier to simply take the confirmation from the generative AI than to question whether a mental shortcut was taking place.

In short, it is all too easy to fall into the mental trap of assuming that the generative AI is performing on par with a human medical advisor, a dangerous anthropomorphizing of the AI. This can happen through a step-by-step lulling process. The AI app is also likely to portray its essays or interactions in a highly poised and confidently worded fashion. This too is bound to sway the medical doctor, especially one under pressure to proceed.

Take a deep breath and take a gander at this list of potential pitfalls and problems when generative AI is used by a medical doctor:

  • Problem: generative AI errors, biases, falsehoods, and AI hallucinations could mislead or confound whatever medical advisement or essay is being generated for use.
  • Problem: a lack of producible or cited documented supporting references for the generated essays and interactive dialoguing of generative AI.
  • Problem: cited supporting references that are themselves AI hallucinations or otherwise do not exist, yet are portrayed as factual and real.
  • Problem: generic generative AI is data-trained broadly on the Internet and not on the specifics of medical content.
  • Problem: medical content scanned during the Internet training might not be of a bona fide medically sound nature.
  • Problem: the generative AI might be frozen in time and not have scanned the latest medical content available on the Internet.
  • Problem: the medically scanned Internet content might come from narrow sources or fail to encompass a wide enough range of bona fide medical materials.
  • Problem: the scanned bona fide medical materials might be improperly pattern-matched, overstating or understating what the medical content conveys.
  • Problem: the generative AI is solely a mathematical and computational pattern-matching of existing writing on medical matters; it is not sentient and has no semblance of common sense, human understanding, etc.
  • Problem: the generative AI is not tailored to the specifics of medical diagnoses and medical decision-making and is, in a sense, out of its league in the medical domain.
  • Problem: a medical doctor using generative AI needs to employ it adequately and sensibly, such as via so-called prompt design or prompt engineering, else the effort might inadvertently become counterproductive (see the prompt-template sketch after this list).
  • Problem: generative AI functions on a probabilistic basis, so the essays and interactive dialogue are likely to change and not be repeatable or reliably consistent.
  • Problem: the generative AI has likely not been subjected to medical peer review or other measurements to ensure medical accuracy.
  • Problem: the context window and session storage limitations of the generative AI might subtly and without notification shortchange the medical analysis being conveyed or discussed.
  • Problem: a medical doctor might be lulled into assuming that the generative AI is correct and ergo over-rely on the essay or interactive dialogue.
  • Problem: a medical doctor in a hurried or overworked mindset might fail to sufficiently double-check the AI-emitted medical indications.
  • Problem: the entry of patient-related information by a medical doctor into generative AI might be a privacy intrusion and a violation of HIPAA.
  • Problem: the entry of patient-related information into generative AI might be onerous to undertake and become yet another time-consuming paperwork drain for medical doctors.
  • Problem: a medical doctor might be forcibly required to use generative AI in a hospital or medical setting even if the usage is potentially time-draining or counterproductive.
  • Problem: this use of generative AI is considered potentially life-or-death and therefore abundantly risky and well beyond what the AI maker devised or intended.
  • Problem: the use of generative AI for medical decision-making may violate the software licensing stipulations of the AI maker and put the medical doctor and medical provider in a tenuous legal posture.
  • Problem: if called upon to explain a medical decision, a medical doctor might refer to the generative AI as though it were a coherent medical advisor, upsetting patients and patient families over a perceived lack of suitable human medical judgment.
  • Problem: patients and patient families will potentially use generative AI to second-guess a medical doctor and raise concerns that are based on faulty considerations.
  • Problem: the use of generative AI by a medical doctor can open up new avenues of medical malpractice and enter an untested medical-legal realm that is murky and nascent.
  • Other problems
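As an illustration of the prompt-engineering point above, here is a minimal sketch of a structured prompt template. The structure (role, task, constraints, required output format) is the prompt-engineering part; the wording and fields are my own invented example, not a validated clinical prompt.

```python
# A minimal, illustrative prompt template for asking generative AI to
# critique a proposed diagnosis. The wording is an invented example.
REVIEW_TEMPLATE = """You are acting as a second-opinion assistant for a physician.

Task: Critique the proposed diagnosis below.
Constraints:
- List differential diagnoses that should be ruled out.
- Flag any missing information you would need before agreeing.
- State your uncertainty explicitly; do not overstate confidence.

De-identified vignette:
{vignette}

Proposed diagnosis:
{diagnosis}

Respond using the headings: Agreement, Differentials, Missing Data, Caveats."""

prompt = REVIEW_TEMPLATE.format(
    vignette="58-year-old, exertional dyspnea, bilateral ankle edema, elevated BNP.",
    diagnosis="Community-acquired pneumonia",
)
print(prompt)  # this string would be sent as the user message in the earlier sketch
```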

I’ll highlight a few of those points.

The use of generative AI for private or confidential information is something that you need to be especially cautious about. Entering patient-specific info could be a violation of HIPAA (Health Insurance Portability and Accountability Act) and lead to various legal troubles. For more on how generative AI is potentially lacking in privacy and cybersecurity, see my coverage at the link here.
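To make the point tangible, here is a minimal sketch of scrubbing a few obvious identifiers before any text is sent to a generative AI service. The patterns are an illustrative assumption of mine; real HIPAA de-identification (Safe Harbor’s identifier categories, or expert determination) is far more involved, and a regex pass like this is not a compliance tool.

```python
import re

# Minimal illustrative scrubber -- catches only a few obvious identifier
# patterns; real HIPAA de-identification is far more involved.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US Social Security numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),    # phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
    (re.compile(r"\bMRN[:# ]?\d+\b", re.IGNORECASE), "[MRN]"),  # medical record numbers
]

def scrub(text: str) -> str:
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Pt J. Doe, MRN 4471823, cb 555-867-5309, reports chest pain on exertion."
print(scrub(note))
# -> "Pt J. Doe, MRN [MRN], cb [PHONE], reports chest pain on exertion."
# Note: the patient's name still leaks through -- exactly why a regex
# sketch like this is not sufficient for HIPAA compliance.
```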

Another issue is whether generative AI is allowed to be used for medical purposes in the first place. Some software licensing agreements explicitly state that professional medical use is not allowed. This, once again, can raise legal issues. See my discussion about prohibited uses of generative AI at the link here.

Each of the problematic or downside points in the list above is worthy of a lengthy elaboration about what they are and how they can be overcome. I don’t have space to cover this in today’s column, but if there is sufficient reader interest I’ll gladly go into more depth in later columns.

The Medical Malpractice Dual-Edged Sword Of Generative AI Use

I will finish up this discussion by noting the dual-edged sword of generative AI use in the medical domain and how this relates to medical malpractice considerations.

First, a recent paper in JAMA Health Forum, part of the JAMA Network of the American Medical Association, identified various key facets of medical malpractice associated with generative AI:

  • “The potential for large language models (LLMs) such as ChatGPT, Bard, and many others to support or replace humans in a range of areas is now clear—and medical decisions are no exception. This has sharpened a perennial medicolegal question: How can physicians incorporate promising new technologies into their practice without increasing liability risk?”
  • “The answer lawyers often give is that physicians should use LLMs to augment, not replace, their professional judgment. Physicians might be forgiven for finding such advice unhelpful. No competent physician would blindly follow model output. But what exactly does it mean to augment clinical judgment in a legally defensible fashion?” (source: “ChatGPT And Physicians’ Malpractice Risk” by Michelle M. Mello and Neel Guha, JAMA Health Forum, May 18, 2023)

The noted emphasis was on how to incorporate generative AI into a medical doctor’s practice without increasing liability risk. A vital recommendation is that a medical doctor needs to realize that they cannot and should not blindly abide by whatever the generative AI emits. This, though, as noted, is generally something that a medical doctor would likely already assume to be the case.

The devil is in the details.

A day-to-day use of generative AI is a lot different than a once-in-a-blue-moon usage. There is a tendency in day-to-day routinization to become complacent and fall into the mental trap of being less skeptical about what the generative AI is producing. The list of problems or downsides that I’ve shown earlier is a sound basis for being cautious about whether to adopt generative AI or not.

The authors also provided this recap of their overarching viewpoint on the matter:

  • “The rapid pace of computer science means that every day brings an improved understanding of how to harness LLMs to perform useful tasks. We share in the general optimism that these models will improve the work lives of physicians and patient care. As with other emerging technologies, physicians and other health professionals should actively monitor developments in their field and prepare for a future in which LLMs are integrated into their practice” (ibid).

We need to also consider what medical malpractice lawyers are going to do in response to the advent of generative AI for use by medical doctors.

Here’s what I mean.

One cogent legal argument is that the use of generative AI demonstrably caused an undue increase in the risk associated with the performance of a medical doctor. That’s an obvious line of attack. If a medical doctor relied upon generative AI, an assertion can be made that they expressly took on heightened risk due to the slew of downsides or problems that I’ve listed herein.

Let’s turn that same argument around.

Suppose a medical doctor did not make use of generative AI. This would at first glance seem clearly to be the safest means to avoid any complications about how generative AI entered into a malpractice setting. You didn’t use generative AI so it cannot seemingly be an issue at hand. Period, end of story.

A counterargument would be that if the medical doctor had in fact made overt use of generative AI, the medical doctor might not have committed the malpractice failure they are alleged to have made. Per the benefits listed earlier, it is conceivable that the generative AI would have nudged or pushed the medical doctor away from whatever faltering act they supposedly did.

That is a mind-bending conundrum.

Is it best to avoid professional negligence by avoiding generative AI altogether, or could it become a contentious issue that, had generative AI been used, the professional negligence would (arguably) not have occurred?

The arising expectation or pressing argument might be that medical doctors should be taking advantage of viable and available tools, including generative AI, in their medical practice. Failing to keep up with a tool that could make a substantive difference in performing medical work would, or could, be portrayed as a lack of attention to modern medical practices. Such a head-in-the-sand argument might be somewhat of a stretch given today’s wobbly status of generative AI, but as generative AI gets more tuned and customized to medical domains, this would seem to loom larger on the docket.

A medical doctor might increase risk by adopting generative AI. On the other hand, they might be failing to mitigate risk by not adopting generative AI. Generative AI could be construed as a crucial risk-management component of practicing modern medicine. In short, it could be argued with vigor that generative AI, when used suitably, decreases risk.

There you have it, a dual-edged sword.

Conclusion

I offer a few concluding remarks on this engaging topic.

I would wager that just about everyone has heard of the Hippocratic Oath, namely the famed oath taken by medical doctors tracing back to the Greek physician Hippocrates. This is a longstanding and oft-quoted dictum. The particular catchphrase of “First do no harm” is associated with the Hippocratic Oath, meaning that a medical doctor obligates themselves to strive to help their patients and to assiduously do what they can to avoid harming them.

You might say that we are on a precipice right now about generative AI fitting into the Hippocratic Oath.

Using generative AI can be argued as veering into the harming territory, while a counterargument is that the lack of using generative AI is where the harm actually resides. Quite a puzzle. Darned if you do, darned if you don’t. Right now, the darned if you do is tending to outweigh the darned if you don’t. This equation might gradually and eventually flip over to the other side of that coin.

I’d like to end this discussion on a lighter note, so let’s shift gears and consider a future consisting of sentient AI, also referred to as Artificial General Intelligence (AGI). Imagine that we somehow attain sentient AI. You might naturally assume that this AGI would potentially be able to take on the duties of being a medical doctor. It seems straightforward to speculate that this would occur (that is, if you buy into the possibility of sentient AI ever existing).

Mull over this deep thought.

Would we require sentient AI to take the Hippocratic Oath, and if so, what does this legally foretell as to holding the sentient AI responsible for its medical decisions and its devised performance as an esteemed medical doctor?

A fun bit of contemplative contrivance, well, until the day that we manage to reach sentient AI. Then, we’ll be knee-deep serious about the matter, for sure.
