If you are using the widely and wildly popular AI app ChatGPT, or if you are planning on using it, keep in mind that a lawsuit might eventually be headed your way.
Here’s the deal.
Suppose you opt to make use of ChatGPT in the course of crafting or delivering a service of one kind or another. A lot of people are doing so these days. The idea is that ChatGPT can bolster your efficiency and make you more productive. This might mean less time required to get your job done. It might also mean that you can do more work than you had previously accomplished, hopefully earning you some extra money too.
For those of you unfamiliar with this hottest and latest AI app, ChatGPT is a headline-grabber that is extensively known for being able to produce fluent essays and carry on interactive dialogues, almost as though being undertaken by human hands. A person enters a written prompt, ChatGPT responds with a few sentences or an entire essay, and the resulting encounter seems eerily as though another person is chatting with you rather than an AI application. This is referred to as generative AI since it generates text or essays in response to text-entered prompts. ChatGPT is made by the firm OpenAI, a company that has become the darling of the AI industry and garners all manner of avid attention these days.
To get more details about how ChatGPT works, see my explanation at the link here. If you are interested in the successor to ChatGPT, dubbed GPT-4, see the discussion at the link here.
Generative AI is based on a complex computational algorithm that has been data trained on text from the Internet and admittedly can do some quite impressive pattern-matching to be able to perform a mathematical mimicry of human wording and natural language. Please realize that ChatGPT is not sentient. We don’t have sentient AI. Do not fall for those zany headlines and social media rantings suggesting otherwise.
With that lowdown, we can now get back to the notion of using ChatGPT as part of your work effort or devout usage in other ways.
Let’s say that someone comes forward and claims that your service or work effort has allegedly caused them harm.
They decide to sue you.
As an aside, none of us wishes or relishes that such a claim might arise. You might have been doing the best that you can do. You might feel entirely wronged as to the claim being made. Nonetheless, in this zany lawsuit mania world that we live in, there is always a viable chance that someone will contend they were harmed and seek to sue you accordingly. That’s the harsh reality.
The person suing you might realize that you do not have deep pockets, diminishing the odds of getting much dough out of you. They are therefore going to angle toward something that does have tons of bucks. Something that would be a juicy target for the lawsuit. Something that somehow pertains to your service and that they contend contributed directly or indirectly to the alleged harm involved.
Well, naturally, they would seek to sue OpenAI (the maker of ChatGPT) and try to embroil them into the lawsuit too.
Yes, indeed, OpenAI has the big bucks. You have the small bucks. The person launching the lawsuit would ardently argue that your efforts were aided by ChatGPT and therefore both you and OpenAI ought to be on the hook. The posturing would be that were it not for ChatGPT, presumably, you would not have been able to give rise to the harm that you purportedly caused.
I’d bet that you’ve seen this overall scenario many times in the news. A person leveraging this or that service or product is sued for alleged harm they have caused. The lawsuit also names the maker of the underlying service or product. A cynic would say it is because the maker of the underlying contrivance has the deeper pockets. Others might say that the maker of the underlying element should rightfully be partially at fault. It is a fair and square justification to include the maker in the legal action too, they would fervently proclaim.
Your first thought might be that you couldn't care less that OpenAI is also encompassed in the lawsuit.
Your worry is about you. Let OpenAI defend itself. OpenAI undoubtedly has legions of lawyers and bags of coinage to hire outside attorneys. They’ll do just fine. You, on the other hand, barely make your mortgage payments. Having to defend yourself in a potentially costly lawsuit is unnerving and potentially could cost you everything you’ve ever earned.
I am about to do the reveal here on this thorny topic, so prepare yourself.
This is a trigger warning.
You might not have looked closely at the licensing terms associated with your signing up to use ChatGPT. Most people don’t. They assume that the licensing is the usual legalese that is impenetrable. Plus, the assumption is that there is nothing in there that will be worthy of particular attention. Just the usual ramblings of arcane legal stuff.
Well, you might want to consider Section 7a of the existing licensing agreement as posted on the OpenAI website and associated with and encompassing your use of ChatGPT:
- “Section 7. Indemnification; Disclaimer of Warranties; Limitations on Liability: (a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.”
In normal language, this generally suggests that if OpenAI gets sued for something you have done with their services or products such as ChatGPT, you are considered by them to be on the hook for “any claims, losses and expenses (including attorneys’ fees)” thereof.
Bottom line, you might have to cover your own legal expenses plus whatever financial hit you take from the lawsuit, and furthermore potentially cover the legal expenses and related financial hit that OpenAI incurs due to the lawsuit.
That ought to give you pause for thought.
A lot of pauses. A lot of thought.
This is the hidden double whammy that few realize sits silently within their ongoing use of ChatGPT.
To clarify, by and large, most of the online services that you use are likely to have a similar clause. This is not especially unusual or unique. This is, shall we say, customary. You probably didn't realize that was the case. Nor have you likely ever actually found yourself subject to the indemnification clause.
Good for you.
Lucky for you.
If you are using ChatGPT, you ought to be aware of this potential double exposure. Include this factor in your startup considerations if you are a budding entrepreneur. I would also urge you to have an attorney advise you about your use of ChatGPT, assuming that you are using the AI app in some capacity that could be linked to something you might do and possibly give rise to a claimed harm.
I know what some of you are thinking. Since you didn’t know about the indemnification clause, you can seemingly escape it by declaring honestly and sincerely that you never knew it was there. Had you realized it was there, you would have acted differently. Nobody can expect you to be on your toes about something that you had no knowledge of. That’s plainly the only fair way to see things.
I have a notable quote for you.
Addison Mizner, the famous architect, said this about the law: “Ignorance of the law excuses no person from practicing it.”
Trying to weasel your way out of the matter by pleading ignorance is a long and bumpy road. Ask your attorney and see what they say about how far an ignorance of the law defense can get you. Usually, not very far.
In today’s column, I will walk you through some of the intricacies associated with indemnification clauses. This might be helpful to you as you are daily making use of ChatGPT or are considering doing so. Please do realize this is a legal topic of incredible depth and I am only covering the surface of what the matter entails. Your best bet, as indicated earlier, would be to make sure right away that you have legal counsel who can advise you on these kinds of thorny legal matters.
You might be tempted to wait until a lawsuit comes your way to then seek legal advice. Sorry to say that this is the proverbial mistake of closing the barn door after the horse has bolted. You would be wiser and safer to line up your ducks beforehand. Be ready for the day that the storm brews. Do not get caught in the open and struck by bolts of lightning.
Into all of this comes a slew of AI Ethics and AI Law considerations.
There are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists is trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws that are being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
Getting Blindsided From All Sides
I’ve previously covered the numerous banned uses of ChatGPT that OpenAI says you aren’t supposed to be doing, see the link here. If you are indeed making use of ChatGPT in any of those prohibited ways, you are already in dicey waters. Even if you aren’t using ChatGPT in those sour and dour ways, you can still be using the venerated AI app in seemingly fully legitimate ways and become the subject of a lawsuit by someone who believes you have caused them harm as a result of your ChatGPT use (or so they might claim).
Here are the banned uses in a nutshell and I provide descriptions of each in my column coverage:
- 1) Nothing Illegal
- 2) Not Any Child Exploitation
- 3) Not Hateful
- 4) No Malware
- 5) No Physical Harm
- 6) No Economic Harm
- 7) No Fraud
- 8) No Adult Content
- 9) No Political Campaigning Or Lobbying
- 10) No Privacy Intrusion
- 11) No Unauthorized Practice Of Law
- 12) No Unaided Financial Advice
- 13) No Improper Health Advice
- 14) Not For High-Risk Governing
- 15) Other Precautions
I suppose the best bet is to steer clear of those prohibited uses. That is one sensible step among others that I’ll mention herein.
When it comes to possibly getting sued as a result of your services or other efforts, and if those services or efforts are indirectly or directly shaped as a result of using ChatGPT, these are the circumstances you might regrettably face:
- You get sued. You alone are sued (let’s also assume that you are indirectly or directly making use of ChatGPT in some related way)
- You and OpenAI get sued. You are sued and OpenAI, as the maker of ChatGPT, is sued as well
- Only OpenAI gets sued. OpenAI as the maker of ChatGPT is sued, but you aren’t sued, and then OpenAI comes to you to cover their lawsuit costs as a result of the indemnification clause and an assertion that you spurred the lawsuit
- Other variations
One big question will be whether you actually are using ChatGPT and whether you are doing so in a manner that pertains to the services or work efforts that supposedly have led to the claimed harm. If you haven’t been using ChatGPT at all, this would seem to at least unleash you from the OpenAI facets, though there is bound to be murkiness tossed into those waters.
If you have been using ChatGPT, the next level of scrutiny would likely be whether the usage at all pertains to the services or work efforts that are alleged to have caused the harm.
The OpenAI website that presents the licensing agreement associated with their products, including ChatGPT, says this about the services they are providing to you (assuming you are making use of ChatGPT):
- “These Terms of Use apply when you use the services of OpenAI, L.L.C. or our affiliates, including our application programming interface, software, tools, developer services, data, documentation, and websites (“Services”). The Terms include our Service Terms, Sharing & Publication Policy, Usage Policies, and other documentation, guidelines, or policies we may provide in writing. By using our Services, you agree to these Terms.”
Furthermore, here is the same website's depiction of the content that you produce as a result of using ChatGPT:
- “Section 3. Content: (a) Your Content. You may provide input to the Services (‘Input’), and receive output generated and returned by the Services based on the Input (‘Output’). Input and Output are collectively ‘Content.’ As between the parties and to the extent permitted by applicable law, you own all Input. Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output. This means you can use Content for any purpose, including commercial purposes such as sale or publication, if you comply with these Terms. OpenAI may use Content to provide and maintain the Services, comply with applicable law, and enforce our policies. You are responsible for Content, including for ensuring that it does not violate any applicable law or these Terms.”
You might find of interest my analysis of whether the ChatGPT essays are potentially in violation of Intellectual Property (IP) laws, plus potentially a form of plagiarism; see my coverage at the link here.
As a reminder of the earlier cited Section 7a encompassing indemnification when you are making use of ChatGPT, here’s what that portion says:
- “Section 7. Indemnification; Disclaimer of Warranties; Limitations on Liability: (a) Indemnity. You will defend, indemnify, and hold harmless us, our affiliates, and our personnel, from and against any claims, losses, and expenses (including attorneys’ fees) arising from or relating to your use of the Services, including your Content, products or services you develop or offer in connection with the Services, and your breach of these Terms or violation of applicable law.”
Many people are not sure of what indemnification as a legal precept means or implies.
Generally, the notion of indemnification is that you are agreeing to aid in making whole someone else who has incurred a loss or damage. Here's a handy formalized definition:
- “Recompense for loss, damage, or injuries; restitution or reimbursement. An indemnity contract arises when one individual takes on the obligation to pay for any loss or damage that has been or might be incurred by another individual. The right to indemnity and the duty to indemnify ordinarily stem from a contractual agreement, which generally protects against liability, loss, or damage” (posted in The Free Dictionary, West’s Encyclopedia of American Law).
There are usually two parties involved in an indemnification clause. One party, call it generically Party A (the person or entity seeking to be indemnified), wants to have assurances from another party, call it generically Party B (the person or entity undertaking the indemnification), that if something happens to Party A as a result of actions by Party B, then Party B will cover the loss of Party A accordingly.
Imagine that you are Party B, while OpenAI is Party A.
You decide to sign up to use ChatGPT, which is an AI app provided by OpenAI. There is an indemnification clause in the licensing policy of ChatGPT. You, as Party B, have ostensibly agreed to indemnify Party A, OpenAI, upon proceeding to make use of ChatGPT.
The somewhat apparent instance consists of a lawsuit against you over your use of ChatGPT that also names OpenAI as part of the lawsuit. In turn, OpenAI comes to you to cover its costs related to the lawsuit. They have invoked the indemnification clause.
Another avenue: suppose OpenAI gets sued over something about ChatGPT, and it is claimed that your use of ChatGPT is integral to the contended harm that is the basis for the lawsuit. OpenAI, as Party A, can seek to invoke the indemnification clause and come to you for coverage of the associated costs and whatnot.
The American Bar Association (ABA) explains indemnification overall via this depiction:
- “Indemnification is the practice of guaranteeing a third party claim against your counterparty. Imagine that you have a contract with a staffing agency to supply temporary staff working on your property, and in the course of their assigned duties, one of those temps causes a third party to be injured. The injured third party sues you and the staffing agency and secures compensation for personal injuries. Both you and the vendor have financial liability in some proportion as a result, however your contract required the staffing agency to indemnify you for any third party claims that arose in the performance of the contract. This means that the staffing agency will take over the full liability for the damages—they have indemnified you for the loss” (an ABA posted article entitled “Negotiating Indemnity” by Taylor Brown, May 5, 2017).
A scenario of how this could potentially apply in the use case of ChatGPT is sketched in an insightful research article by Roee Sarel entitled “Restraining ChatGPT” and posted as a Working Paper that is forthcoming to be published in the UC Law SF Journal (formerly Hastings Law Journal).
The research article posits this realistic scenario: Jack, a lawyer, mistakenly relies on the output of ChatGPT for a legal matter and includes the text in a legal brief filed with the court.
I’ve discussed in my columns at length the concern that people are bound to be lulled into assuming that ChatGPT output is correct. ChatGPT interactions and output are usually conveyed with a tone and sense of confidence that belies the truth that ChatGPT outputs can contain errors, biases, falsehoods, and so-called AI hallucinations. If you get comfortable using ChatGPT and perchance haven’t yet encountered those maladies, your strident belief that ChatGPT can do no wrong goes way up. Eventually, you let down your guard and just accept that the ChatGPT outputs are good to go. Sure, you might glance at the outputted essays, but if you are in a rush, you do so in a cursory way.
This can also happen to lawyers. My urging is that lawyers will need to especially stay on their guard when using ChatGPT, see my coverage at the link here and the link here, just to name a few. A lawyer can find that using ChatGPT or other generative AI is a big time saver, allowing the lawyer to get work done expeditiously and impress their clients accordingly. The line today is that lawyers using generative AI will outdo lawyers who aren't using generative AI. The catch, perhaps, would be that you need to be cautious when using ChatGPT and double-check or triple-check whatever outputs are generated.
Let’s get back to Jack, our ChatGPT-using lawyer in the scenario of the research article.
Upon having filed a document with the court that contains ChatGPT outputs that turned out to be faulty, the research article makes this key point:
- “What is the harm of such a mistake? Obviously, Jack himself may suffer a reputational loss in case the judge reprimands him for misleading the court, which may even find its way into the court protocol. Of course, such harm is somewhat uninteresting, as it is anyway governed by the contractual relationship between Jack and ChatGPT through the terms of service. As of February 7, 2023, the terms of service indeed not only state that OpenAI (the creators of ChatGPT) bear no liability but that the users of the chatbot must indemnify OpenAI for any third-party claims” (ibid).
You can hopefully plainly see that the indemnification clause of OpenAI comes into the picture.
Here’s a further analysis contained in the research article on that facet:
- “Thus, the more interesting question is what precisely are these third-party claims. In the example of Jack the lawyer, his client may decide to add ChatGPT to a malpractice lawsuit, blaming not only the lawyer for negligence but also the AI itself for providing inaccurate results. As Jack’s client does not have a contractual relationship with OpenAI, such a lawsuit would likely be based on a tort claim that points to the client’s loss as the relevant harm” (ibid).
As noted, let’s assume that Jack's client, potentially harmed in some fashion as a result of the faulty filing with the court, turns around and sues Jack for legal malpractice. The client might also opt to sue OpenAI. If they sued OpenAI, presumably the AI maker would come to Jack and invoke the indemnification clause, seeking to have Jack cover the costs and whatnot to defend against the lawsuit of Jack’s client.
Quite a mess.
The research article asks an as-yet-unsettled question about whether it is even fair that OpenAI, via ChatGPT, is being included in such a lawsuit:
- “In the example of Jack the lawyer, the harm is concentrated with his client and occurs immediately, but the direct injurer is the lawyer and not ChatGPT. Hence, it is not obvious that the AI creators owe a duty of care to the client, who is only an indirect victim of misinformation delivered to the lawyer under the explicit contractual condition that the creator is not liable.”
It is all a legal hornet’s nest.
The type of hornet’s nest that you probably want to try and avoid getting ensnared in (well, are there any hornets’ nests that we do want to get entangled in, one might ask rhetorically).
Trying To Contend With The Dreadful Predicament
What are you to do about all of this?
First, make sure that you look at the licensing agreement for any software or AI app that you use, including all that fun and interesting online stuff that catches your attention. Is there an indemnification clause? If so, keep in mind what that can foretell.
Second, consider not using the AI app as one option, thus avoiding getting embroiled later on in a messy legal affair. Check around to see whether a comparable AI app lacks such a provision. All else being equal, likely choose that AI app over the other one. I can tell you, though, that the odds are that nearly all AI apps of a prominent nature are going to contain such a clause. You probably won’t find a comparable AI app that lacks it, but I certainly say it is worth looking around just in case.
Third, consult your attorney about how to best try to legally protect yourself if you are insistent that you must use ChatGPT or some other generative AI that contains such a clause. Better to be prepared. When a lawsuit comes, the shock might be lessened if you’ve done your homework beforehand. You can at least have been embarking on a path that will enable your attorney to perhaps find ways to bolster your defense.
Fourth, it is conceivable that the AI maker of ChatGPT or the maker of whatever other generative AI app that you are using might opt to not invoke the indemnification clause. There might be sensible reasons for the AI maker to not do so.
Consider this excerpt from an article entitled “Will Online Indemnification Agreements Be Enforced?” about whether online indemnification clauses are considered legally enforceable or not:
- “Most consumers in the United States purchase goods and services online. And most of them never read the ‘terms and conditions’ that are often embedded in their transactions. Yet, those unread terms can include important obligations regarding future disputes, such as provisions requiring the consumer to indemnify the vendor and hold it harmless. Are such online indemnification provisions enforceable? Little case law directly addresses this question. To be sure, courts have upheld some online indemnification provisions” (article by Julie Cilia in The Woman Advocate, Volume 21, Number 3, Spring 2016).
There is a slew of ways to try and undercut or vacate an indemnification clause, such as but not limited to:
- Jurisdictional dispute as to countermanding provisions at the federal versus state levels
- Consumer protection provisions that might apply
- Potentially vague and imprecise language of the clause
- Improper or legally defective language of the clause
- Lack of suitable constructive notice (the agreement is hidden or hard to find)
- Lack of mutual manifestation of assent (whether both parties had a meeting of the minds)
- Lack of specifically expressed assent (i.e., when the licensing is found only via a hyperlink or browsewrap, versus clickwrap, where a user must click before they can proceed to use the app, or scrollwrap, where you need to scroll and then click to affirm)
- Contract-of-adhesion concerns, such as take-it-or-leave-it terms with no negotiating allowed
- Provision wasn’t sufficiently triggered or was improperly invoked
- Unconscionable as to “blank cheque” or an uncapped onerous financial burden
- Negligence or failure on part of the service provider
- Etc.
Lawyers tend to know by heart that the courts will sometimes consider such matters unconscionable, that is, so outrageously unfair as to shock the judicial conscience (see, for example, the legal case of Song fi, Inc. v. Google Inc. & YouTube, LLC). Thus, there is a chance that you might find sympathy from the court and be relieved of the burden of an indemnification clause, though this is not at all an ironclad outcome and instead a veritable roll of the dice.
Conclusion
The excitement about ChatGPT has spurred many enterprising entrepreneurs to include in their service offerings a modicum of ChatGPT usage. The aim is to boost their services and garner more customers. In addition, there is a solid chance that they can provide the services at a lower cost or on a speedier basis due to using generative AI such as ChatGPT.
Using ChatGPT also provides a marketing angle. You can tout that you are using the red-hot and most popular generative AI of today. This is a bandwagon that gleans eyeballs and media interest.
In their haste to leap onto that bandwagon, few tend to give due consideration to the licensing agreements underlying various generative AI apps and are sadly exposing themselves to financial and reputational risks. They might not know it at the time. The dismay and shock will come later on, assuming that someone seeks to sue them and the AI maker for an alleged wrong.
Make sure that you have your legal considerations figured out, upfront and on an ongoing basis too. There are lots of legal landmines waiting for you down the entrepreneurial path, including when using generative AI. Sound legal advice can aid in coping with those legal exposures.
A final remark for now on this topic.
There is an acclaimed legal joke by the one-liner comedian Steven Wright: “I busted a mirror and got seven years of bad luck, but my lawyer thinks they can get me five.”
It is hard to remain lighthearted when you get tangled in a legal dispute, but if you keep your eyes open and take advised legal precautions, hopefully, you’ll have less trauma if the mirror does crack or break.
Source: https://www.forbes.com/sites/lanceeliot/2023/04/10/you-might-be-alarmed-to-know-that-when-you-use-chatgpt-you-are-agreeing-to-indemnify-openai-and-could-be-on-the-hook-for-a-huge-legal-bill-warns-ai-ethics-and-ai-law/