Topline
A Georgia man has sued ChatGPT maker OpenAI, alleging that the popular chatbot, through a failure mode AI experts call “hallucination,” generated a fake legal summary accusing him of fraud and embezzlement, in what marks the first defamation suit against the creator of a generative AI tool.
Key Facts
According to Bloomberg Law, the case was filed in a Georgia state court by radio host Mark Walters, who alleges ChatGPT supplied a journalist with details of a fabricated complaint when the journalist asked about a real, ongoing suit.
The real suit was filed by the Second Amendment Foundation against Washington Attorney General Bob Ferguson, a case in which Walters has no involvement.
Walters alleges the chatbot responded to the journalist’s query about that real court case with a summary of an entirely fictional one, claiming the Second Amendment Foundation’s founder had sued Walters for “defrauding and embezzling funds” from the organization.
Walters, the host of Armed America Radio, is not involved in the Washington suit and has never worked for the Second Amendment Foundation, the report added.
Forbes has reached out to OpenAI for comment on the lawsuit.
News Peg
The fake legal summary is likely the result of a relatively common failure of generative AI known as hallucination: a language model generates entirely false information without warning, sometimes in the middle of otherwise accurate text. Hallucinated content can appear convincing, as it superficially resembles real information and may include bogus citations and made-up sources. On its main page, ChatGPT warns that it may “occasionally generate incorrect information” or “produce harmful instructions or biased content.” When asked what AI hallucination is, ChatGPT responded with a lengthy description of the issue, ending with: “It is important to note that AI hallucinations are not actual perceptions experienced by the AI system itself…these hallucinations refer to the content generated by the AI system that may resemble human perceptions, but it is entirely generated by the AI’s computational processes.”
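Part of what makes hallucinations dangerous is that nothing in a model’s output distinguishes grounded facts from invented ones. As a minimal sketch of the kind of query at issue, assuming the 2023-era `openai` Python package (the model name, prompt, and API key placeholder are illustrative assumptions, not details from the suit):

```python
# Minimal sketch: querying the ChatGPT API about a real court case, using the
# 2023-era openai Python package. Illustrative only; not code from the suit.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumption: any chat-capable model behaves similarly
    messages=[{
        "role": "user",
        "content": "Summarize the complaint in Second Amendment Foundation v. Ferguson.",
    }],
)

# The response carries only the generated text and a finish reason; there is
# no field marking a claim as grounded or hallucinated, so the names, parties,
# and citations in the answer must be checked against primary sources.
print(response["choices"][0]["message"]["content"])
```

At the API level, a hallucinated answer looks no different from a correct one, which is why a fabricated complaint can read as a plausible legal summary to the person who receives it.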
Key Background
OpenAI and competitors like Google have both acknowledged concerns about AI hallucinations, an issue some experts have warned could aggravate the problem of online disinformation. When announcing its latest large language model, GPT-4, in March this year, OpenAI noted that the model had “similar limitations” to earlier ones. The company warned: “it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors). Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.” Last month, OpenAI said it was working on a new training method intended to tackle hallucinations.
Surprising Fact
Last month, a Manhattan lawyer courted controversy after using ChatGPT to generate a legal brief in a personal injury lawsuit and submitting it to the court. The AI-generated filing, however, cited several cases that did not exist.
Further Reading
OpenAI Hit With First Defamation Suit Over ChatGPT Hallucination (Bloomberg Law)
Source: https://www.forbes.com/sites/siladityaray/2023/06/08/openai-sued-for-defamation-after-chatgpt-generates-fake-complaint-accusing-man-of-embezzlement/