Michael Cohen, Donald Trump’s former lawyer, is now at the center of a larger legal controversy. An unexpected episode has come to light that shows how AI-generated content can slip into court proceedings unchecked. Cohen recently admitted that he passed along citations produced by Google Bard, an AI chatbot, mistakenly believing the tool was a kind of supercharged search engine. Cohen is expected to be a key witness in upcoming cases against Trump, and this careless use of AI-generated content has raised questions about how reliable and trustworthy such information is in the legal world.
This mix-up highlights a bigger issue: relying blindly on AI in law can introduce errors that make legal arguments shaky. It’s like handing critical work to a super-fast but sometimes confused helper.
Intended or Unintended Misuse of AI Content
In a court filing, Cohen admitted that he gave his lawyer, David Schwartz, court citations he had obtained from Google Bard, wrongly assuming they were real. The fact that these citations were later included in official court papers raised red flags and sparked discussion about how rarely the authenticity of AI-generated content is checked, and what that means for legal systems.
Another lawyer, E. Danya Perry, later joined Cohen’s defense but clarified that she stepped in only after Schwartz had filed the motion. After reviewing the document, Perry could not verify that the cited case law existed, and she informed the court of this ethical concern.
The issue drew the scrutiny of Judge Jesse M. Furman, who found that the cited cases did not exist in any relevant legal context. Furman ordered an explanation of how the motion came to reference non-existent cases and what role Cohen played in the error.
The situation carries implications for Cohen’s role as a witness in an upcoming case against Trump. Cohen’s defenders argue he did not engage in misconduct: he relied on his lawyer and was unaware the citations were fabricated.
AI in Legal Papers: A Growing Trend?
Cohen’s issue isn’t unique. Similar AI mistakes have appeared in legal filings elsewhere, suggesting a trend: AI can gather information quickly, but it isn’t perfect, and lawyers must verify its output to avoid these errors. A comparable incident occurred earlier this year, when attorney Steven Schwartz was criticized for submitting AI-generated citations in court papers. The judge identified fake judicial rulings and incorrect references, raising broader worries about using AI for legal research.
Worse, the AI-generated citations in Cohen’s motion turned out to be irrelevant to the plea for supervised release. This compounds the error in the filing and underscores the need for caution and verification when using AI in legal research, so that court submissions remain accurate.
The lesson here? AI is handy, but in legal matters its output must be verified. Otherwise, you’re entrusting critical tasks to that same super-fast but sometimes unreliable helper.
Source: https://coinpedia.org/news/biggest-ai-blunder-trumps-ex-lawyer-cites-tech-error-in-fake-legal-citations/