Deepfakes and AI manipulation of social media, depicted on a device screen. (3D illustration; Getty)
A California judge dismissed a housing dispute case in September after discovering that plaintiffs had submitted what appeared to be an AI-generated deepfake of a real witness. The case may be among the first documented instances of fabricated synthetic media being passed off as authentic evidence in an American courtroom — and judges say the legal system is unprepared for what’s coming.
In Mendones v. Cushman & Wakefield, Alameda County Superior Court Judge Victoria Kolakowski noticed something wrong with a video exhibit. The witness’s voice was disjointed and monotone, her face fuzzy and emotionless. Every few seconds, she would twitch and repeat her expressions. The video claimed to feature a real person who had appeared in other, authentic evidence — but Exhibit 6C was a deepfake.
Kolakowski dismissed the case on September 9. The plaintiffs sought reconsideration, arguing that the judge had merely suspected, without proving, that the evidence was AI-generated. She denied their request in November.
The incident has alarmed judges who see it as a harbinger.
“I think there are a lot of judges in fear that they’re going to make a decision based on something that’s not real, something AI-generated, and it’s going to have real impacts on someone’s life,” Judge Stoney Hiljus, chair of Minnesota’s Judicial Branch AI Response Committee, told NBC News. Hiljus is currently surveying state judges to understand how often AI-generated evidence is appearing in their courtrooms.
The vulnerability is not hypothetical. Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal, a leading advocate for judicial AI adoption who nonetheless worries about its risks, described the problem in personal terms. His wife could easily clone his voice using free or inexpensive software to fabricate a threatening message, he said. Any judge presented with such a recording would grant a restraining order.
“They will sign every single time,” Schlegel said. “So you lose your cat, dog, guns, house, you lose everything.”
Judge Erica Yew of California's Santa Clara County Superior Court raised another concern: AI could corrupt traditionally reliable sources of evidence. Someone could generate a false vehicle title record and bring it to a county clerk's office, she said. The clerk likely would not have the expertise to verify it and would enter it into the official record. A litigant could then obtain a certified copy and present it in court.
“Now do I, as a judge, have to question a source of evidence that has traditionally been reliable?” Yew said. “We’re in a whole new frontier.”
Courts are beginning to respond, but slowly. The U.S. Judicial Conference’s Advisory Committee on Evidence Rules has proposed a new Federal Rule of Evidence 707, which would subject “machine-generated evidence” to the same admissibility standards as expert testimony. Under the proposed rule, AI-generated evidence would need to be based on sufficient facts, produced through reliable methods, and reflect a reliable application of those methods — the same Daubert framework applied to expert witnesses.
The rule is open for public comment through February 2026. But the rulemaking process moves at a pace ill-suited to rapidly evolving technology. According to retired federal Judge Paul Grimm, who helped draft one of the proposed amendments, it takes a minimum of three years for a new federal evidence rule to be adopted.
In the meantime, some states are acting independently. Louisiana’s Act 250, passed earlier this year, requires attorneys to exercise “reasonable diligence” to determine whether evidence they submit has been generated by AI.
“The courts can’t do it all by themselves,” Schlegel said. “When your client walks in the door and hands you 10 photographs, you should ask them questions. Where did you get these photographs? Did you take them on your phone or a camera?”
Detection technology offers limited help. Current tools designed to identify AI-generated content remain unreliable, with false positive rates that vary widely depending on the platform and content type. In the Mendones case, metadata analysis helped expose the fabrication — the video’s embedded data indicated it was captured on an iPhone 6, which lacked capabilities the plaintiffs’ story required. But such forensic tells grow harder to find as generation tools improve.
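The kind of metadata check that surfaced the discrepancy in Mendones can be done with widely available forensic tools. The sketch below is illustrative only and assumes the open-source ExifTool utility is installed; the filename is hypothetical, and which fields a file actually carries varies by device and container, so an absent or inconsistent field is a lead to investigate, not proof of fabrication.

```python
# Minimal sketch: pull embedded metadata from a media file with ExifTool
# and print the fields most often compared against a proponent's account
# of how the recording was made. Assumes `exiftool` is on the PATH.
import json
import subprocess

def media_metadata(path: str) -> dict:
    """Return embedded metadata for a media file as a dict, via ExifTool's JSON output."""
    result = subprocess.run(
        ["exiftool", "-json", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)[0]

meta = media_metadata("exhibit_6c.mov")  # hypothetical filename
# Make, Model, Software, and CreateDate, when present, can be checked
# against the story the exhibit is supposed to support.
for key in ("Make", "Model", "Software", "CreateDate"):
    print(key, "->", meta.get(key, "not present"))
```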
A small group of judges is working to raise awareness. The National Center for State Courts and Thomson Reuters Institute have created resources distinguishing “unacknowledged AI evidence” — deepfakes passed off as real — from “acknowledged AI evidence” like AI-generated accident reconstructions that all parties recognize as synthetic.
The Trump administration’s AI Action Plan, released in July, acknowledged the problem, calling for efforts to “combat synthetic media in the court system.”
But for now, the burden falls on judges who may lack the technical training to spot fabrications — and on a legal framework built on assumptions that no longer hold.
“Instead of trust but verify, we should be saying: Don’t trust and verify,” said Maura Grossman, a research professor at the University of Waterloo and practicing lawyer who has studied AI evidence issues.
The question facing courts is whether verification remains possible when the tools to detect fabrication are themselves unreliable, and when the consequences of failure range from fraudulent restraining orders to wrongful convictions.