Judges need education on AI and deepfake evidence.
AI is arriving in courtrooms faster than most courts have had a chance to adjust, and there are not yet many forums where judges can talk candidly with each other about it. That is the gap the Judicial AI Consortium is filling.
JAIC is a judges-only forum launched in January 2026 by three sitting judges: Judge Scott U. Schlegel of the Louisiana Fifth Circuit Court of Appeal, U.S. Magistrate Judge Maritza Dominguez Braswell of the District of Colorado and Judge Xavier Rodriguez of the Western District of Texas.
It is a room where judges can compare notes on how AI is showing up in their chambers, what they are doing about it and what is working or not working. As of a March 2026 interview on the Texas Appellate Counsel podcast, about 200 judges had signed up.
That is a good start. It is not enough.
Why A Judges-Only AI Forum Is The Right Structure
Most judicial AI training has been built by people who are not judges. Vendors run demos. Bar associations host panels. Academics write papers. Digital forensics experts like me come in to explain the technology. All of that is necessary. Judges need outside help to get up to speed on how the models work, how the files are processed and what the evidence actually shows.
But the education is the input. The ruling is the output, and the ruling belongs to the judge. The buck stops on the bench. A ruling on an AI question becomes precedent, and nobody else in the courtroom is accountable for that ruling in the same way. Schlegel put the asymmetry plainly in his March interview. When a lawyer gets AI wrong, the lawyer gets sanctioned. When a judge gets AI wrong, the mistake can become law.
JAIC takes that asymmetry seriously. It is closed to non-judges on purpose. It is not publishing opinions or advocating for policy. It is giving judges a place to ask each other questions that would be awkward to ask from the bench or on a public panel. Schlegel described the premise as letting judges “ask stupid questions, talk to each other about how you’re using it, what you’re seeing out there.” That is exactly the right design for a room where the work is learning, not signaling.
The topics JAIC lists on its sign-up form are the right ones too. Drafting with AI. Legal research with AI. Case management. Ethics. And the one that matters most for the work I do day to day as a digital forensics expert: deepfakes and the evidentiary implications of AI.
The Evidence Landing In Courtrooms Is Not Waiting
The reason JAIC’s membership needs to grow faster has nothing to do with JAIC itself. It has to do with what is already walking into courtrooms.
AI-generated evidence is no longer a hypothetical on the horizon. A judge in Washington State held a Frye hearing in State v. Puloka and excluded AI-enhanced video because the enhancement algorithm was not generally accepted in the relevant scientific community. That ruling was the exception. Many courts are still treating AI-enhanced and AI-generated media the way they used to treat enlargements and photocopies. That is the wrong default, and it is producing rulings the trial bar is going to have to clean up on appeal.
The pipeline is continuous. Modern smartphones run a computational photography pipeline that makes choices about what to keep, what to discard and in some cases what to add, before the file is ever saved. Surveillance footage is now routinely processed by AI noise reduction, AI upscaling and AI codecs. Voice-cloning audio is cheap and good enough to be offered as evidence in family court, civil disputes and criminal cases. Text messages can be fabricated by screenshot generators that do not leave the obvious tells they used to. An appellate court in Alberta quashed two murder convictions this January because the trial judge did his own frame-by-frame comparison of video evidence and found what he expected to find, which is exactly the failure mode a judge unprepared for AI-processed video is most likely to produce.
A judge seeing an AI-generated exhibit for the first time, on the bench, during testimony, with the clock running, is a bad place to start learning. JAIC exists so that does not have to happen.
About 200 Of Tens Of Thousands Is A Small Fraction
The National Center for State Courts counts roughly 30,000 state court judges, and another 1,700 judges sit on the federal bench. JAIC’s 200 members represent a fraction of one percent of the pool it invites. The number needs to climb quickly, because the gap between what a judge who has spent time inside JAIC knows and what a judge who has never thought hard about AI evidence knows is going to start producing different outcomes for similarly situated parties.
Two defendants charged with the same offense should not get different treatment because one of them drew a judge who has attended a JAIC discussion on deepfake audio and the other drew a judge who dismissed generative AI entirely because his only exposure to it was a single CLE slide in 2024.
Federal and state judicial education bodies are building broader resources. The National Center for State Courts published a guide for judges on AI-generated evidence, and the Federal Judicial Center and the NCSC have resources linked from JAIC’s own resources page. Those are useful. They are not the same thing as a room where a judge in Colorado can ask a judge in Texas how she handled the first time an attorney offered an AI-generated reconstruction as demonstrative evidence.
What Judges Reading This Should Actually Do
If you are a sitting judge at any level, state or federal, and you have not thought much about AI yet, the step that makes sense is the one JAIC asks for. Fill out the form. Attend a Pop-In Discussion. Read Schlegel’s “AI in Chambers” framework. None of that costs money. None of it requires a public commitment. It is a room where you can listen before you speak.
If you are a chief judge or a presiding judge, the step that makes sense is pointing your bench at JAIC as a starting point and building from there. A half-hour Pop-In Discussion about deepfake indicators could spare your court the downstream repercussions of a judge admitting a deepfake video without recognizing it.
If you are an attorney, the step that makes sense is assuming the judges in front of you may not have had time to sort this out yet. Brief your AI-related evidentiary objections as if the bench is hearing the argument fresh. Cite the frameworks. Lay the foundation for authentication. Do not assume that because a video looks real on playback, it will be received without objection. That assumption is what produces the bad rulings everyone will have to live with afterward.
JAIC is doing the right thing in the right way. The reason to say so is not that the group needs applause. It is that the work it is doing is a public good, and public goods scale only when more people decide to show up for them. About 200 judges have shown up. The rest of the bench, tens of thousands of state and federal judges, has the same invitation.