Celebrity deepfake scams using unauthorized digital replicas of stars like Brad Pitt have made headlines, but anyone can be a target.
Viral explicit deepfakes of stars including Scarlett Johansson and Jacob Elordi spread across the internet like wildfire last year, sparking outrage and debate over the governance of AI and protections against unauthorized digital replicas. Though lawmakers have been pushing for regulation, the threats have only continued to grow. In January 2025, a scammer using AI-generated images and video of Brad Pitt tricked a French woman into giving away $850,000, and more recently, a Los Angeles woman lost her life savings after scammers used AI to impersonate soap opera star Steve Burton in their video chats.
Hollywood is ripe for deepfake activity, but the disturbing reality is that everyone is at risk from these digital threats. For example, in the midst of a divorce—a time often fraught with intense emotions—the impact of AI-generated content can be devastating, potentially influencing child custody and asset distribution, and causing significant reputational damage, regardless of one’s fame.
Deepfakes Put Credibility on Trial
Deepfake technology is making it possible to fabricate evidence that can have a significant impact on divorce or custody cases.
The most important quality in a divorce case is credibility: the court's confidence that what you and your attorneys say is true. Credibility is proven through testimony, documentary evidence, and demeanor in the courtroom. But never before has there been such an opportunity to destroy credibility, with deepfake technologies making it possible to fabricate evidence that can be highly prejudicial in a divorce or custody case.
Divorce deepfakes are no longer just a looming threat. In a UK custody case, a mother submitted manipulated audio that appeared to show the father of her child making violent threats. Only careful forensic analysis revealed the deception. Litigants elsewhere have tried to introduce forged bank records, falsified property valuations, and fabricated DocuSign paperwork.
Damage to a party’s reputation as a result of the dissemination of a deepfake or false evidence can be profound. A fabricated voicemail suggesting abuse could easily tilt custody determinations, while forged financial documents might distort equitable distribution or support.
The Cost of Fighting Deepfakes
The reality is that defending against AI manipulation can be expensive. Forensic experts, metadata review, and courtroom challenges take time and money. This creates an uneven playing field, especially if one spouse lacks the financial resources to contest suspicious evidence. The courts themselves are still catching up to the malicious use of AI in legal proceedings, and judges must adapt to a reality in which once-trusted forms of evidence may now be entirely fake.
At present, one of the strongest lines of defense is hiring sophisticated legal counsel with access to digital forensic teams who can challenge deepfakes, forged signatures, or other forms of fabricated evidence. The ability to do so in real time, to prevent the well from being poisoned, is key. Attorneys and clients must be aware of the risks and existence of these technologies and act quickly to stop them before they sway the court.
Those who attempt to abuse AI in legal proceedings will also pay a steep price. Attempting to mislead a court with falsified evidence or fake case law is sanctionable conduct, and disseminating false or fabricated content can lead to defamation or fraud charges, with the responsible party held liable for the economic and reputational impact.
Digital Defenders Do Exist
Lawmakers are proposing bills to combat non-consensual deepfakes, but until then, it's wise to hire counsel who can spot and defend against deepfakes in real time.
When deployed responsibly, AI does have a place in litigation. It is a tremendously powerful tool for analyzing documents, assisting with forensic accounting, and summarizing large amounts of data, among other uses. Attorneys are successfully using AI to uncover hidden assets, flag inconsistencies, and make sense of complex discovery, for example.
Lawmakers are beginning to respond with bills aimed at punishing the creation and spread of non-consensual deepfake content. But legislation takes time, and litigants cannot wait for the law to catch up with technology. Until it does, clients must be proactive: preserving original files, using authenticated platforms, and engaging counsel with the technical ability to separate fact from fiction.
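One concrete way to "preserve original files" is to record a cryptographic fingerprint of each recording or document the moment it is received, so any later tampering is detectable. The sketch below is a minimal illustration in Python, not legal or forensic advice; the filename and file contents are hypothetical stand-ins.

```python
import hashlib
from pathlib import Path

def fingerprint_file(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical example: fingerprint an original recording at intake.
# If the file is later altered, its digest will no longer match.
original = Path("voicemail_original.wav")      # stand-in filename
original.write_bytes(b"example audio bytes")   # stand-in content for the demo
print(fingerprint_file(str(original)))
```

A fingerprint alone does not authenticate content; it only proves a file has not changed since the digest was recorded, which is why forensic teams pair it with chain-of-custody records and metadata review.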
In family law court, credibility is currency. Managing and resolving a divorce or custody matter is difficult in even the best of circumstances, but exponentially more so when people are willing to resort to unethical tactics that undermine integrity. AI-driven deception only raises the stakes, making it essential to have trusted counsel ready to defend what matters most: the truth.