
AI deepfakes are poised to enter court proceedings at time of low trust in legal system

Alexandra Robinson | AFP | Getty Images

A deepfake video of the Russian president, manipulated with artificial intelligence.

  • The potential for deepfakes in the courtroom has become not just plausible but, according to experts, likely.
  • The fear is that people will use deepfakes to create evidence that provides alibis or purports to prove someone's innocence or guilt.
  • In addition to the risk of altered evidence, streamlining court reporting with AI opens the door to alteration of the official record.
Rob Lever | AFP | Getty Images
A woman views a manipulated video of President Donald Trump and former president Barack Obama, illustrating how deepfake technology can deceive viewers.

Lawyers, judges and others in the legal profession are already using artificial intelligence and machine learning to streamline their workflows in and out of the courtroom. But what happens when that same AI is used for less than ethical means?

Amid technological evolutions like OpenAI's recent release of text-to-video generative AI software Sora, the potential for deepfakes in the courtroom has become not just plausible, but — according to experts — likely.

"The chances of someone abusing this technology today is likely already happening," said Jay Madheswaran, CEO and co-founder of AI legal case assistant Eve.

Sarah Thompson, chief product officer at BlueStar, a litigation services and technology company, fears that people will use deepfakes in criminal proceedings to "create evidence to either provide alibis for activities or to try to prove the innocence or guilt of somebody."

This threatens judicial systems around the world. In the U.S. in particular, every person is, at least at a surface level, subject to "legal standards and principles that are equally enforced rather than subjected to the personal whims of powerful corporations, individuals, governments or other entities," according to a white paper on AI cloning in legal proceedings from the National Court Reporters Association (NCRA).

At the very least, litigation requires an agreement upon a certain set of facts. "When we start calling into question what truth is," said Thompson, "this is where we're going to be running into a lot of issues."

The risk of alteration in the judicial process

In addition to the risk of altered evidence, streamlining court reporting with AI opens the door to alteration of the official record. "There's a lot of risk that the justice system is opening itself up to by not having someone that is certified to have care, custody and control," said Kristin Anderson, president of the National Court Reporters Association and official court reporter in the Judicial District Court of Denton County, Texas.

Traditional court reporters take an oath of accuracy and impartiality, something that could be lost with AI absent appropriate legislation. Melissa Buchman, a family law attorney in California, outlined a nightmare scenario in a column she wrote for the Los Angeles San Francisco Daily Journal, in which "entire chunks of testimony, including [...] descriptive statements of a horrible event that had transpired, were missing" due to an AI reporting error.

Even when the full recording is present, there is a significant racial gap in speech recognition accuracy. A Stanford University study found that error rates for Black speakers were nearly twice as high as those for white speakers.

To combat deepfakes, several states have already passed laws relating to AI-altered audio and video, but most of them have to do with deepfake pornography. California's bill criminalizing altered depictions of sexually explicit content was the first of its kind.

Still, legislation and regulations on digital evidence and court reporting aren't widely implemented yet. "We have a legislative body that tends to move slowly and is not necessarily well-versed on the technology that we're trying to legislate," said Thompson.

The judicial system will need to solidify processes on how to authenticate digital evidence, entering those processes into the Federal Rules of Evidence and the Federal Rules of Civil Procedure, Thompson added.

Challenge to 'gold standard' of audio, video evidence

In the meantime, Madheswaran says there are steps that can be taken now to combat the risk that deepfakes pose in the courtroom. "Historically, audio and video evidence are considered [the] gold standard," he said. "Everyone needs to have a bit more critical thinking in terms of how much weight to actually give to such evidence."

Judges can alert juries to the possibility of digitally falsified evidence during instructions and begin developing precedent based upon cases that involve deepfakes. "It will not stop people from using deepfakes, but at least there will be a pathway to some kind of justice," said Thompson.

Deepfake detection technology is in the works at institutions like MIT, Northwestern and even OpenAI itself, but the cat-and-mouse game between generation and detection will likely continue. Much of the broader legal AI development will be for good, helping free up attorney hours and democratizing access to representation for businesses and individuals with limited resources.

Meanwhile, the scarcity and cost of digital forensic experts who can examine suspected deepfakes often put this avenue of evidence authentication out of reach.

The most proactive bet may be at the device level. "There are techniques that exist today that you can bake right into data collection itself that makes it a bit more trustable," said Madheswaran. Much as devices came to embed time stamps and geolocation data, additional embedded metadata can authenticate original files or flag fabricated ones.
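
To illustrate the general idea only (a hypothetical sketch, not any specific vendor's implementation): a capture device could fingerprint a recording and sign its metadata the moment it is created, so a court could later check whether the file it receives still matches what the device produced. The device key and the sign_capture/verify_capture helpers below are invented for illustration.

```python
import hashlib
import hmac
import json
import time

# Hypothetical device-held secret; in practice this would live in
# tamper-resistant hardware, not in application code.
DEVICE_KEY = b"example-device-secret"

def sign_capture(media_bytes: bytes, latitude: float, longitude: float) -> dict:
    """Produce a provenance record for a file at the moment of capture."""
    record = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # fingerprint of the content
        "timestamp": int(time.time()),                       # capture time
        "lat": latitude,
        "lon": longitude,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    # The HMAC ties the metadata to the device's key; any later edit breaks it.
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_capture(media_bytes: bytes, record: dict) -> bool:
    """Check that the file and its metadata still match the original signature."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()
    )

if __name__ == "__main__":
    original = b"...video bytes..."
    record = sign_capture(original, 33.215, -97.133)
    print(verify_capture(original, record))                      # True
    print(verify_capture(b"...altered video bytes...", record))  # False: hash no longer matches
```

If either the content or the embedded metadata changes after capture, verification fails, which is the property that makes this kind of provenance record useful for authenticating digital evidence.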

Take a new Google tool, SynthID, for example: it embeds imperceptible watermarks into AI-generated images so they can later be identified as synthetic.

As Thompson puts it, a solution must be both easy to use and cost-effective to really work. Techniques like these are both.

When it comes to official records of court proceedings, trained and certified humans must be present to avoid intentional or unintentional misrepresentation; at this point, no AI system is subject to the regulatory and licensing oversight that an official court reporter is.

According to the National Artificial Intelligence Initiative Act of 2020, AI can "make predictions, recommendations or decisions influencing real or virtual environments." That is not something to be taken lightly.

"It's a trust problem," Madheswaran said about deepfakes in the courtroom.

With public confidence that the U.S. justice system works as it should at a historic low of just 44%, according to a 2023 Pew Research Center survey, the American judicial system should be careful about how it implements and monitors AI.
