AI-Generated Evidence Making Its Way Into Courtrooms - Here's What We Know
- Last update: 1 day ago
- 2 min read
- 848 Views
- BUSINESS
Generative artificial intelligence (AI) is increasingly used across industries, on online platforms, and in everyday personal activities. While AI offers many advantages, it also raises concerns, from students using it to cheat to copyright disputes, including lawsuits brought by authors against the makers of AI tools.
Recently, AI has started appearing in legal settings. In a California housing case, Mendones v. Cushman & Wakefield, Inc., a video submitted as witness testimony was discovered to be an AI-generated deepfake. Judge Victoria Kolakowski questioned the video's authenticity, and after the fabrication was confirmed, the case was dismissed. The incident has sparked nationwide concern about AI being misused in court.
Deepfakes (AI-generated media mimicking someone's voice or appearance) pose significant risks. Fake recordings or visuals could be weaponized against individuals, and judges fear that manipulated content might lead to wrongful convictions.
In response, several measures have been introduced. California passed a law requiring AI chatbots to disclose that they are not human. The National Center for State Courts (NCSC) released a guide for legal professionals to evaluate AI-generated evidence. This guide outlines nine steps, including examining how evidence was obtained, maintaining a chain of custody, disclosing any edits, using forensic verification, and reviewing metadata to verify authenticity and detect discrepancies.
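Two of the steps the guide describes, maintaining a chain of custody and reviewing metadata, are commonly implemented in practice by fingerprinting the evidence file with a cryptographic hash and inspecting its timestamps for discrepancies. The sketch below is illustrative only and is not taken from the NCSC guide; the function names are hypothetical.

```python
import hashlib
import os
from datetime import datetime, timezone

def custody_fingerprint(path: str) -> str:
    """SHA-256 hash of an evidence file, recorded at each hand-off so
    any later alteration of the file is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def basic_metadata(path: str) -> dict:
    """Filesystem metadata for a quick discrepancy check, e.g. a
    modification time later than the claimed recording date."""
    st = os.stat(path)
    return {
        "size_bytes": st.st_size,
        "modified_utc": datetime.fromtimestamp(
            st.st_mtime, tz=timezone.utc
        ).isoformat(),
    }
```

In a real workflow, the fingerprint would be logged alongside each custodian's name and timestamp, and embedded media metadata (such as EXIF data) would be examined with forensic tools rather than filesystem stats alone.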
Federal initiatives have also addressed AI in legal contexts. The mid-2025 America's AI Action Plan proposed programs to detect deepfakes and standardize evidence analysis. Additionally, the TAKE IT DOWN Act targets sexually explicit deepfakes, and Louisiana legislation requires AI-generated evidence to be verified before being admitted in court.
Legal actions involving AI are already emerging, such as lawsuits against AI developers over harm linked to their technology. However, AI can have legitimate benefits in court when its use is transparent. The NCSC differentiates between acknowledged AI (clearly disclosed and purpose-driven) and unacknowledged AI, which is used without disclosure. Acknowledged AI can enhance court proceedings by improving video and audio clarity, assisting with legal research, and analyzing surveillance footage.
Despite its potential, AI carries risks. Misidentifications in surveillance analysis have led to wrongful arrests, highlighting the need for careful management and oversight to ensure fairness and accuracy in the legal system. Ongoing efforts focus on integrating AI safely while addressing its misuse.
Author: Harper Simmons