
AI hallucinations in court documents are a growing problem, and lawyers are responsible for many of them

2025-05-27 10:33:00

作者:Jack Newsham


Judges are catching fake citations of legal authorities almost every day, and it's increasingly the fault of lawyers over-relying on AI. May Lim/500px/Getty Images
  • Since May 1, judges have called out at least 23 examples of AI hallucinations in court records.
  • Legal researcher Damien Charlotin's data shows fake citations have grown more common since 2023.
  • Most cases are from the US, and increasingly, the mistakes are made by lawyers, not laypeople.

Judges are catching fake legal citations more frequently, and it's increasingly the fault of lawyers over-relying on AI, new data shows.

Damien Charlotin, a legal data analyst and consultant, created a public database of 120 cases in which courts found that AI hallucinated quotes, created fake cases, or cited other apparent legal authorities that didn't exist. Other instances of AI hallucination may never draw a judge's attention, so that number is a floor, not a ceiling.

While most mistakes were made by people struggling to represent themselves in court, the data shows that lawyers and other professionals working with them, like paralegals, are increasingly at fault. In 2023, seven of the 10 cases in which hallucinations were caught involved so-called pro se litigants, and three were the fault of lawyers; last month, legal professionals were found to be at fault in at least 13 of the 23 cases where AI errors were identified.

"Cases of lawyers or litigants that have mistakenly cited hallucinated cases has now become a rather common trope," Charlotin wrote on his website.

The database includes 10 rulings from 2023, 37 from 2024, and 73 from the first five months of 2025, most of them from the US. Other countries where judges have caught AI mistakes include the UK, South Africa, Israel, Australia, and Spain. Courts around the world have also gotten comfortable punishing AI misuse with monetary fines, imposing sanctions of $10,000 or more in five cases, four of them this year.

In many cases, the offending individuals don't have the resources or know-how for sophisticated legal research, which often requires analyzing many cases citing the same laws to see how they have been interpreted in the past. One South African court said an "elderly" lawyer involved in the use of fake AI citations seemed "technologically challenged."

In recent months, attorneys at top US law firms working on high-profile cases have been caught using AI. Lawyers at the firms K&L Gates and Ellis George recently admitted that they relied partly on made-up cases because of a miscommunication among lawyers working on the case and a failure to check their work, resulting in a sanction of about $31,000.

In many of the cases in Charlotin's database, the specific AI website or software used wasn't identified. In some, judges concluded that AI had been used despite denials by the parties involved. Where a specific tool was named, however, ChatGPT appears in Charlotin's data more often than any other.

Charlotin didn't immediately respond to a request for comment.

