As the use of artificial intelligence grows in the legal profession, so does the number of AI-generated hallucinations appearing in legal cases.
In April 2025, Damien Charlotin, a professor at Sciences Po Law School, set out to create a dataset exploring how large language models are used in law. He collected cases involving AI hallucinations from around the world, recording the party that used AI, the tools involved, the nature of the hallucinations and the outcome. Charlotin continues the project today and updates the data regularly.
According to Charlotin’s dataset, from July 2023 to October 2025, 342 legal cases in the United States included AI-hallucinated content. Of these, 299 occurred in 2025, during the portion of the year covered by the data.
Filtering the dataset by year shows that only nine cases occurred in 2023 and 34 in 2024, underscoring the steep increase from 2024 to 2025.
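That year-by-year tally is straightforward to reproduce. The sketch below assumes the dataset has been exported to a CSV file; the file name and the `date` and `jurisdiction` column names are illustrative assumptions, not the dataset’s actual schema.

```python
import pandas as pd

# Hypothetical export of Charlotin's dataset; the file name and
# column names ("date", "jurisdiction") are assumptions.
cases = pd.read_csv("ai_hallucination_cases.csv", parse_dates=["date"])

# Keep only U.S. cases, then count filings per year.
us_cases = cases[cases["jurisdiction"] == "United States"]
per_year = us_cases["date"].dt.year.value_counts().sort_index()
print(per_year)  # e.g. 2023: 9, 2024: 34, 2025: 299
```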
“There are too many cases to be comfortable, so there is a need for ways to deal with this,” Charlotin said.
AI hallucinations come in a variety of forms, including fabricated cases and citations, false quotes and misrepresentations. Fabricated cases and citations occur when the AI produces or references nonexistent cases or citations; false quotes include incorrect attributions, inaccurate language and quotes that do not exist. Misrepresentations occur when a lawyer or pro se litigant, relying on AI output, wrongly claims that a case stands for a particular principle or contains supporting language.
Charlotin’s data made it possible to create a table displaying the count of each type of hallucination, along with cases featuring multiple types. From these counts, we calculated the percentage of each combination of hallucination types by dividing the individual values by the total number of cases.
Of the 342 cases in the data, 91% included hallucinated cases and citations. More than half of those were accompanied by other forms of hallucination. Nearly 10% of all cases involved all three types: fabricated cases and citations, false quotes and misrepresentations.
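A minimal sketch of those calculations, assuming each hallucination type is stored as a boolean flag per case (the column names here are hypothetical):

```python
import pandas as pd

# Hypothetical schema: one row per case, one boolean flag per
# hallucination type. Column names are assumed for illustration.
cases = pd.read_csv("ai_hallucination_cases.csv")
types = ["fabricated_citation", "false_quote", "misrepresentation"]

# Share of cases containing each type (categories can overlap).
print((cases[types].mean() * 100).round(1))

# Share of cases for each combination of the three flags.
combos = cases.groupby(types).size() / len(cases) * 100
print(combos.round(1))
```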
According to an article by Zach Warren, a senior manager at Thomson Reuters, generative AI hallucinations have increased the need to verify case citations. Warren also emphasized that courts hold attorneys responsible for AI misuse because attorneys are expected to check all documents before filing them.
“Most cases are just mistakes, and we can just move away from them, but a lot of cases are just lawyers who are bad lawyers, sloppy lawyers, incompetent and reckless lawyers, and you know they would have gotten away before that,” Charlotin said.
Parties that fail to verify documents before using them in court can face serious consequences. In some cases, parties using AI received only warnings; others drew more severe penalties, including orders to show cause and monetary sanctions. The highest monetary penalty in the data was over $31,000, levied in a case where a lawyer used multiple AI tools to generate legal arguments, producing nine incorrect or fabricated citations and multiple false quotes. Monetary penalties are typically higher for parties who refuse to own up to their misuse of AI.
“You’ve got lawyers who refuse to own up to their mistakes, you’ve got lawyers who lie, who double down, blame the intern, etc.,” Charlotin said.
Jackson Hagen, a managing associate at Orrick, Herrington & Sutcliffe’s Washington, D.C., office, said he does not use AI for research because “that is a huge risk.”
“I’ve definitely heard my colleagues who have run into that when asking AI things like ‘Give me five cases that stand for X principle,’ and it gives cases that don’t exist or cites things that don’t have that meaning,” Hagen said.
Examining the AI tools used across cases reveals a wide variety, with some cases employing multiple tools. In 195 cases, courts deemed AI usage merely “implied,” which Charlotin defined as cases where “the court suspects that AI has been used, and that no party confessed of using AI.” In 89 cases, the party admitted to using AI but did not identify the specific tool. ChatGPT appeared in 30 cases, and the remainder involved less common or internal tools.
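The tool counts can be sketched the same way. Here the `ai_tool` column name, and the convention that multi-tool cases list their tools semicolon-separated, are guesses about the export format, not documented features of the dataset.

```python
import pandas as pd

# "ai_tool" is an assumed column name; cases that used several
# tools are assumed to list them semicolon-separated.
cases = pd.read_csv("ai_hallucination_cases.csv")
tools = cases["ai_tool"].str.split(";").explode().str.strip()
print(tools.value_counts())  # e.g. Implied 195, Unidentified 89, ChatGPT 30
```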
Although AI hallucinations pose a definite risk to legal practitioners, this does not mean that AI cannot be used productively and effectively in law.
“I’m not making the database to make the point that AI is bad. On the contrary…AI is great for a lot of things in the legal practice. I use AI all day every day in my legal practice. I think like every technology, there are some things that need to be ironed out, some wrinkles in the way it reacts and is being used. But in terms of the legal practice, it’s great because you have a tool that can do a lot of things, can give you a lot of good advice,” Charlotin said.
Hagen uses Orrick’s internal AI tool for non-research work, in particular to summarize transcripts of congressional hearings or books relevant to case matters.
“It does sometimes summarize things wrong and doesn’t totally understand what the member of congress or person is meaning…I always go through the video to fact-check what has been said in the hearing,” Hagen said.
As AI becomes more prevalent in every field, it is crucial that legal professionals follow Hagen’s example, fact-checking the technology’s output rather than relying on it blindly.