AI Garbage Exposed in US Judgments

AI garbage in court submissions is a plague. Now, AI-tainted decisions from two US courts have been exposed. An intern is to blame, says one of the judges who got caught.


Generative artificial intelligence is prone to so-called hallucinations. Often this goes unnoticed, especially when the result pleases the user. Fabricated claims, studies, quotes, and precedents produced by AI are increasingly finding their way into court filings; this has repeatedly led to sanctions against self-represented parties and lawyers (see /en/news/2024/04/10/ai-garbage-in-court-submissions-is-a-plague/). Unfortunately, judges are not immune to the temptations of artificial intelligence either. In the USA, two cases of AI-contaminated judgments have now come to light. The judges responsible accept no personal blame.

The two US federal district judges, Henry Wingate of the U.S. District Court for the Southern District of Mississippi and Julien Neals of the U.S. District Court for New Jersey, issued decisions so obviously flawed that the parties to the proceedings noticed immediately. After the parties complained, both judges removed the flawed decisions from the record and replaced them with corrected versions.

These incidents prompted U.S. Senator Charles Grassley, who is concerned about the integrity of the U.S. justice system, to act. He sent questions to the two federal judges. Their written responses, which are now available, show that neither judge has a strongly developed sense of responsibility.

At no point do they admit to bearing fault themselves. Judge Neals hints at self-pity when he writes that his "experience" in the case had been "most unfortunate and unforeseeable."

Judge Wingate blames a legal assistant who used the AI tool Perplexity "solely as a tool for a preliminary draft to compile publicly available information in the record."

Neals, in turn, blames a law student intern who allegedly used ChatGPT "without authorization, without disclosure, contrary not only to the court's rules but also to applicable rules of [his] university." The judge explicitly refers to a notification from the university confirming this.

Both judges emphasize that such drafts normally undergo a multi-stage review, including with software that checks citations to precedents and itself uses AI. In these cases, the reviews were skipped before publication. The judges promise improvement but do not explain why the reviews were omitted.

Both men justify removing their flawed decisions from the record by saying they wanted to prevent the rulings from being cited as precedents later. In Wingate's case, the withdrawn decision nevertheless remains in the record as an attachment to a party's filing.

Apparently, neither court had written rules for the use of AI. Neals says he had prohibited the use of AI verbally; he has since drafted written rules but is still awaiting guidelines from the federal court administration.

That administration has set up a working group on the topic, which issued preliminary guidelines for all U.S. federal courts in the summer. These guidelines are surprisingly vague: they do not prohibit outsourcing the drafting of judgments to artificial intelligence, but merely urge "caution." "Extreme caution" is "recommended" when dealing with novel legal issues.

Not even disclosure of the use of artificial intelligence is mandatory under these guidelines. Judges and court staff are merely encouraged to consider whether AI use should be disclosed.

The court administration also explicitly does not collect data on whether and how often judges take measures against parties for misleading use of AI. Meanwhile, a revision of the Federal Rules of Evidence is out for public comment. Under the draft of Rule 707 (https://www.uscourts.gov/sites/default/files/document/preliminary-draft-of-proposed-amendments-to-federal-rules_august2025.pdf), AI-generated evidence would be subject to the same rules as expert testimony under Rule 702 (https://www.law.cornell.edu/rules/fre/rule_702). This means courts should admit AI-generated evidence if it is more likely than not that the submission will assist the judge or jury, is based on sufficient facts or data, and reflects reliable principles and methods applied to the facts of the case.

The affected case at the U.S. District Court for New Jersey is In re CorMedix Inc. Securities Litigation, Case No. 2:21-cv-14020. The affected case at the U.S. District Court for the Southern District of Mississippi is Jackson Federation of Teachers, et al. v. Lynn Fitch, et al., Case No. 3:25-cv-00417.

(ds)

This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.