A Reuters report published on October 23, 2025, details how two U.S. federal judges acknowledged that the use of artificial intelligence (AI) in drafting court decisions contributed to errors. (Reuters)
What happened
- Julien Xavier Neals, a U.S. District Judge in New Jersey, stated in a letter that a draft decision in a securities lawsuit “was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.” The letter revealed that a law school intern had used ChatGPT for research without authorization or disclosure. (Reuters)
- Henry Wingate, a U.S. District Judge in Mississippi, said in his letter that a law clerk used the AI tool Perplexity “as a foundational drafting assistant to synthesize publicly available information on the docket.” He described the issuance of the draft decision as “a lapse in human oversight.” (Reuters)
- Both judges indicated that the decisions in question did not go through their chambers’ “typical review processes” before issuance. (Reuters)
Why it matters
- The acknowledgment by federal judges that AI-assisted work contributed to judicial errors raises concerns about oversight, quality control, and transparency in the use of technology in court systems.
- That AI was used without disclosure to the court, and apparently without proper vetting, underscores the need for clear policies and review mechanisms when new technologies enter judicial workflows.
- The use of generative AI by court staff or interns carries risks: inaccurate legal analysis, factual errors, and harm to litigants' rights if mistakes go undetected or uncorrected.
- The adoption of written AI policies and enhanced review procedures in both judges' chambers reflects a recognition that AI and automated tools in the judiciary cannot be left to ad hoc or informal processes.
- A Senate committee chair's call for stronger guidelines signals that this is likely to become a matter of legislative or regulatory attention rather than purely internal court administration.