Using AI as Pro Se

Since ChatGPT was released to the public in late 2022, “artificial intelligence” has been among the most discussed topics in global society, if not the single most discussed. AI promised to make our lives easier by bringing expertise to our fingertips, automating our most mundane tasks, and even making the intricacies of the justice system accessible.

Now, AI engines like ChatGPT, Gemini, Claude, and Grok are exploding in popularity, and the tech sector is experiencing its own sort of arms race. Companies are rushing to release the latest and greatest AI product, making promises about their potential to disrupt industries across the globe. Though AI is more than capable of generating a grocery list or giving you directions, problems arise when this technology is prematurely implemented in our justice system. AI in court can produce, and already has produced, dire and potentially long-lasting consequences.

When someone chooses to represent themselves in court, they become what’s called a pro se litigant. Recently, courts across the country have seen these pro se litigants utilize generative AI tools to help write court filings. After all, these AI programs can produce official-sounding legal language, case references, and arguments in seconds.

Despite its surface-level research abilities, the current state of AI makes it a risky tool for litigation, especially without the assistance of an attorney. The danger lies in a fundamental misunderstanding of what the most popular AI products do. They are not databases of verified truth. They are, instead, probabilistic engines designed to predict the next likely word in a sequence based on vast, unverified training data. They are exactly what their name says: Large Language Models (LLMs).

Family law cases, such as those involving custody, child support, equitable distribution, and alimony, have been particularly impacted. These cases often involve litigants without financial resources, and protracted litigation against an ex-spouse or co-parent can get expensive quickly.

When a pro se litigant asks an AI to “write a motion for emergency custody in North Carolina,” the output is a statistical approximation of a legal document, not a verified legal instrument. It may cite non-existent statutes, conflate North Carolina’s equitable distribution laws with California’s community property rules, or fabricate case law entirely, a phenomenon known as “hallucination.”

Rule 11 of the North Carolina Rules of Civil Procedure requires that all filings with the court be signed. It explicitly requires that the filing “is well grounded in fact and is warranted by existing law or a good faith argument for the extension, modification, or reversal of existing law, and that it is not interposed for any improper purpose, such as to harass or to cause unnecessary delay or needless increase in the cost of litigation.”

North Carolina’s courts hold pro se litigants to the same standards as attorneys when it comes to the accuracy and good faith of filings. This means if a litigant submits an AI-generated complaint or motion that isn’t well-grounded in fact and law, they may be subject to Rule 11 and the consequences thereof. These consequences, also known as sanctions, “may include an order to pay to the other party or parties the amount of the reasonable expenses incurred because of the filing of the pleading, motion, or other paper, including a reasonable attorney’s fee.”

Already, courts have begun to explicitly remind pro se litigants that they don’t get a free pass just because they don’t have a lawyer. For example, a pro se plaintiff filed an “inartful” brief in opposition to a motion, which the opposing counsel suspected was written with an AI tool. The judge acknowledged the usual “latitude” given to pro se litigants but still warned that the litigant must follow the court’s rules and that relying on generative AI “may result in sanctions or penalties when used inappropriately.”

In other words, self-representing won’t exempt litigants from the legal ramifications of submitting shoddy, AI-driven filings. In 2024, a Missouri Court of Appeals panel issued a stark reminder that the “barred and self-represented alike” are expected to be truthful and not commit fraud on the court. To drive the point home, the Court of Appeals imposed a $10,000 fine on the pro se party and dismissed their appeal as “frivolous.”

In addition to fabricating case law, generative AI can also invent entire causes of action or encourage pro se litigants to file motions that don’t exist. For example, a pro se litigant may want the Court to require an opposing party to sell their home pursuant to their separation agreement. While the appropriate filing would be a claim for “breach of contract,” an AI engine, wanting to keep its customer satisfied, may draft a “complaint for forced sale of a piece of real property.” A court, upon receiving this filing, wouldn’t be able to make heads or tails of what the complaint is asking for.

Given the ease with which AI can generate legal filings, it’s easy for a pro se litigant to overwhelm their case file with numerous pleadings, motions, and responses. Judges, attorneys, and court personnel will spend significant time reading, analyzing, and responding to these filings. If pro se litigants aren’t careful with their use of generative AI, they could be the ones paying for that time.


John Boschini is an attorney with Law Firm Carolinas and an adjunct professor at UNC Greensboro and Elon University School of Law. Since being admitted to the North Carolina State Bar in 2015, John has had a wide range of legal experience. He has represented close to a thousand clients in hearings before Administrative Law Judges in Social Security Disability hearings. He joined Law Firm Carolinas in 2021 to develop his Family Law practice and facilitated the practice’s growth into new areas of the law. 

Litigation