"The risks of using ChatGPT and other similar tools for legal purposes was recently quantified in a January 2024 study: Matthew Dahl et. al, "Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models" (2024) arxIV:2401.01301. The study found that legal hallucinations are alarmingly prevalent, occurring between 69% of the time with ChatGPT 3.5 and 88% with Llama 2. It further found that large language models ("LLMs") often fail to correct a user's incorrect legal assumptions in a contrafactual question setup, and that LLMs cannot always predict, or do not always know, when they are producing legal hallucinations. The study states that '[t]aken together, these findings caution against the rapid and unsupervised integration of popular LLMs into legal tasks."

The Honourable Justice D.M. Masuhara,
Zhang v. Chen, at paragraph 38

As a lawyer who has devoted his entire career to providing legal services, such as legal research, to other lawyers and institutional clients, I am given pause by the growing popularity of artificial intelligence ("AI") tools and the demands for their use in the legal community, and I wonder whether my job will soon be replaced by a machine. Although that day may come, court cases continue to demonstrate that it has not arrived yet and that lawyers, paralegals and self-represented litigants would be wise to avoid the use of AI for legal research. Among other things, a lawyer who presents fake cases to a court risks being ordered to pay costs for their indiscretion, in addition to being judicially scorned in a publicly available court decision.

In Zhang v. Chen, 2024 BCSC 285 (CanLII), a lawyer was ordered to pay some of her opponent's costs after citing two fake cases in a notice of application she filed with the court.

The parties were involved in a family law dispute. The father wanted parenting time in China with the parties' three children, who resided with their mother in British Columbia. In the notice of application, the father's lawyer referred to cases that supported the father's position. However, these cases had been generated by AI and were non-existent.

Before the matter was determined by the court, the mother's lawyers discovered that the cases were non-existent and requested copies of them from the father's lawyer.

After repeated requests, the father's lawyer finally apologized for the reference to the fake cases and indicated that other cases would be relied upon at the hearing. The lawyer also prepared an email to the court apologizing for citing the non-existent cases in the notice of application and admitting that they were fake.

The court ruled against the father's application and, in rendering its costs decision, was required to consider whether the father's lawyer should pay any of the costs of the mother's lawyers.

While the mother's lawyers sought special costs under British Columbia's Supreme Court Family Rules and the inherent jurisdiction of the court, the father's lawyer submitted that special costs were not warranted because the fake cases had been discovered and withdrawn in advance of the hearing of the application and there had been no intentional act to mislead the court.

In an affidavit, the father's lawyer stated that she had been unaware of the risks of using AI for legal research and that she had no intention of generating or referring to fictitious cases in the proceedings. The lawyer described being remorseful about her conduct and deeply embarrassed by the matter. Indeed, her use of non-existent cases had attracted public media attention.

In the circumstances, the court declined to award special costs against the father's lawyer. The judge explained that awarding special costs against a lawyer was an extraordinary remedy that required a finding of reprehensible conduct or abuse of process by the lawyer.

Although the judge stated that citing fake cases in court filings and other materials handed up to the court was an abuse of process that could lead to a miscarriage of justice, the following mitigating circumstances existed:

  • the fake cases had been withdrawn before the hearing;
  • the fake cases would not have been argued in support of the application;
  • the mother's lawyers were well resourced and would have discovered that the cases were non-existent before the hearing; and
  • the requirement to file a book of authorities would have resulted in the discovery that the cases were non-existent before the hearing.

Notwithstanding the foregoing, the father's lawyer was held responsible for part of the costs of the mother's lawyers under rule 16-1(30)(c) and (d) of the Supreme Court Family Rules.

Under these rules, there was no requirement for reprehensible conduct on the part of the lawyer or for abuse of process. The father's lawyer was unaware of various notices from the governing Law Society about AI-generated materials. She was also unaware that output from AI could be inaccurate and that the use of ChatGPT was not a substitute for professional advice.

As well, the court found that, as a result of the insertion of fake cases into the notice of application and the delay in remedying the confusion that they had created, the mother's lawyers were required to spend time and incur expense to deal with the cases. Accordingly, costs, including disbursements reasonably incurred, were awarded personally against the father's lawyer for four half-days, the equivalent of two full hearing days.

At the time the judge made this costs award against the father's lawyer, two additional cases from the United States highlighted the frailties of using AI for legal research. In the Missouri case of Kruse v. Karlen, a self-represented litigant filed a brief in which 22 of the 24 cases cited were fictitious; costs of $10,000 were awarded against the litigant. In the Massachusetts case of Smith v. Farwell, a lawyer filed three separate legal memoranda that cited and relied on fictitious cases; costs of $2,000, payable to the court, were awarded against the lawyer. These cases and others have received much media attention in the legal community, and it is puzzling that some lawyers continue to use AI to generate case law for briefs to be filed in court without conducting basic due diligence to confirm that the cases do in fact exist.

The key takeaway from Zhang v. Chen and the two U.S. cases is that lawyers, paralegals and self-represented parties should avoid using AI to conduct legal research. Cutting corners in legal research will only lead to embarrassment and loss of reputation for a professional who uses fake cases in court materials, and can result in, among other potential penalties, significant personal costs awards.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.