Two New York lawyers have been fined after submitting a legal brief with fake case citations generated by ChatGPT.
Steven Schwartz, of law firm Levidow, Levidow & Oberman, admitted using the chatbot to research the brief in a client’s personal injury case against airline Avianca.
He had used it to find legal precedents supporting the case, but lawyers representing the Colombian carrier told the court they could not find some of the examples cited.
Several of the cited cases turned out to be completely fake, while others misidentified judges or involved airlines that did not exist.
District judge Peter Kevin Castel said Schwartz and colleague Peter LoDuca, who was named on Schwartz’s brief, had acted in bad faith and made “acts of conscious avoidance and false and misleading statements to the court”.
Portions of the brief were “gibberish” and “nonsensical”, and included fake quotes, the judge added.
While often impressive, generative AI tools like OpenAI’s ChatGPT and Google’s Bard have a tendency to “hallucinate” when giving answers, as they may not have a true understanding of the information they have been fed.
One of the concerns raised by those worried about the potential of AI is the spread of disinformation.
Asked by Sky News whether it should be used to help write a legal brief, ChatGPT itself wrote: “While I can provide general information and assistance, it is important to note I am an AI language model and not a qualified legal professional.”
Judge Castel said there is nothing “inherently improper” in lawyers using AI “for assistance”, but warned they have a responsibility to ensure their filings are accurate.
He said the lawyers had “continued to stand by the fake opinions” after the court and airline had questioned them.
Schwartz, LoDuca and their law firm were ordered to pay a total fine of $5,000 (£3,926).
Levidow, Levidow & Oberman is considering whether to appeal, saying it “made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth”.