Lawyers blame ChatGPT for tricking them into citing false case law
NEW YORK (AP) – Two apologetic lawyers responding to an angry Manhattan federal court judge blamed ChatGPT on Thursday for tricking them into including fictitious legal research in a lawsuit.
Attorneys Steven A. Schwartz and Peter LoDuca are facing possible penalties for a lawsuit against an airline that included references to past lawsuits that Schwartz believed were real but were actually invented by the artificial intelligence-powered chatbot.
Schwartz explained that he used the groundbreaking program when he was hunting for legal precedents that supported a client's case against the Colombian airline Avianca for an injury sustained on a 2019 flight.
The chatbot, which has mesmerized the world with its production of essay-like answers to questions from users, suggested several cases involving aviation accidents that Schwartz had been unable to find through normal methods used in his law firm.
The problem was that several of these cases were not real or involved airlines that did not exist.
Schwartz told U.S. District Judge P. Kevin Castel that he was “operating under a misapprehension … that this website obtained these cases from a source that I did not have access to.”
He said he “totally failed” to do follow-up investigations to ensure the quotes were correct.
“I did not realize ChatGPT could fabricate cases,” Schwartz said.
Microsoft has invested around $1 billion in OpenAI, the company behind ChatGPT.
ChatGPT’s success, which shows how artificial intelligence can change the way people work and learn, has raised fears among some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Judge Castel appeared both confused and disturbed by the unusual incident and disappointed that the lawyers did not act quickly to correct the false legal references when they were first alerted to the problem by Avianca’s lawyers and the court. Avianca pointed out the false case law in a filing in March.
The judge confronted Schwartz with one legal case invented by the chatbot. It was initially described as a wrongful-death lawsuit filed by a woman against an airline, only to morph into a legal claim by a man who missed a flight to New York and was forced to incur additional expenses.
“Can we agree that’s legal gibberish?” Castel asked.
Schwartz said he mistakenly believed the confusing presentation was the result of excerpts being pulled from different parts of the case.
When Castel finished his questioning, he asked Schwartz if he had anything else to say.
“I want to apologize,” Schwartz said.
He added that he had suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and extremely remorseful.”
He said he and the firm where he worked – Levidow, Levidow & Oberman – had put in place safeguards to ensure nothing like this happens again.
LoDuca, another attorney who worked on the case, said he trusted Schwartz and did not carefully review what his colleague had compiled.
After the judge read aloud portions of a cited case to show how easy it was to see that it was “gibberish,” LoDuca said, “It never occurred to me that this was a bogus case.”
He said the result “pains me endlessly.”
Ronald Minkoff, an attorney for the law firm, told the judge that the filing was “the result of carelessness, not bad faith” and should not result in sanctions.
He said lawyers have historically had a hard time with technology, especially new technology, “and it’s not getting any easier.”
“Mr. Schwartz, someone who hardly does any federal research, chose to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live ammunition.”
Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he presented the Avianca case during a conference last week that attracted dozens of participants in person and online from state and federal courts in the United States, including Manhattan federal court.
He said the topic caused shock and confusion at the conference.
“We’re talking about the Southern District of New York, the federal district that handles big cases, 9/11 to all the big financial crimes,” Shin said. “This was the first documented case of potential professional misconduct by a lawyer using generative AI.”
He said the case demonstrated how the lawyers may not have understood how ChatGPT works, because it tends to hallucinate, describing fictional things in a way that sounds realistic but isn’t.
“It highlights the dangers of using promising AI technologies without knowing the risks,” Shin said.
The judge said he would rule on sanctions at a later date.