
ChatGPT cited “false” cases for a New York federal lawsuit




The Thurgood Marshall Courthouse is pictured in Manhattan, New York, on October 15, 2021.

Brendan McDermid | Reuters

Roberto Mata’s lawsuit against Avianca Airlines was not that different from many other personal injury lawsuits filed in New York federal court. Mata and his lawyer, Peter LoDuca, alleged that Avianca caused Mata’s injuries when he was “struck by a metal serving cart” on board a 2019 flight bound for New York.

Avianca moved to dismiss the case. Mata’s lawyers predictably opposed the motion, citing a series of legal decisions, as is typical in courtroom battles. Then everything fell apart.

Avianca’s lawyers told the court that they could not find many of the court cases that LoDuca had cited in his response. Federal Judge P. Kevin Castel ordered LoDuca to provide copies of nine court decisions that had apparently been used.

In response, LoDuca filed the full text of eight cases in federal court. But the problem only deepened, Castel said in a filing, because the texts were fictitious, citing what appeared to be “false court decisions with false citations and false internal citations.”

The culprit, it would eventually emerge, was ChatGPT. OpenAI’s popular chatbot had been “hallucinating” – a term for when artificial intelligence systems simply invent false information – spitting out cases and arguments that were purely fiction. It appeared that LoDuca and another attorney, Steven Schwartz, had used ChatGPT to generate the motions and subsequent legal text.

Schwartz, an associate at the law firm Levidow, Levidow & Oberman, told the court that he had been the one working with ChatGPT and that LoDuca had “no role in conducting the research in question” nor “any knowledge of how said research was conducted.”

When opposing counsel and the judge first realized that the cases did not exist, the lawyers involved had an opportunity to admit the error.

Instead, LoDuca and his firm appeared to double down on their use of ChatGPT, relying on it not only for the original problematic filing but also to generate the bogus legal rulings when asked to produce them. Now LoDuca and Schwartz could face legal sanctions, a move that could even lead to a ban from practicing law.

The defense motion was “filled with citations to non-existent cases,” according to a court filing.

“The court is presented with an unprecedented circumstance,” Castel said. He set a hearing for June 8 when both LoDuca and Schwartz will be called to explain themselves. Neither attorney responded to CNBC’s request for comment.
