Two New York lawyers have been fined for submitting legal briefs with fake case citations generated by ChatGPT.
Steven Schwartz of law firm Levidow, Levidow & Oberman admitted to using the chatbot to research a brief in a client's personal injury case against airline Avianca.
He used it to find legal precedents supporting the case, but lawyers representing Avianca told the court they could not locate several of the examples cited. That was because they did not exist: some were entirely invented, while others misidentified judges or involved non-existent airlines.
District Judge Peter Kevin Castel said Schwartz and his colleague Peter LoDuca, whose name was on the brief, had acted in bad faith and made "acts of conscious avoidance and false and misleading statements to the court".
Portions of the brief were "nonsensical" and contained false citations, the judge added.
While generative AI tools such as OpenAI's ChatGPT and Google's Bard can be impressive, they have a tendency to "hallucinate", producing confident answers that are false, because they do not truly understand the information they generate. This is one of the concerns raised by those worried about AI's potential to spread disinformation.
Asked by Sky News if it should be used to help write legal summaries, ChatGPT itself wrote: "While I can provide general information and assistance, it is important to note that I am an AI language model and not a qualified legal professional."
Judge Castel said there was nothing "inherently improper" in lawyers using artificial intelligence "for assistance", but warned that they had a duty to ensure the documents they submitted were accurate.
He said the lawyers had "continued to stand by the fake opinions" even after the court and the airline challenged them.
Schwartz, LoDuca and their law firm were ordered to pay a total of $5,000 (£3,926) in fines.
Levidow, Levidow & Oberman is considering whether to appeal, saying its lawyers "made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth".