A lawyer’s use of ChatGPT in the courtroom did not end well

  • In a lawsuit involving Colombian airline Avianca, a lawyer made use of ChatGPT to do his research, with the AI platform generating several fake cases.
  • Sanctions are now being contemplated for the lawyer after he submitted a file full of fake cases.
  • ChatGPT was not able to provide sources for the fake cases but claimed they were real.

There has been much fervour surrounding generative AI platforms, and OpenAI’s ChatGPT in particular, when it comes to how they can be implemented in different industries. In the legal realm and the courtroom, however, it looks like generative AI still has some way to go.

A recent lawsuit where a lawyer was trying to sue Colombian airline Avianca did not go as expected.

According to a report by The New York Times (paywall), the lawyer submitted a brief that was full of cases involving Avianca to support his argument.

The problem, however, is that several of the past cases turned out to be fake, fabricated entirely by ChatGPT.

The lawyer in question, Steven A. Schwartz, admitted as much in an affidavit, and could now face sanctions given that as many as six of the past cases he submitted were falsified, the result of poor vetting on his part.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” noted US District Judge Kevin Castel in the case.

Schwartz explained that he queried the validity of the cases with ChatGPT, and while the generative AI apologised for some of the confusion, it insisted the cases it created were in fact real. They were not, and ChatGPT was unable to provide sources for any of them.

As such, this may be the most high-profile example to date where reliance on ChatGPT for accuracy has backfired.

Ever since the platform and others like it have risen to prominence, there have been instances where the generative capabilities of these AI platforms have resulted in fabrications or incorrect information being created.

This is commonly referred to as “hallucination”, a risk that OpenAI, Bing, Google, and others have urged users to be aware of.

Whether this will serve as a cautionary tale for those placing too much faith in generative AI, especially in its early development phase, remains to be seen, but it’s clear that Schwartz and other lawyers likely won’t be making the same mistake again.

The lawyer says he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity,” as he waits to hear what his fate will be.

[Image – Photo by Tingey Injury Law Firm on Unsplash]
