With ChatGPT’s assistance, a lawyer botches a legal case

The plaintiff’s attorney in a case against Avianca Airlines used the artificial intelligence chatbot ChatGPT to do legal research for a brief, which ended up citing fictitious court decisions.

The lawsuit began when Roberto Mata filed a claim against the airline Avianca after being injured.

Roberto Mata’s attorneys filed a brief citing more than six pertinent court rulings in response to Avianca’s request for the action to be dismissed. Nobody, not even the judge, could locate the rulings or quotes listed in the brief.

According to a New York Times story, it was discovered that ChatGPT had produced everything, including fake quotes, fake court rulings, and fake internal citations.

The brief’s author, Steven Schwartz, recently filed an affidavit claiming that he had used the AI-powered tool to do his legal research, and that it had produced the bogus citations.

Schwartz, a 30-year New York legal veteran, assured Judge Kevin Castel that he had no desire to mislead either the court or the airline.

AI Use in the Legal Profession: Concerns

Many people are concerned about the effects AI will have on the legal industry as well as the ethical issues surrounding its usage as it becomes more commonplace.

The problem is particularly pressing among attorneys, who have been discussing the benefits and risks of AI, according to Stephen Gillers, a professor of legal ethics at New York University School of Law.

Lawyers couldn’t “just take the output and cut and paste it into your court filings,” he continued. They had to independently validate whatever facts AI provided.

Even though the Roberto Mata v. Avianca case may have been an unusual event, it serves as a warning about the risks that might arise from the use of generative AI in the legal industry (and generally), especially if attorneys don’t take the time to confirm the material.

How ChatGPT’s Fake Citations Were Found Out

Mata filed his claim against Avianca after an airline worker struck him with a service cart on Flight 670 from El Salvador to New York on August 27, 2019.

Avianca then asked the court to dismiss the lawsuit on the grounds that the statute of limitations had expired.

Roberto Mata’s attorneys responded in March with a brief arguing that the case should proceed, quoting and referencing a number of court rulings that have since been shown to be fabricated.

Avianca’s attorneys informed Judge Castel’s court that they had been unable to find the cases cited in the brief. They could not locate Varghese v. China Southern Airlines by caption or citation, nor could they find any case resembling it.

Mr. Mata’s counsel complied with the judge’s request for copies of the decisions cited in their brief.

The compilation Mata’s attorneys provided listed eight judgments, most of them complete with the issuing court and judges, docket numbers, and dates.

Avianca’s attorneys told the court that they were unable to find these opinions in court records or legal databases, because ChatGPT had manufactured everything.

Some lawyers are celebrating ChatGPT’s failure as reassurance that generative AI won’t be able to replace attorneys as a profession anytime soon.

As demonstrated in this particular instance, ChatGPT couldn’t possibly do all the necessary tasks performed by attorneys every day. Nevertheless, if its work is rigorously checked, ChatGPT can still help lawyers be more productive.


About the Author: Ismaïl
