Here’s What Happens When Your Lawyer Uses ChatGPT

The lawsuit began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal service cart struck his knee during a flight to Kennedy International Airport in New York.

When Avianca asked a federal judge in Manhattan to dismiss the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its scholarly discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”

There was only one problem: no one, not the airline’s lawyers, not even the judge himself, could find the decisions or quotes cited and summarized in the brief.

That was because ChatGPT had invented the whole thing.

The attorney who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court on Thursday, saying in an affidavit that he had used the artificial intelligence program to do his legal research: “a source that has revealed itself to be unreliable.”

Mr. Schwartz, who has practiced law in New York for three decades, told Judge P. Kevin Castel that he had no intention of misleading the court or the airline. Mr. Schwartz said that he had never used ChatGPT before, and “was therefore unaware of the possibility that its content could be false.”

He told Judge Castel that he had even asked the program to verify that the cases were real.

It had said yes.

Mr. Schwartz said he “greatly regrets” having relied on ChatGPT “and will never do so in the future without absolute verification of its authenticity.”

Judge Castel said in an order that he had been presented with “an unprecedented circumstance”: a legal filing replete with “bogus judicial decisions, with bogus quotes and bogus internal citations.” He ordered a hearing for June 8 to discuss possible sanctions.

As artificial intelligence sweeps the online world, it has conjured up dystopian visions of computers replacing not just human interaction, but human labor as well. The fear has been especially intense for knowledge workers, many of whom worry that their daily activities may not be as rarefied as the world thinks, but are nonetheless what the world pays billable hours for.

Stephen Gillers, a professor of legal ethics at New York University School of Law, said the problem was particularly acute among lawyers, who have been debating the value and the dangers of AI software like ChatGPT, as well as the need to verify whatever information it provides.

“The discussion now among the bar is how to avoid exactly what this case describes,” Gillers said. “You can’t just take the output and cut and paste it into your court papers.”
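To make Mr. Gillers’s point concrete, here is a purely hypothetical sketch of the kind of pre-filing check he is describing. The `lookup_citation` helper and its contents are invented for illustration; in practice, the lookup would be a query against a real legal research service such as Westlaw or LexisNexis, not the stub shown here.

```python
# Hypothetical sketch: verify every cited case before filing.
# `lookup_citation` stands in for a real legal-database query;
# its logic is invented for illustration and is NOT a real API.
CITED_CASES = [
    "Varghese v. China Southern Airlines",  # fabricated by ChatGPT
    "Zicherman v. Korean Air Lines",        # a real case
    "Martinez v. Delta Air Lines",          # a real case
]

def lookup_citation(case_name: str) -> bool:
    """Hypothetical stand-in: pretend this set is the database's holdings."""
    database = {"Zicherman v. Korean Air Lines",
                "Martinez v. Delta Air Lines"}
    return case_name in database

# Any case the database cannot confirm must be treated as suspect.
unverified = [case for case in CITED_CASES if not lookup_citation(case)]
if unverified:
    print("Do not file; could not verify:", ", ".join(unverified))
```

Run against the citations in Mr. Mata’s brief, a check like this would have flagged Varghese immediately.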

The real-life case of Roberto Mata v. Avianca Inc. shows that white-collar professions may have at least a little time before the robots take over.

It began when Mr. Mata was a passenger on Avianca Flight 670 from El Salvador to New York on Aug. 27, 2019, and an airline employee struck him with the service cart, according to the lawsuit. After Mr. Mata filed the lawsuit, the airline filed papers asking that the case be dismissed because the statute of limitations had expired.

In a brief filed in March, Mr. Mata’s lawyers said the lawsuit should proceed, bolstering their argument with references to and quotations from the many court decisions that have since been discredited.

Soon, Avianca’s lawyers wrote to Judge Castel, saying they could not find the cases cited in the brief.

When it came to Varghese v. China Southern Airlines, they said they “had not been able to locate this case by title or citation, nor any case that resembled it.”

They pointed to a lengthy quotation from the purported Varghese decision contained in the brief. “The undersigned has not been able to locate this quotation, or anything similar, in any case,” Avianca’s lawyers wrote.

In fact, the lawyers added, the quotation, purportedly from the Varghese decision itself, cited something called Zicherman v. Korean Air Lines Co. Ltd., an opinion purportedly issued by the U.S. Court of Appeals for the 11th Circuit in 2008. They said they could not find that, either.

Judge Castel ordered Mr. Mata’s lawyers to provide copies of the opinions referred to in their brief. The lawyers submitted a compendium of eight; in most cases, they listed the court and the judges who issued them, the docket numbers and the dates.

The copy of the alleged Varghese decision, for example, is six pages long and says it was written by a member of a three-judge panel for the 11th Circuit. But Avianca’s lawyers told the judge they couldn’t find that opinion, or the others, in court files or legal databases.

Bart Banino, a lawyer for Avianca, said his firm, Condon & Forsyth, specializes in aviation law and that its lawyers could tell the cases in the brief were not real. He added that they had an indication that a chatbot might have been involved.

Schwartz did not respond to a message seeking comment, nor did Peter LoDuca, another attorney for the firm, whose name appeared in the brief.

Mr. LoDuca said in an affidavit this week that he did not conduct any of the investigations in question and had “no reason to doubt the sincerity” of Mr. Schwartz’s work or the authenticity of the opinions.

ChatGPT generates realistic responses by guessing which pieces of text should follow which other sequences, based on a statistical model that has ingested billions of examples of text pulled from the internet. In Mr. Mata’s case, the program appears to have discerned the labyrinthine framework of a written legal argument, but populated it with names and facts from a bouillabaisse of existing cases.
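For readers curious about the mechanism, here is a toy illustration, assuming nothing about OpenAI’s actual models, of next-word guessing from statistics. A real system uses a neural network trained on billions of examples rather than the bigram table below, but the failure mode is analogous: the output is whatever looks statistically plausible, with no check on whether it is true.

```python
# A minimal sketch of statistical next-word prediction (a bigram model),
# not ChatGPT's actual architecture. The model picks each next word based
# on which words followed it in the example text it has seen.
import random
from collections import defaultdict

corpus = (
    "the court held that the claim was barred "
    "the court held that the stay tolled the statute of limitations "
    "the panel held that the claim was timely"
).split()

# Count which word follows which (the "statistics" the model learns).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 10) -> str:
    word, out = start, [start]
    for _ in range(length):
        options = following.get(word)
        if not options:
            break
        word = random.choice(options)  # guess a plausible next word
        out.append(word)
    return " ".join(out)

print(generate("the"))
# The output reads like legal prose, but nothing guarantees the "holding"
# it describes ever existed -- which is how fabricated citations arise.
```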

Judge Castel, in his order for the hearing, suggested that he had done his own investigation. He wrote that the clerk of the 11th Circuit had confirmed that the docket number printed in Varghese’s purported opinion related to an entirely different case.

Calling the opinion “bogus,” Judge Castel noted that it contained internal citations and quotations that were, in turn, nonexistent. He said that five of the other decisions submitted by Mr. Mata’s lawyers also appeared to be fake.

On Thursday, Mata’s lawyers offered sworn statements with their version of what happened.

Mr. Schwartz wrote that he had originally filed Mr. Mata’s lawsuit in state court, but that after the airline had it transferred to federal court in Manhattan, where Mr. Schwartz is not admitted to practice, one of his colleagues, Mr. LoDuca, became the attorney of record. Mr. Schwartz said that he had continued to do the legal research, in which Mr. LoDuca had no role.

Mr. Schwartz said that he had consulted ChatGPT “to supplement” his own work and that, “in consultation” with it, he had found and cited the half-dozen nonexistent cases. He said that ChatGPT had offered reassurances.

“Is varghese a real case,” he wrote, according to a copy of the exchange that he submitted to the judge.

“Yes,” the chatbot replied, offering a citation and adding that “it’s a real case.”

Mr. Schwartz dug deeper.

“What is your source?” he wrote, according to the filing.

“I apologize for the earlier confusion,” ChatGPT replied, offering a legal citation.

“Are the other cases you provided fake?” Schwartz asked.

ChatGPT responded: “No, the other cases I provided are real and can be found in reputable legal databases.”

But unfortunately, they could not be.

Sheelagh McNeil contributed research.

