A prominent New York attorney is facing criticism after employing ChatGPT for legal research in a lawsuit against Avianca Airlines, a Colombian carrier.
Steven Schwartz, a lawyer at Levidow, Levidow & Oberman, was retained by Robert Mata to pursue a personal injury claim stemming from an incident involving a serving cart on a 2019 flight with the airline, according to a May 28 report from CNN Business.
The case took an unexpected turn when the presiding judge flagged inconsistencies and factual errors in the submitted documentation. In a sworn affidavit dated May 24, Schwartz admitted to relying on ChatGPT for his legal research, saying he had been unaware that the platform could produce false information.
The judge’s April 5 court filing detailed the problems: six of the submitted judicial decisions appeared to be fabricated, complete with bogus quotes and spurious internal citations; references to certain other cases pointed to decisions that did not exist; and one filing mixed up docket numbers.
According to the affidavit, “[Schwartz] deeply laments the decision to incorporate generative artificial intelligence into the research conducted in this matter and pledges to never employ it again without rigorous verification of its authenticity.”
The use of ChatGPT in professional settings has sparked an ongoing debate about the tool’s capabilities and limitations, with recent reports suggesting its abilities are advancing rapidly.
Nonetheless, many developers remain skeptical that it can entirely replace human workers. Syed Ghazanfer, a blockchain developer, said he favors ChatGPT but doubts it can match the communication skills of human counterparts.
“To fully replace human workers, ChatGPT must comprehend requirements that surpass the confines of natural English communication. That is precisely why programming languages were invented,” Ghazanfer asserted.