In a twist that reads like a scene from a legal drama, Michael Cohen, best known as Donald Trump's former lawyer, fell into an AI trap. He turned to Google Bard believing it was a high-powered search engine, and he ended up citing completely made-up court cases in a serious legal filing. This isn't just another day in the courthouse; it's a cautionary tale about technology's tricky terrain.
The Misadventure Begins: Trusting AI Too Much
Michael Cohen’s journey down this unusual path began with a simple misunderstanding. Seeking an early end to his three-year term of supervised release, Cohen set out to gather supporting case law for a motion to a federal judge. That research led him to Google Bard, which he mistook for a “super-charged search engine.”
The reality, however, was far from it. Google Bard, a cousin of the better-known ChatGPT, is a generative AI chatbot: it produces fluent, convincing text, but it can just as easily fabricate details, a failure mode researchers call “hallucination.”
The Plot Thickens: AI-generated Court Cases in Legal Documents
The motion, prepared with the help of AI, included citations to court cases. But there was a catch: the cases were entirely fictional, a product of the chatbot’s imagination. US District Judge Jesse Furman, upon reviewing the letter brief, found that “none of these cases exist.” That discovery prompted a whirlwind of questions directed at Cohen’s lawyer, David Schwartz, and brought Cohen’s significant error to light.
Cohen’s Confession: A Misunderstanding of Technology
In a written statement, Cohen said he had not intended to mislead the court. He had used Google Bard for legal research based on a misunderstanding of its capabilities, and he acknowledged that he had not kept up with emerging trends in legal technology, especially the risks posed by generative text services like Google Bard and ChatGPT.
- Misinterpreting AI as a Reliable Source: Cohen thought of Google Bard as a highly efficient search engine, not realizing its potential to create fictitious data.
- The Risk of AI in Legal Contexts: The incident highlights how AI-generated content, if not verified, can lead to serious legal mishaps.
- The Human-AI Trust Conundrum: Cohen’s situation underscores the importance of understanding and critically assessing AI-generated information.
Not an Isolated Incident: AI’s Growing Influence in Legal Arenas
Interestingly, Cohen’s case is not a lone example of AI’s entanglement with legal proceedings. Earlier in 2023, two New York lawyers were fined for including bogus, ChatGPT-generated court cases in a legal brief. And the trial lawyer for rapper Pras Michél reportedly used a generative AI tool to help draft his closing argument, a choice Michél’s new legal team later cited in arguing for a new trial after his guilty verdict.
A Cautionary Tale for the Digital Age
This intriguing incident with Michael Cohen and Google Bard serves as a modern-day cautionary tale. It highlights the increasingly blurred lines between technology and human judgment. As we navigate this new digital landscape, it becomes crucial to approach AI with a blend of curiosity and caution.
- Verify Before You Trust: Always cross-check information, especially when it comes from an AI; a minimal verification sketch follows this list.
- Understand the Tools You Use: Grasp the capabilities and limitations of AI tools like Google Bard and ChatGPT.
- The Balance of Human and AI Collaboration: Use AI as an aid, not a replacement, for human expertise and judgment.
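To make the first of these points concrete, here is one way an automated first pass at that cross-check might look: query a public case-law database for each citation an AI tool produced, and flag anything with zero hits for human review. This is a minimal sketch in Python; the CourtListener search endpoint, its query parameters, and the “count” field in the response are assumptions about that API rather than guarantees, and both case names are purely hypothetical. A hit does not validate a citation and a miss does not prove fabrication; either way, a lawyer still has to read the actual opinion.

```python
# Minimal sketch: flag AI-supplied case citations that a public case-law
# search cannot find, so a human knows to verify them by hand.
# ASSUMPTIONS: the CourtListener endpoint, the "q"/"type" query parameters,
# and the "count" response field are illustrative guesses at that API,
# and the case names below are hypothetical.
import requests

SEARCH_URL = "https://www.courtlistener.com/api/rest/v3/search/"  # assumed endpoint

def found_in_case_law_search(case_name: str) -> bool:
    """Return True if a full-text opinion search reports at least one hit."""
    resp = requests.get(
        SEARCH_URL,
        params={"q": case_name, "type": "o"},  # "o" = opinions (assumed)
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("count", 0) > 0

if __name__ == "__main__":
    ai_supplied_citations = [
        "Example v. Placeholder",         # hypothetical
        "United States v. Hypothetical",  # hypothetical
    ]
    for name in ai_supplied_citations:
        if found_in_case_law_search(name):
            print(f"{name}: match found; still read the opinion itself")
        else:
            print(f"{name}: NOT FOUND; treat as suspect until verified by hand")
```

Even a crude check like this would have surfaced the problem in Cohen’s filing: fabricated citations simply do not turn up in any real reporter or database.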
Michael Cohen’s accidental reliance on AI-generated court cases is a fascinating example of the complexities and pitfalls of our digital era. It serves as a reminder to approach emerging technologies with both eagerness and a healthy dose of skepticism, ensuring that we harness their power without falling prey to their limitations.