A U.S. lawyer has been publicly called out for submitting AI-generated citations and quotes in a Supreme Court filing, and then doubling down with more AI hallucinations in his defense. According to IGN, the attorney's response to the judge's inquiry included additional fabricated references, all produced by artificial intelligence.
This isn't just a tech misstep. It's a breach of professional trust: a moment where convenience overrode accountability, and the courtroom became a cautionary tale about the limits of delegating high-stakes work to a machine.
The incident began when the lawyer submitted a legal brief to the U.S. Supreme Court containing AI-generated citations and quotes, many of which didn't exist. Such fabricated references are known as hallucinations: plausible-sounding material the program invents. When questioned by the judge, the attorney responded with a defense that included further AI-generated material, making the error even more glaring.
This wasn't a one-off mistake. It was a feedback loop of false authority dressed up in legal language. And in a space where precision is non-negotiable, the consequences were swift.
The judge's response was clear: "This is not how we practice law." That line deserves a place in every discussion of AI and the law, a boundary drawn between human discernment and synthetic confidence.
The lawyer claimed he was unaware that AI could fabricate citations, a limitation many everyday users already understand. But as judges across the U.S. have said time and again, ignorance is no excuse.
Legal systems aren't just bureaucratic; they're a symbol of order. They rely on precedent, precision, and the weight of language. When AI is used without verification, it doesn't just introduce error. It fractures the public's trust in the legal system. And when that happens at the Supreme Court, the fallout isn't merely professional; it reverberates far beyond one case.
This case isn't about banning AI. It's about boundaries: remembering that tools are not all-knowing, and that in life-changing systems like law, medicine, and storytelling, truth isn't optional.
AI can assist. It can accelerate. But it cannot replace human verification. This lawyer chose to leave the tedious work to the AI, which isn't bad in itself, but he failed to use the tool responsibly. In doing so, he has likely fractured the trust citizens place in their lawyers, leaving them to wonder whether their cases are properly handled or simply handed off to the nearest AI system.