Anything You Tell an AI Can Be Used Against You in Court, Lawyers Warn

As artificial intelligence becomes a go-to sounding board for everyday problems, U.S. lawyers are issuing an urgent warning to their clients: do not treat chatbots like trusted confidants if your legal liability or freedom is on the line.

Because AI chatbots are not licensed attorneys, sharing sensitive case details with them could effectively destroy attorney-client privilege, allowing prosecutors and litigation adversaries to demand your chat logs in court.

The Catalyst: The New York Ruling

The urgency surrounding this issue skyrocketed following a recent ruling by Manhattan-based U.S. District Judge Jed Rakoff.

  • The Case: Bradley Heppner, the former chair of a bankrupt financial services company facing securities and wire fraud charges, used Anthropic’s Claude to prepare reports about his case for his defense team.
  • The Ruling: Heppner’s lawyers tried to withhold the AI exchanges, but prosecutors argued that because defense lawyers weren’t directly involved—and because attorney-client privilege doesn’t apply to bots—they had a right to the material. Judge Rakoff agreed, ordering Heppner to hand over 31 documents generated by Claude.
  • The Reasoning: Rakoff explicitly stated that no attorney-client relationship exists, “or could exist, between an AI user and a platform such as Claude.” Furthermore, both OpenAI’s and Anthropic’s terms of service state that users have no expectation of privacy in their inputs.

A Conflicting View: The Michigan Ruling

The courts are still actively grappling with how to treat AI, leading to some contradictory decisions. On the same day as Rakoff’s ruling, U.S. Magistrate Judge Anthony Patti in Michigan made a completely different call.

In a lawsuit brought by a woman representing herself against her former employer, the judge ruled she did not have to hand over her ChatGPT logs. Judge Patti treated the AI chats as her personal legal “work product,” noting that generative AI programs “are tools, not persons” — unlike a human confidant, there is no third party the employer could cross-examine.

How Lawyers Are Setting Guardrails

With the law still unsettled, more than a dozen major U.S. law firms are proactively updating client contracts and issuing strict guidelines to prevent accidental data leaks:

  • Beware of Third Parties: Firms are explicitly stating in hiring agreements that feeding a lawyer’s advice into a third-party AI platform may constitute a waiver of attorney-client privilege.
  • Use Closed Systems: Lawyers suggest that “closed” AI systems designed specifically for secure corporate use might offer stronger protections than public chatbots, though this remains largely untested in court.
  • Strategic Prompting: If a client must use AI for legal research at their lawyer’s behest, firms like Debevoise & Plimpton suggest starting the prompt with a specific disclaimer: “I am doing this research at the direction of counsel for X litigation.”

Until the courts establish universal rules regarding artificial intelligence as evidence, attorneys are advising clients to stick to the golden rule of litigation: Do not talk to anyone about your case except your human lawyer.
