Two Judges, One Day, Two Privilege Outcomes
What Heppner and Warner Tell Us About Using AI in Legal Work
Some days you wake up and the world feels normal.
February 10, 2026 was not one of those days.
On that single day, two federal judges reached very different conclusions about whether legal materials involving AI are protected from discovery. The two cases (United States v. Heppner and Warner v. Gilbarco) didn't get coffee together beforehand, yet they walked out of court with opposite takes on the big question: When is the stuff your legal team creates with AI still privileged or protected?
One judge said no protections at all for AI-created materials. The other said yes, protect them like traditional work product.
What Happened in Heppner (S.D.N.Y.)
In United States v. Heppner, the defendant was under federal indictment and subpoena. He took information from his counsel, plugged it into a publicly accessible AI tool (Anthropic's Claude), and generated 31 documents outlining defense research and strategy. Then he shared those documents with his lawyers.
Judge Jed S. Rakoff held that the resulting materials were not protected by:
Attorney-client privilege: Claude is not a lawyer and there was no attorney-client communication through it.
Work product doctrine: The materials were created by the defendant on his own initiative, not at counsel's direction, and didn't reflect counsel's mental strategy at the time they were generated.
A key part of the court's reasoning was that the specific AI platform's privacy policy permitted collection, retention, and potential disclosure of user inputs and outputs, meaning Heppner could not reasonably expect confidentiality.
The judge also made an important point that still has attorneys taking multiple sips of coffee when they read it: simply sharing documents with counsel after they were created by a non-lawyer does not magically make them privileged.
In other words, the old rules clearly applied: privileged communications run between clients and attorneys, with confidentiality intact. These weren't.
But Then There's Warner (E.D. Mich.)… A Very Different Result
Meanwhile, in Warner v. Gilbarco in the Eastern District of Michigan, a judge denied a motion to compel production of materials related to the use of generative tools in litigation prep, treating them as work product under Federal Rule of Civil Procedure 26(b)(3).
Here's the takeaway from the Michigan court's view: When generative tools are used as part of lawyers' litigation preparation, and those materials reflect strategy or direction from counsel, they can be treated the same way courts treat traditional work product, even if AI was involved.
The Warner judge emphasized that tools like ChatGPT are "tools" rather than persons, and using them as part of legal research or drafting doesn't inherently waive protections, particularly when the work product has not been widely disclosed and remains part of counsel's strategic thinking.
The Distinction
If you glance at the headlines, you might think courts are hopelessly split on "AI and privilege."
They're not.
Both rulings stick to classic doctrines (attorney-client privilege and work product) and don't create new law about technology. What differs is the context:
Who initiated the work?
In Heppner, it was the defendant on his own.
In Warner, it was in the course of litigation preparation under counsel's direction.
Were confidentiality expectations preserved?
In Heppner, the AI platform's own policies undermined confidentiality.
In Warner, the context was within counsel's controlled processes.
Whose mental processes are reflected?
Material that reflects an attorneyâs strategy is more likely to be protected than material generated independently by someone without counsel involvement.
That's privilege doctrine 101, with the courtroom deciding how normal-world rules interact with tools that weren't around when many of those doctrines were written.
What This Means for You
If you think the takeaway from these cases is "never use AI," you'll be deeply frustrated, and also missing what the courts are telling us.
What these rulings tell us, and this is legal-as-heck practical, is that technology choices are important because they affect fundamental legal prerequisites like confidentiality and attorney involvement.
If you want protections to stick:
Keep counsel in the driverâs seat. If legal materials are AI-assisted, make sure counsel directs the creation, not the client acting on their own.
Document the context. Material created under counsel direction and maintained confidentially is more likely to meet privilege/work product standards.
Watch privacy policies closely. If the AI toolâs terms allow data pooling, training, or third-party disclosure, courts may treat that as a waiver or lack of confidentiality.
Put policies in place. Your internal AI usage policy should require enterprise-grade tools with contractual confidentiality promises for legal work.
Train the users. An employee who thinks "AI is fine because I emailed it to counsel later" is going to create real discovery headaches.
In short: privilege protections aren't gone just because AI is involved, but you're more likely to lose them if the technology obscures the legal context in which the work was done.
