What Not to Do with AI

-- by B. Keith Geddings II

AI can pair wine with your dinner party entrée, color-match a jacket to your favorite shirt, and even optimize your itinerary for 48 hours in Lisbon, but it is not your lawyer. In fact, we’re just beginning to unearth the legal implications of entrusting AI chatbots with your confidential information. Here is what we’re learning from the early court decisions.

A legal case decided on February 17th has begun to help us understand how AI chatbots should, and more pressingly should not, be used when it comes to information that you want to protect (in the legal sense).

What are the key concepts here? Before we delve into the details of the case, let’s start by explaining what the attorney-client privilege and attorney work product doctrines are—these terms can be confusing. Attorney-client privilege protects communications that you have with your attorney that are made for the purpose of giving and receiving legal advice. This includes both oral and written communications. The attorney work product doctrine protects materials that are prepared by your attorney or their representatives in preparation for litigation. Together, the doctrines prevent third parties from discovering communications or documents that are deemed to be privileged or attorney work product. Even if you’re not actively involved in litigation, it is prudent to maintain privilege in communications with your attorney in case you find yourself in a dispute with investors or defending a lawsuit in the future.

What was this case? Now to the case. In United States v. Heppner, a defendant who knew he was being investigated by the government used the publicly available version of Claude (Anthropic’s AI chatbot) to produce documents outlining a defense strategy, including arguments he might use to counter the charges he anticipated the government would bring against him. After the defendant was indicted, he attempted to assert privilege over the documents he prepared with Claude, and claimed that they were subject to the attorney work product doctrine. In other words, he didn’t want to turn them over to the other side. 

In asserting that the documents produced using Claude were protected by privilege, the defendant argued that some of the information he entered into Claude was information he had learned from his counsel, that he created the documents in Claude in part for the purpose of speaking with counsel to get legal advice, and that he shared the content of the documents he produced using Claude with his counsel. When evaluating these arguments, the court noted that to maintain privilege, a communication (i) must be between a client and their attorney, (ii) must be intended to be, and in fact, kept confidential, and (iii) must be for the purpose of obtaining legal advice. The court reasoned, first, that AI is not an attorney capable of a trusting, privileged relationship; second, that Anthropic’s privacy policy states that it may disclose personal data to third parties, ruining any expectation of confidentiality; and third, that the defendant did not use Claude for legal advice because his counsel did not direct him to use it and because, if asked, Claude will tell users that it is not a lawyer and cannot provide formal legal advice. Therefore, the documents were not protected by attorney-client privilege. 

In analyzing the defendant’s claim that the documents generated by Claude were subject to the attorney work product doctrine, the court reasoned that they were not prepared by or at the direction of the defendant’s counsel. The documents also did not reflect the strategy of the defendant’s counsel. Failing to meet these requirements, the court held that the documents were not protected by the attorney work product doctrine.  

What does this mean for legal defendants? While this case makes it clear that defendants should not use publicly available versions of AI chatbots such as Claude to construct a legal defense unless instructed to do so by their attorney, it leaves other relevant questions unanswered. What would have happened if the defendant had been instructed by his attorney to use Claude, and if he had used an enterprise version of Claude that didn’t allow Anthropic to see his inputs or disclose them to third parties? What if he hadn’t created standalone documents using Claude? What if, at his attorney’s instruction, he had entered advice his attorney had given him into Claude? Would that have destroyed the privilege? What if the attorney had used Claude? It is possible that the court would have ruled differently in these scenarios. As of now, we don’t know for certain.  

What does this mean for you? There are two distinct takeaways that provide sensible guidance going forward. First, AI chatbots are not lawyers (you already knew this, but we have to say it anyway), and if you interact with a publicly available chatbot the same way you would with a lawyer, your conversation will not receive the same protection it would if you were having it with a real, human attorney. Second, it is important to pay attention to privacy policies. The court in Heppner quoted Anthropic’s privacy policy, which clearly states that information entered into Claude can be disclosed to a third party. The court also cited another case involving OpenAI, which reasoned that users have no privacy interest in information voluntarily disclosed in conversations with a platform if the platform retains those records in the normal course of business. In future cases, courts may pay similarly close attention to the terms users accepted in a chatbot’s privacy policy, so make sure you understand the terms you agree to.

Looking forward. We expect the interaction between AI chatbots and the privilege and attorney work product doctrines to continue evolving as more cases like this are heard by the courts. We will continue to monitor this space closely as it relates to our clients and to our firm. We are early adopters when it comes to considering new technologies, but we are incredibly thoughtful when it comes to rolling them out internally. Our clients’ privacy, and our own, and the protection afforded our conversations with our clients are paramount.
