How Eugene and Springfield Law Firms Should Use AI Without Creating Discoverable Evidence
Law firm leaders across Eugene, Springfield, and the broader Willamette Valley are hearing the same message from vendors, peers, and even clients: use generative AI to move faster. Draft quicker. Research smarter. Do more with less.
The reality is more nuanced. Generative AI can absolutely improve productivity for law firms and other professional services organizations. But how AI is deployed and governed determines whether it becomes a competitive advantage or a new source of legal, regulatory, and discovery risk.
A February 2026 federal decision from the Southern District of New York, United States v. Heppner, offers a clear warning. The court ruled that conversations with a public AI platform were not protected by attorney-client privilege or the work-product doctrine and were therefore discoverable by the government.
For law firms and regulated businesses in Lane County, this is not a theoretical concern. It is a leadership and governance issue that managing partners, executive directors, and operations leaders need to address now, before a well-meaning employee or client pastes sensitive information into a public chatbot.
What the Heppner Court Actually Held and Why It Matters for Oregon Firms
In Heppner, the defendant created approximately 31 documents memorializing written exchanges with Anthropic’s public Claude AI platform. Those exchanges focused on defense strategy and legal arguments. The FBI later seized the documents pursuant to a search warrant, and the government sought a ruling that the “AI Documents” were not protected by privilege or work product.
Judge Rakoff agreed. On these facts, neither attorney-client privilege nor work-product protection applied. The court’s reasoning followed three well-established privilege principles.
1. No attorney-client communication
The court emphasized that Claude “is not an attorney.” Discussions of legal issues between non-attorneys are not privileged, even if the output later finds its way to counsel.
2. No reasonable expectation of confidentiality
The court relied heavily on the AI platform’s privacy policy, which allowed for the collection of inputs and outputs, potential model training, and disclosure to third parties, including government authorities. That structure defeated any reasonable expectation of confidentiality.
3. No purpose of obtaining legal advice
Because the defendant used the AI tool without direction from counsel, the court framed the question as whether the defendant intended to obtain legal advice from the AI itself. The platform expressly disclaimed providing legal advice, and sharing the results with counsel later did not transform non-privileged communications into privileged ones.
On work product, the court emphasized that the doctrine exists to protect attorneys’ mental processes. Documents created independently by a client, without counsel’s direction and not reflecting counsel’s strategy at the time, did not qualify.
Plain-English takeaway: If clients or staff use a public AI chatbot like a search engine or brainstorming partner to analyze legal strategy, those prompts and outputs can be logged by the vendor, seized by the government, and produced in discovery.
What Heppner Does Not Say (and Why Overreaction Is a Mistake)
It would be easy to summarize Heppner as “AI always waives privilege.” That reading is neither accurate nor useful.
Legal commentary has noted that the ruling was fact-specific. The case involved a public AI tool, consumer privacy terms, and no attorney direction. Other scenarios could be analyzed differently, particularly where AI is used as a tool within an attorney-directed workflow.
Leadership takeaway: Law firm leaders in Eugene and Springfield do not need to resolve academic debates about AI and privilege. What they need are clear policies, vetted tools, and operational controls that keep sensitive information out of public platforms and ensure AI is governed like any other vendor handling confidential data.
Ethics Guidance Was Already Pointing Here, Especially in Oregon
This is not just a New York issue. Ethics authorities have been signaling the same message.
- The American Bar Association’s Formal Opinion 512 emphasizes competence, confidentiality, supervision, communication, and reasonable fees when using generative AI.
- The Oregon State Bar’s Formal Opinion 2025-205 is particularly direct for Oregon firms, requiring lawyers to understand how AI tools store data, whether they train on prompts, and how confidentiality is preserved.
- Oregon guidance also emphasizes candor and verification. Lawyers must verify AI-generated citations and legal assertions before relying on them.
In short, Heppner did not create a new risk. It made the consequences of unmanaged AI use impossible to ignore.
What Lawyers and Professional Firms Should Not Do With AI
1. Do not enter privileged or sensitive facts into public AI tools
This is the Heppner scenario: public platform, third-party terms, and no enforceable confidentiality. Keep the following out of public tools:
- Client names, facts, and timelines
- Strategy memos, deposition prep, witness outlines
- Draft settlement positions or damages models
- Anything under protective order, seal, or statutory confidentiality
2. Do not let clients “self-help” with AI during investigations or litigation
This is a client management issue, not just an IT issue. Engagement letters and litigation holds should clearly instruct clients not to paste case facts into ChatGPT, Claude, Gemini, or similar tools.
3. Do not rely on AI output without verification
A practical rule: treat AI like a summer associate. Useful for first drafts, never as authority.
4. Do not assume “private” means “safe”
If a vendor can retain prompts, use them for training, or disclose them to others, you are sharing information with a third party, and the same confidentiality analysis that defeated privilege in Heppner applies.
What AI Tools Eugene and Springfield Firms Can Use Safely
Category A: Public AI tools
Appropriate only for non-client, non-confidential work such as marketing drafts, internal training materials, and summaries of public information.
Category B: Enterprise AI inside your environment
Many Eugene and Springfield organizations already use Microsoft 365. Enterprise AI tools can be appropriate when permissions, access controls, and governance are in place. Properly configured, these tools honor existing access models, and enterprise terms typically commit the vendor not to train foundation models on organizational data.
Category C: Legal-specific AI tools under contract
Some legal AI platforms offer stronger contractual protections, but vendor due diligence and confidentiality review remain essential.
AI Governance Requires More Than a Tool
Effective AI governance depends on identity management, permissions, ethical walls, logging, retention, and vendor oversight. For many Eugene and Springfield organizations, these controls are maintained through managed IT rather than ad hoc internal efforts.
At Emerald, we help organizations across Lane County build practical AI strategy plans that fit their size and risk profile. That includes vetting AI use, defining where AI does and does not belong, and aligning new tools with existing IT controls rather than introducing unmanaged risk.
Bottom Line for Eugene and Springfield Law Firm Leaders
United States v. Heppner is not a reason to avoid AI. It is a reason to stop treating AI like a casual web search box when client confidentiality, regulatory obligations, and firm reputation are at stake.
For firms across Eugene, Springfield, and the Willamette Valley, AI can be a real advantage when paired with clear policies, vetted tools, clean permissions, and ongoing IT governance.
Call to Action
If your firm or organization is evaluating AI tools or already seeing them appear in daily workflows, Emerald helps Eugene and Springfield businesses design AI strategies that align with confidentiality, compliance, and operational reality. That includes vetting AI use, aligning it with managed IT controls, and ensuring your technology environment supports growth without creating new risk.
