Are You Violating Your NDA or Ethical Obligations by Using AI Tools?

As artificial intelligence tools like ChatGPT, Copilot, and Gemini become commonplace in professional environments, a serious question arises: does their use expose organizations and licensed professionals to legal and ethical risk?

The issue is simple, but often overlooked: pasting confidential or proprietary information into AI tools may violate your contractual obligations under a non-disclosure agreement (NDA) or your professional duty of confidentiality.

The NDA Problem

Most NDAs prohibit disclosing confidential information to third parties. That includes external contractors, vendors, and yes—cloud-based AI platforms.

When you paste language from a confidential agreement, client memo, internal strategy document, or draft policy into a public AI tool, you may be transmitting that information to a third-party provider that is outside the scope of your NDA. This can constitute a breach, regardless of whether the AI tool stores or shares the information later.

The issue isn’t about bad intent or misuse of data. The legal exposure can stem from the unauthorized transmission itself.

Many NDAs include language along these lines:

“Recipient shall not disclose Confidential Information to any third party without the prior written consent of Discloser…”

Sending that data to an AI system run by a third-party cloud provider could meet the definition of “disclosure,” even if it’s for the purpose of rephrasing, summarizing, or analyzing the content.

Confidentiality Obligations for Professionals

For attorneys, accountants, physicians, and other licensed professionals, the analysis doesn’t end with contract terms. Ethical and fiduciary duties also apply.

For lawyers, Rule 1.6 of the ABA Model Rules of Professional Conduct prohibits the disclosure of client information unless the client provides informed consent or the disclosure is otherwise permitted.

Using AI tools in a way that transmits confidential client information—without safeguards or client consent—may violate professional rules, even if done with good intentions. The fact that the platform doesn’t “store” the data permanently doesn’t change the analysis. The focus is on whether reasonable care was taken to avoid disclosure.

Practical Tips to Avoid Breach

While AI tools can provide real value, they must be used with care. Here are a few guidelines to mitigate risk:

Avoid pasting unredacted confidential or proprietary material

If the information isn’t public and wasn’t created by you independently, assume it’s protected and don’t paste it into an AI tool unless you’re certain you have the right to do so.
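For teams that want a technical backstop in addition to policy, a pre-submission redaction pass can catch obvious identifiers before text leaves your environment. The sketch below is illustrative only and assumes a caller-supplied list of party names; pattern-based redaction misses many identifier types and is no substitute for human review or a proper data-loss-prevention tool.

```python
import re

def redact(text, party_names=()):
    """Replace common identifiers with placeholders before text is
    sent to any external service. Illustrative sketch only: regex
    matching misses many identifier types and does not replace
    human review."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # US-style phone numbers (e.g., 555-123-4567)
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    # Known party names supplied by the caller
    for name in party_names:
        text = re.sub(re.escape(name), "[PARTY]", text, flags=re.IGNORECASE)
    return text

print(redact("Contact Jane Doe at jane.doe@acme.com or 555-123-4567.",
             party_names=["Jane Doe", "Acme"]))
```

Running the email pattern before the name list matters: once an address collapses to a placeholder, a party name embedded in it (here, "acme") cannot leak through a later pass.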

Use enterprise-grade tools with contractual protections

Consumer-facing versions of AI platforms may not offer the privacy guarantees your situation requires. Consider enterprise versions that explicitly commit not to train on or store data—and confirm this in writing.

Implement internal AI use policies

Many NDA breaches are unintentional and stem from unclear internal practices. Establish clear written guidance for employees and contractors regarding what may and may not be input into AI platforms.

Train your team

Education is essential. Make sure anyone bound by an NDA—or who handles client or proprietary information—understands how AI tools work and what constitutes a potential breach.

If you’re a licensed professional, understand your ethical obligations

Attorneys, in particular, should treat AI prompts with the same caution they would give to communications with nonprivileged third parties. When in doubt, redact or don’t input the information. Further, consider adding AI disclosure practices to client engagement letters.

Final Thought

Confidentiality and NDAs are not suspended simply because a new tool appears on the market. The same principles apply—perhaps with even more urgency, given the ease with which information can now be transmitted to external systems.

We work with a wide range of businesses, professional firms, and technology providers to help them adopt new tools without creating unnecessary legal risk. This is particularly important in digital transformation and AI integration projects, where new systems and processes are being developed at scale. If your organization uses AI—or is planning to—it’s worth reviewing your confidentiality agreements and internal policies before you go further. Contact us; we can help.
