Hidden AI and the Shadow Agent Problem: What Law Firms Need to Know
April 8, 2026

A managing partner at a 200-attorney firm recently discovered that their entire M&A deal repository — confidential client agreements, due diligence files, pricing strategies — had been accessible to Claude and Microsoft Copilot for months. They never activated anything. They never gave explicit permission. It happened because an associate connected their Microsoft 365 account to an AI assistant to help draft client updates, and that assistant inherited access to everything the associate could see.
The managing partner’s question was the same one every law firm leader should be asking right now: how many firms don’t know this is already happening?
Shadow AI in law firms is not a future problem. It is not something that becomes relevant when your firm formally adopts an AI strategy. It is happening in your environment today — through the tools your associates and paralegals are already using, through the Microsoft 365 licenses your firm purchased, and through the agents those licenses enable without your explicit sign-off.
This post explains what shadow AI actually means for a law practice, how agents access your data without a formal deployment decision, why law firms carry specific and severe risk when this exposure exists, and what your data estate analysis reveals about the scope of the problem.
What Shadow AI in Law Firms Actually Means
Shadow AI is the use of artificial intelligence tools inside your organization without IT approval, security review, or management awareness. It is the enterprise equivalent of employees using personal Dropbox accounts before cloud storage was officially approved — except the data involved is client files, privilege-protected communications, and matter strategy.
The term covers two distinct but related problems, and law firms need to understand both.
The first is shadow AI tools. These are applications employees install and use without authorization — a paralegal using ChatGPT to summarize deposition transcripts, an associate using Claude to draft motion language, a partner using an AI writing assistant to prepare client communications. In each case, the employee is copying data out of the firm’s environment and sending it to a third-party system with no visibility, no governance, and no audit trail.
The second — and more insidious — is the shadow agent problem. This is what happens when an authorized AI tool, one your firm may have legitimately licensed, operates beyond the scope of what management understood it would do. When a Microsoft Copilot license is activated, it does not just give an attorney an AI writing assistant. It creates an agent with access to everything in that attorney’s Microsoft 365 environment: their email, their SharePoint files, their OneDrive documents, their Teams conversations. Every folder they can open. Every document they have permission to view.
The agent does not have judgment. It does not recognize that a document is client-confidential or that a folder contains M&A strategy that should never leave the deal team. It simply accesses what it has permission to access — and it uses that information to respond to whoever is asking.
Law firms are especially vulnerable to both dimensions of this problem for three reasons. First, they hold some of the highest-value data of any organization — client strategy, deal structures, litigation positions, and pricing arrangements that competitors, counterparties, and bad actors would pay significantly to access. Second, law firm cultures have historically prioritized billable work over technology governance, which means IT and security functions are often under-resourced relative to the sensitivity of the data they are protecting. Third, AI adoption in legal practice is accelerating. Associates and partners are using these tools because they work, because clients are asking about them, and because falling behind on AI capability feels like a competitive risk. The governance conversation is happening after the adoption, not before it.
How Agents Access Your Data Without Permission
The mechanism by which agents access law firm data is not a vulnerability in the traditional sense. There is no breach. No one exploited a flaw in your security architecture. The agent is accessing data through permissions that already exist — permissions that were created when your Microsoft 365 environment was set up, when attorneys were added to matters, when paralegals were granted access to shared drives.
Here is what actually happens when an attorney connects an AI assistant to their Microsoft 365 account:
The AI assistant — Claude, Copilot, or any agent built on either platform — authenticates using the attorney’s credentials. Once authenticated, it inherits the attorney’s permissions. Everything the attorney can access, the agent can access. If the attorney is a partner with broad permissions across the firm’s SharePoint environment, the agent has broad permissions across the firm’s SharePoint environment.
The attorney asks the agent to help them draft a client update. The agent pulls from the attorney’s recent emails, their open matters in SharePoint, their meeting notes in Teams, and their document library in OneDrive. It synthesizes a draft. This is the intended use case and it works as designed.
But the agent did not stop reading when it had enough information to write the email. It indexed everything it could access. It built a contextual model of the attorney’s work — their clients, their matters, their strategies, their relationships. That model is now available to answer whatever questions the attorney asks.
And if the attorney’s permissions include access to other matters — which is common in firms that have not implemented strict matter-level permissions — the agent’s context includes those matters too.
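The inheritance mechanism described above can be reduced to a minimal sketch. This is illustrative only (the class names, document names, and permission model are hypothetical, not a real Copilot or Microsoft Graph interface): the agent carries the user's identity, applies no judgment of its own, and indexes exactly what that identity is permitted to read.

```python
# Minimal sketch of delegated access (all names hypothetical): an agent
# operating with a user's credentials can read every item that user's
# permissions allow -- relevant to the task at hand or not.
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    allowed_users: set  # accounts permitted to read this document

@dataclass
class Agent:
    acting_as: str               # the user whose credentials the agent inherited
    index: list = field(default_factory=list)

    def crawl(self, documents):
        # The agent applies no judgment -- it indexes every document
        # the underlying user account can open, then draws on all of it.
        for doc in documents:
            if self.acting_as in doc.allowed_users:
                self.index.append(doc.name)
        return self.index

docs = [
    Document("client_update_draft.docx", {"associate"}),
    Document("ma_deal_strategy.xlsx", {"associate", "partner"}),  # associate can open it
    Document("unrelated_matter.pdf", {"partner"}),                # associate cannot
]

agent = Agent(acting_as="associate")
print(agent.crawl(docs))  # the M&A strategy file is indexed alongside the draft
```

The point of the sketch is the absence of any relevance check: nothing in the crawl asks whether a document belongs to the task the attorney is working on, only whether the account can open it.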
There is a common misconception worth addressing directly. Many law firm leaders assume that because data lives inside the firm’s Microsoft 365 environment, it is protected. The data is not leaving the environment, so surely it is safe. This is not how agents work. The agent does not need to export your data to expose it. It exposes data by accessing it within your environment and surfacing information to users who ask the right questions — including users who should not have had access to that information in the first place.
Permission inheritance is the specific mechanism that makes this dangerous. In most law firm Microsoft 365 environments, permissions are set at the site or folder level and inherited down the hierarchy. When an agent operates with the permissions of a partner who has broad access, it inherits all of those permissions — including access to folders the partner can technically open but rarely does, subfolders with broken inheritance that expose files the partner did not know they could access, and external sharing links created years ago that nobody remembered to revoke.
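A rough sketch of that inheritance resolution, under an assumed (not real SharePoint) data model: children inherit the parent's access list unless inheritance is broken, in which case the child's own list applies, sometimes granting wider access than the parent ever intended.

```python
# Hypothetical model of site/folder permission inheritance. A folder that
# "breaks inheritance" carries its own access list; every other folder
# inherits from its parent.
def effective_acl(folder, inherited=None):
    """Return {path: readers} for a folder tree, resolving inheritance."""
    if folder.get("breaks_inheritance"):
        acl = folder["acl"]                       # child's own list wins
    else:
        acl = inherited if inherited is not None else folder.get("acl")
    result = {folder["path"]: acl}
    for child in folder.get("children", []):
        result.update(effective_acl(child, acl))  # pass this level's list down
    return result

site = {
    "path": "/sites/deals", "acl": {"deal-team"},
    "children": [
        {"path": "/sites/deals/acme", "children": []},   # inherits deal-team
        {"path": "/sites/deals/archive",                 # inheritance broken long ago
         "breaks_inheritance": True, "acl": {"everyone"}, "children": []},
    ],
}

for path, readers in effective_acl(site).items():
    print(path, sorted(readers))
```

In this sketch the archive subfolder is readable by everyone even though its parent site is restricted to the deal team, which is exactly the shape of exposure an agent surfaces at scale.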
The Law Firm Risk — Privilege, Confidentiality, and Liability
Law firms operate under some of the most demanding confidentiality obligations of any professional service. Attorney-client privilege is not just an ethical requirement — it is the foundation of the client relationship and a legal protection that courts take seriously. Shadow AI creates specific, concrete risks to that obligation, and to the firm itself.
Client privilege violations: When an agent trained on one client’s matter files surfaces information in a response that draws on another client’s documents — even inadvertently — that is a potential privilege issue. The agent does not track matter boundaries. It uses whatever information it has access to in order to generate the most complete response. If an attorney’s permissions give the agent access to multiple client matters, the agent’s responses may cross-pollinate information in ways that no human reviewer would catch before it reached the client.
State bar exposure: Bar associations across the country are now actively examining how law firms are using AI. Several have issued guidance. Some have opened investigations. The specific questions regulators are asking focus on supervision of AI-generated work product, disclosure to clients when AI is used, and the security of data that AI tools access. A firm that cannot demonstrate governance over what AI tools are running in its environment and what data they can reach is in a materially weaker position in any regulatory inquiry.
Liability when confidential data is exposed: If a client’s confidential information is surfaced through an agent and reaches someone who should not have seen it — a different client, a counterparty, an employee outside the matter team — the firm’s defense cannot be “we did not know the agent had access.” Firms have a professional obligation to understand the technology operating in their environment. Ignorance is not a defense.
Competitive risk: Deal structures, pricing strategy, client lists, and relationship maps are the firm’s most strategically sensitive non-client data. Shadow AI creates a path by which this information can be accessed, surfaced, and potentially leaked — not necessarily through malicious intent, but through the mundane operation of an agent doing exactly what it was designed to do.
A Florida real estate practice we spoke with recently described the moment they realized their exposure. A paralegal had connected a productivity AI tool to their Microsoft 365 account to help manage scheduling and draft correspondence. The tool had indexed the firm’s shared drives as part of its setup. Nobody had authorized this. Nobody had reviewed what the tool could access. The firm discovered it only when the paralegal asked the tool a routine question and the response included information from a client file that had nothing to do with the question asked — because the tool had indexed everything it could reach.
They were fortunate the discovery happened internally.
You probably have more shadow AI exposure than you think. Find out in under an hour.
What Your Data Estate Analysis Reveals
The instinct when firms learn about shadow AI risk is to focus on the agents — shut down unauthorized tools, audit what AI is deployed, restrict access. These are reasonable responses. But they address the symptom rather than the root cause.
The root cause is that most law firms do not have a clear picture of their data estate: what data they have, where it lives, who can access it, and what that access structure looks like to an agent operating at scale.
You cannot build a safe agent environment on top of a data estate you do not understand. And you cannot understand your data estate until you look at it.
A data estate analysis for a law practice is not a theoretical exercise. It answers specific, concrete questions. Which SharePoint sites have broken permission inheritance? Which folders are open to everyone in the firm — attorneys, paralegals, receptionists, and contractors alike? Which OneDrive accounts have external sharing enabled with links that were created years ago and never revoked? Which users have access to matter files from clients whose matters closed two years ago?
These questions have answers. They are findable. But most firms have never looked for them because, before agents, the practical risk was limited. Humans with overprivileged access to a folder rarely read every file in that folder. They are too busy. Agents are not.
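The checks behind those questions can be sketched in a few lines. The field names and rules below are illustrative assumptions, not a real SharePoint or Graph API, but they show the shape of what an estate analysis flags: folders open to everyone, external links nobody revoked, and access that outlived the matter.

```python
# Hedged sketch of data-estate audit checks (illustrative fields, not a
# real SharePoint schema): flag open folders, stale external sharing
# links, and lingering access to closed matters.
from datetime import date, timedelta

def audit(items, today=date(2026, 4, 8)):
    findings = []
    for item in items:
        if "everyone" in item["readers"]:
            findings.append(("open-to-all", item["path"]))
        link_created = item.get("external_link_created")
        if link_created and today - link_created > timedelta(days=365):
            findings.append(("stale-external-link", item["path"]))
        if item.get("matter_closed") and item["readers"]:
            findings.append(("closed-matter-access", item["path"]))
    return findings

items = [
    {"path": "/shared/templates", "readers": {"everyone"}},
    {"path": "/clients/acme/diligence", "readers": {"deal-team"},
     "external_link_created": date(2022, 3, 1)},       # link never revoked
    {"path": "/clients/oldco", "readers": {"associate"},
     "matter_closed": True},                           # matter closed, access remains
]

for finding in audit(items):
    print(finding)
```

None of these checks is exotic. The findings exist in the permission metadata already; the analysis simply enumerates them and ranks what to fix first.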
What a data estate analysis reveals is not just a list of problems. It reveals a map of your actual permission structure as it exists today — not as it was designed, not as your IT team believes it to be, but as it actually operates when an agent begins reading everything it can access.
That map is the starting point for building safe agent capability. You cannot restrict agents from reaching data you do not know is accessible. You cannot govern what you have not yet seen.
The Exposure Snapshot is designed specifically to produce that map — quickly, non-invasively, and with a prioritized list of what to fix first. It takes under an hour and requires no changes to your environment. You simply see what is there.
Shadow AI is not a future concern for law firms. It is a present reality in most environments, operating through tools attorneys and paralegals are already using, accessing data that firms believe is protected, and creating privilege, compliance, and liability exposure that accumulates quietly until something goes wrong.
The first step is not a policy. It is not an AI governance committee. It is visibility. You cannot make good decisions about agents, or about the tools your people are already using, without knowing what your data estate actually looks like.
That is what the Exposure Snapshot delivers. Under an hour, no disruption to your environment, a clear picture of where your exposure lives and what to fix first.
Get your free Exposure Snapshot. See exactly what AI can already reach in your environment — before your clients ask the question you are not yet ready to answer.
Have Any Questions?
Call or email Cocha. We can help with your cybersecurity needs!
- (281) 607-0616
- [email protected]
About the Author:
Steve Combs
Co-Founder & Managing Director, Cocha Technology
Steve is a fractional CIO/CISO with 30+ years of enterprise IT and security leadership. He has built AI governance frameworks for organizations with 1,700+ users, led enterprise Microsoft Copilot deployments, and conducted security assessments across law firms, energy companies, financial institutions, and PE-backed manufacturers.
