Is Anyone Actually in Charge of AI at Your Firm?
Your firm is already using AI. The question is whether anyone is in charge of it.
Not in the theoretical, "we should probably look into this" sense. Right now, today, attorneys and paralegals on your team are feeding documents into AI tools, generating outputs, and acting on those outputs without a governance framework in place to manage what happens when something goes wrong. And something will go wrong. The only variable is whether your firm has a plan for it or whether you find out about the problem after the damage is done.
Legal operations has been quick to embrace AI as a productivity solution. That enthusiasm is understandable. The tools are impressive, the efficiency gains are real, and the competitive pressure to modernize is not going away. But enthusiasm without governance is not innovation. It is exposure.
Ask yourself the following questions and answer them honestly:
- Does your firm have a written policy that specifies which AI tools are approved for use and which are not?
- Do your attorneys and staff know exactly what categories of information cannot be entered into a general-purpose AI platform?
- Is there a designated person or team accountable for the quality and accuracy of AI-generated work product before it reaches a client or a court?
- Have you conducted a risk assessment of the AI tools currently in use across your firm?
- Do you know whether any of those tools operate as a black box, with no way to audit or verify how the output was produced?
If you answered no to most of those, you are not alone. But you are also not in a safe position.
This post is not about whether legal teams should use AI. They should, and many already are doing it well. This is about the part that most legal operations leaders are skipping, along with the professional, ethical, and operational consequences of that gap.
The Governance Gap Nobody Is Talking About
AI governance is one of the most actively discussed topics in technology, risk management, and regulatory circles right now. International standards bodies have published frameworks for it. The European Union has legislated around it. The National Institute of Standards and Technology has built a comprehensive risk management framework specifically addressing it. And yet, walk into most law firms or legal operations departments and ask who owns AI governance, and you will likely be met with a blank stare or a referral to the IT department.
That disconnect is the problem.
The legal industry has a long tradition of being cautious about adopting new technology. For years, that caution was treated as a liability, evidence that law firms were slow, resistant to change, and behind the curve. Then AI arrived, and something shifted. Suddenly firms were eager to demonstrate they were forward-thinking. AI tools were piloted, subscriptions were purchased, and press releases went out about innovation initiatives. What did not keep pace with any of that was the governance infrastructure needed to make AI adoption responsible.
The result is a sector that moved from being too slow to adopt technology to adopting it without the guardrails that make it sustainable.
This is not a criticism of any individual attorney or paralegal who reached for an AI tool to get their work done faster. That is a rational response to real workload pressure. The failure is at the organizational level, with legal operations leaders who have not yet treated AI governance as a core responsibility of their function.
The tools are in the building. The question is whether the framework is.
Right now, in firms across the country, employees are uploading client documents to general-purpose AI platforms without formal authorization. Attorneys are relying on AI-generated research without a structured review process to validate accuracy. Work product is being produced with AI assistance and delivered to clients with no disclosure and no audit trail. None of this is happening because people are careless. It is happening because no one has built the system that tells them what the boundaries are.
That system is AI governance, and in most legal operations environments it does not exist in any meaningful form.
Mistake #1: Client Data Going Into AI Tools With No Framework
Here is a scenario that is not hypothetical. An attorney needs to draft a motion. They are under time pressure, they have a tool available that can help, and so they upload the relevant case documents into a general-purpose AI platform and ask it to generate a first draft. The output is useful, the motion gets filed, and the attorney moves on to the next matter. Nobody flags it. Nobody reviews what just happened from a data governance perspective. And the client whose confidential information was just fed into a third-party AI system has no idea it occurred.
This is happening. And it carries serious professional responsibility consequences that most legal operations teams are not accounting for.
The ABA Model Rules of Professional Conduct are not ambiguous on the question of client confidentiality. Rule 1.6 requires attorneys to make reasonable efforts to prevent the unauthorized disclosure of client information. Rule 1.1 requires competence, and the comments to that rule have been updated to make clear that competence includes understanding the benefits and risks of relevant technology. Rule 5.3 places supervisory responsibility on attorneys for the conduct of non-attorney staff, which means that when a paralegal uploads a client file to an unauthorized AI tool, the supervising attorney carries exposure for that decision.
ABA Formal Opinion 512, issued in July 2024, addressed generative AI directly. It reinforced that attorneys have an obligation to understand how the AI tools they use handle data, including whether inputs are stored, used for model training, or accessible to third parties. Using a general-purpose AI platform that retains user inputs without obtaining informed client consent is not a gray area under that opinion. It is a compliance failure.
The risk does not stop at professional responsibility. Many general-purpose AI tools, including some of the most widely used platforms, incorporate user inputs into their training processes unless specific enterprise agreements are in place to prevent it. That means confidential client information could become part of the data that trains a model accessible to other users. The legal industry's obligations around privilege and confidentiality make that a significant exposure, not just an abstract technical concern.
What makes this mistake so persistent is that it does not feel like a mistake in the moment. The attorney or paralegal using the tool is trying to do their job well. They are looking for efficiency and accuracy. The problem is not their intent. The problem is that no one in their organization has told them which tools are approved, what information cannot be entered into those tools, or what the consequences are when those boundaries are crossed. That is a governance failure, and it sits with legal operations leadership, not with the individual employee reaching for a useful tool under deadline pressure.
Building a compliant AI use framework starts with a straightforward set of decisions that most firms have not yet made. Which AI tools are approved for use in this organization? What categories of information are prohibited from being entered into any AI platform? What client disclosure obligations apply when AI is used in the preparation of work product? Who is accountable for ensuring compliance with those standards across the firm?
Mistake #2: Automation Complacency
An attorney needed to calculate court deadlines for an active case. Rather than working through the applicable rules manually, they used a general-purpose AI tool to run the calculations. The dates came back quickly, they looked right, and the attorney moved forward. The problem was that the dates were wrong. Deadlines were missed. The judge in that case was understanding and no formal sanctions followed, but the margin between that outcome and a significantly worse one was not skill or preparation. It was luck.
That story is not an indictment of AI tools. It is an indictment of how legal professionals are being allowed to use them without a framework that accounts for their limitations.
One of the most consistent and well-documented failure modes of large language models is hallucination. This is the tendency of AI systems to generate outputs that are confident, coherent, and incorrect. The problem is particularly acute in legal contexts because legal work depends on precision. A deadline that is off by one day is not approximately correct. A case citation that does not exist is not close enough. A summary of a witness interview that mischaracterizes what was said is not a minor drafting issue. In each of these scenarios, the output looks authoritative and complete, and that appearance of authority is exactly what makes the error dangerous.
What compounds this problem is a shift in behavior that tends to follow AI adoption. When people begin using AI tools regularly, they naturally start to reduce the level of scrutiny they apply to the outputs. This is not laziness. It is a psychological response to delegation. When you hand a task to someone or something, you instinctively begin to trust that the task has been handled. The review that follows becomes more cursory over time, because the assumption is that the tool is doing its job.
In legal operations, that assumption is professionally indefensible.
ABA Rule 1.1 requires competent representation, and the comments make clear that this includes understanding the limitations of the technology being used in legal work. An attorney who relies on an AI-generated output without applying independent professional judgment to verify its accuracy is not meeting that standard, regardless of how sophisticated the tool is. The existence of an AI tool does not transfer professional responsibility to the software. It remains with the attorney.
This is also addressed in ABA Formal Opinion 512, which is explicit that attorneys must review AI-generated work product before it is used or delivered. Supervision is not optional. The opinion does not carve out exceptions for time pressure or workload. If the output is being used in connection with a client matter, a human with the appropriate professional judgment must have reviewed it.
A second scenario illustrates a version of this that is particularly easy to miss. A paralegal uses an AI tool to summarize a witness interview. The summary is submitted to the supervising attorney. The attorney, trusting the process, does not read it carefully against the original interview notes. The summary mischaracterizes a key statement the witness made. That summary then informs decisions about how the case is prepared or tried. The error does not announce itself. It quietly shapes the work downstream until something surfaces it, and by then the consequences may already be in motion.
Under ABA Rule 5.3, the supervising attorney is responsible for ensuring that non-attorney staff work complies with the Rules of Professional Conduct. A paralegal over-relying on AI tools without adequate supervision is not just a workflow problem. It is a professional responsibility problem for the attorney above them in the chain.
Legal operations leaders need to treat AI review not as a formality but as a structured step in the workflow. That means building verification checkpoints into any process that incorporates AI-generated output. It means training staff on what hallucinations look like and how to catch them. It means establishing the expectation clearly and repeatedly that AI produces a starting point, not a finished product, and that the human review step is not optional or cursory. It is where professional judgment lives. The efficiency gains from AI are real and worth pursuing. But efficiency that comes at the cost of accuracy in legal work is not a gain. It is a liability that has not been collected yet.
Mistake #3: No Accountability Structure
When an AI tool produces an output that causes harm, one of the first questions that will be asked is who was responsible for that output. In most legal operations environments right now, there is no clear answer to that question. That is not a gap in understanding. It is a structural failure with real consequences.
The accountability problem in legal AI adoption has two dimensions. The first is internal. The second is technical. Together, they create a situation where significant decisions are being made with AI assistance and nobody owns what happens as a result.
On the internal side, most firms that have adopted AI tools have not established clear lines of responsibility for how those tools are used and what happens when they produce bad results. There is no designated owner of AI governance. There is no review committee evaluating which tools are approved. There is no escalation path when an attorney or paralegal suspects that an AI output is inaccurate or inappropriate. Individual employees are making judgment calls about AI use in isolation, without policy guidance, without training, and without a structure that tells them who to go to when something does not look right.
This is the environment in which the costly mistakes happen. Not because anyone intended to act negligently, but because the organization never built the system that would have caught the problem before it became one.
The technical dimension of this problem is equally serious and far less understood in legal circles. A significant number of AI tools currently in use, or being evaluated for use, in legal environments operate on what is commonly referred to as a black box model. This means that the internal reasoning process the tool uses to arrive at an output is not visible or auditable by the user. You see the input you provided and the output you received. You do not see how the system moved from one to the other, what data it weighted, what sources it drew from, or what it may have gotten wrong along the way.
For legal work, this is a profound problem. Legal reasoning depends on being able to trace a conclusion back to its source. An attorney advising a client or arguing before a court needs to know not just what the answer is but why it is the answer and where it comes from. A black box tool cannot provide that. It can produce an output that looks like legal reasoning without providing any mechanism for verifying that the reasoning is sound.
When you combine a black box tool with an organization that has no accountability structure around AI use, you have a situation where consequential legal work is being shaped by a process that nobody can fully explain or audit and that nobody has been designated to own. That is an untenable position for any firm with professional responsibility obligations.
This is one of the areas where established governance frameworks from outside the legal industry offer direct and practical guidance. The NIST AI Risk Management Framework provides a structured approach to identifying, measuring, and managing the risks associated with AI systems. It is organized around four core functions: govern, map, measure, and manage. Applied to a law firm context, those functions translate into establishing organizational accountability for AI, identifying where AI is being used and what risks those uses carry, evaluating how well those risks are being controlled, and taking action when the controls are insufficient.
ISO/IEC 42001, the international standard for AI management systems, takes a similar approach and is specifically designed to help organizations build internal structures that ensure AI is used responsibly and with appropriate oversight. Certification against the standard is increasingly being used as a signal of AI governance maturity in enterprise environments, and legal operations leaders would benefit from understanding its requirements even if formal certification is not an immediate goal.
The EU AI Act, which is now in force and beginning its phased implementation, introduces a risk-based regulatory framework for AI systems operating in or affecting the European Union. It classifies AI applications by risk level and imposes obligations that scale with that classification. While its direct applicability to US-based law firms depends on their client base and jurisdictional reach, the framework it establishes represents the direction that AI regulation is moving globally. Legal operations leaders who understand it now will not be scrambling to catch up when comparable frameworks arrive in US jurisdictions, and there is every reason to expect that they will.
What Good Governance Actually Looks Like
Governance sounds like bureaucracy. In practice, it is the difference between an organization that uses AI confidently and one that uses it nervously and hopes nothing goes wrong. For legal operations teams that want to be in the first category, the path forward is not complicated, but it does require intentional action.
Here is what a functional AI governance framework looks like in a legal operations context.
Start with an inventory.
Before you can govern AI use, you need to know what AI is actually being used in your organization. This is harder than it sounds. Attorneys and staff are not always transparent about the tools they reach for under deadline pressure, and many AI capabilities are now embedded in platforms that legal teams already use, including document management systems, research tools, and productivity software. A governance framework that only accounts for the tools leadership approved misses everything happening in the gaps. The inventory process should identify every AI tool in active use across the firm, who is using it, what it is being used for, and what data is being entered into it. That baseline is the foundation everything else builds on.
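To make that concrete, here is a minimal sketch of what one inventory record might look like, written in Python purely for illustration. The field names, the DataSensitivity categories, and the example entries are assumptions invented for this sketch, not a standard schema; the point is that every tool gets the same structured questions asked of it, and that the riskiest gaps surface automatically.

```python
from dataclasses import dataclass
from enum import Enum


class DataSensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CLIENT_CONFIDENTIAL = "client_confidential"
    PRIVILEGED = "privileged"


@dataclass
class AIToolRecord:
    """One row in a firm-wide AI tool inventory (illustrative fields)."""
    tool_name: str
    vendor: str
    users: list[str]                  # practice groups or roles using it
    use_cases: list[str]              # e.g. "first drafts", "deadline research"
    data_entered: DataSensitivity     # highest sensitivity of data going in
    embedded_in: str | None = None    # host platform, if the AI is a feature
    enterprise_agreement: bool = False


inventory = [
    AIToolRecord(
        tool_name="General-purpose chat assistant",
        vendor="(consumer account)",
        users=["litigation paralegals"],
        use_cases=["summarizing witness interviews"],
        data_entered=DataSensitivity.CLIENT_CONFIDENTIAL,
    ),
    AIToolRecord(
        tool_name="Embedded drafting assistant",
        vendor="Document management vendor",
        users=["corporate group"],
        use_cases=["contract first drafts"],
        data_entered=DataSensitivity.CLIENT_CONFIDENTIAL,
        embedded_in="Document management system",
        enterprise_agreement=True,
    ),
]

# The highest-risk gap: sensitive data flowing into tools with no agreement
gaps = [t for t in inventory
        if t.data_entered in (DataSensitivity.CLIENT_CONFIDENTIAL,
                              DataSensitivity.PRIVILEGED)
        and not t.enterprise_agreement]
for t in gaps:
    print(f"Review immediately: {t.tool_name} ({', '.join(t.use_cases)})")
```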
Establish an approved tool list with documented criteria.
Once you know what is being used, you need to make deliberate decisions about what should be used. Not every AI tool that is technically available is appropriate for legal work. The criteria for approval should include at minimum how the tool handles user inputs and whether those inputs are stored or used for model training, whether the tool has an enterprise agreement that provides appropriate data protections, whether the tool's outputs are auditable or whether it operates as a black box, and whether the vendor's security and privacy practices meet the standards your firm's clients expect. Tools that do not meet those criteria should be explicitly prohibited, and that prohibition needs to be communicated clearly and enforced.
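Encoding those criteria, even informally, makes every approval decision repeatable and documented. A hedged sketch, assuming a simple pass/fail gate on the four criteria named above; the field names are illustrative, and a real evaluation would weigh evidence rather than booleans:

```python
from dataclasses import dataclass


@dataclass
class ApprovalCriteria:
    """The minimum questions answered before a tool goes on the approved list."""
    inputs_excluded_from_training: bool   # vendor confirms inputs do not train models
    enterprise_data_protections: bool     # enterprise agreement in place
    outputs_auditable: bool               # not a pure black box
    vendor_security_reviewed: bool        # security and privacy practices vetted


def approval_decision(tool_name: str, criteria: ApprovalCriteria) -> str:
    failures = [name for name, passed in vars(criteria).items() if not passed]
    if failures:
        # Failed criteria become the documented reason for prohibition
        return f"{tool_name}: PROHIBITED (failed: {', '.join(failures)})"
    return f"{tool_name}: APPROVED"


print(approval_decision("Consumer chat assistant",
                        ApprovalCriteria(False, False, False, False)))
print(approval_decision("Enterprise research tool",
                        ApprovalCriteria(True, True, True, True)))
```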
Define what information can and cannot go into AI tools.
This is one of the most critical and most neglected elements of legal AI governance. Attorneys and staff need explicit guidance on what categories of information are off limits for AI tools, particularly general-purpose platforms that do not have enterprise-grade data protections in place. At minimum, that guidance should prohibit entry of personally identifiable client information, privileged communications, confidential case strategy, and any information subject to a protective order or confidentiality agreement into any AI platform that has not been specifically approved for that category of data. Telling people to use good judgment is not a policy. Telling them that no client name, case number, or document produced in litigation may be entered into a general-purpose AI platform is a policy.
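A bright-line rule like that can also be made partially operational. The sketch below is a deliberately naive Python screening check, with a hypothetical client list and a federal-style case number pattern invented for the example. It is no substitute for training or enterprise controls, but it shows how a concrete prohibition can be enforced in a way that "use good judgment" cannot.

```python
import re

# Illustrative bright-line rules; real screening would draw client names and
# matter numbers from the firm's records system, not a hard-coded list.
KNOWN_CLIENT_NAMES = {"Acme Holdings", "Example Corp"}   # hypothetical entries
CASE_NUMBER_PATTERN = re.compile(r"\b\d{1,2}:\d{2}-[a-z]{2}-\d{3,5}\b", re.I)


def screen_before_submission(text: str) -> list[str]:
    """Return reasons a prompt may not go into a general-purpose AI platform."""
    violations = []
    if CASE_NUMBER_PATTERN.search(text):
        violations.append("contains a federal-style case number")
    for name in KNOWN_CLIENT_NAMES:
        if name.lower() in text.lower():
            violations.append(f"names client '{name}'")
    return violations


prompt = "Summarize the deposition in Acme Holdings, No. 2:24-cv-01234."
for reason in screen_before_submission(prompt):
    print("BLOCKED:", reason)
```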
Build human review into every AI-assisted workflow.
Governance is not just about what tools people use. It is about how they use them. Any workflow that incorporates AI-generated output needs a structured human review step before that output is acted on, delivered to a client, or filed with a court. For research outputs, verification means checking citations against primary sources and confirming that the legal propositions stated in the output are accurate. For document drafts, it means reading the output against the underlying facts and instructions rather than assuming the tool captured them correctly. For deadline calculations, it means independently confirming dates against the applicable rules rather than accepting an AI-generated answer at face value.
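One way to make that review step structural rather than aspirational is to refuse to release anything AI-assisted without a recorded sign-off. A minimal sketch, assuming an in-memory record and two required checks; a real system would persist these records as an audit trail:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIAssistedOutput:
    """An AI-assisted deliverable that cannot be released without sign-off."""
    description: str
    tool_used: str
    reviewed_by: str | None = None
    reviewed_at: datetime | None = None
    checks_completed: list[str] = field(default_factory=list)

    # The checks a reviewer must attest to before release (illustrative)
    REQUIRED_CHECKS = frozenset({
        "citations verified against primary sources",
        "facts checked against source documents",
    })

    def record_review(self, reviewer: str, checks: list[str]) -> None:
        self.reviewed_by = reviewer
        self.reviewed_at = datetime.now(timezone.utc)
        self.checks_completed = checks

    def release(self) -> None:
        # The gate: nothing leaves the firm without a complete human review
        if self.reviewed_by is None or not self.REQUIRED_CHECKS.issubset(
                self.checks_completed):
            raise RuntimeError(
                f"Cannot release '{self.description}': review incomplete")
        print(f"Released '{self.description}', reviewed by {self.reviewed_by}")


memo = AIAssistedOutput("Research memo, hypothetical matter",
                        "Approved research tool")
memo.record_review("Supervising attorney", [
    "citations verified against primary sources",
    "facts checked against source documents",
])
memo.release()
```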
Designate accountability.
Every AI governance framework requires a human owner. Someone in the organization needs to be responsible for maintaining the approved tool list, receiving and investigating reports of AI-related issues, ensuring that policies are current as the technology evolves, and serving as the escalation point when attorneys or staff have questions about appropriate AI use. In larger firms, this may be a dedicated role or a committee. In smaller environments, it may be an existing legal operations leader taking on expanded responsibility. What it cannot be is nobody. An accountability structure with no designated owner is not an accountability structure. It is a placeholder for a decision that has not been made.
Conduct regular risk assessments.
AI governance is not a one-time project. The tools are evolving rapidly, the regulatory environment is changing, and the ways your firm uses AI will expand over time. A governance framework that was adequate six months ago may have meaningful gaps today. Regular risk assessments, conducted at least annually and whenever a significant new AI tool or use case is introduced, should evaluate whether current policies are adequate, whether approved tools continue to meet the criteria established for their approval, whether any new risks have emerged that the existing framework does not address, and whether staff training is keeping pace with how AI use is actually evolving in the firm.
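The cadence and the four questions above are simple enough to track explicitly. A sketch, assuming an annual cycle plus event-driven reviews whenever a significant new tool or use case appears; the interval and question wording are placeholders, not prescriptions:

```python
from dataclasses import dataclass
from datetime import date, timedelta

ASSESSMENT_INTERVAL = timedelta(days=365)   # at least annually

ASSESSMENT_QUESTIONS = [
    "Are current policies still adequate?",
    "Do approved tools still meet their approval criteria?",
    "Have new risks emerged that the framework does not address?",
    "Is staff training keeping pace with actual AI use?",
]


@dataclass
class RiskAssessment:
    completed_on: date
    findings: dict[str, str]   # question -> what the assessment found

    def next_due(self) -> date:
        return self.completed_on + ASSESSMENT_INTERVAL


def assessment_needed(last: RiskAssessment | None,
                      new_tool_or_use_case: bool,
                      today: date) -> bool:
    # Triggered by the calendar or by any significant new tool or use case
    return new_tool_or_use_case or last is None or today >= last.next_due()
```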
AI Is Not the Risk. Ungoverned AI Is.
The legal industry is not going to stop using AI. That decision has already been made, not in any boardroom or bar association meeting, but in the daily choices of attorneys and paralegals who are reaching for tools that make their work faster and more manageable. The question that remains open is whether the organizations they work for are going to build the structures that make that use responsible.
The mistakes covered in this post are not the result of bad intentions. They are the result of an industry that moved quickly to adopt a powerful technology without moving equally quickly to govern it. Client data is going into platforms it should never touch. Work product is being accepted from AI tools without the review those tools require. And when something goes wrong, there is no clear answer to the question of who was accountable for making sure it did not.
The frameworks exist. The NIST AI Risk Management Framework gives organizations a structured methodology for identifying and managing AI risk. ISO 42001 provides a governance standard built specifically for responsible AI management. The EU AI Act establishes a regulatory model that reflects where global AI oversight is heading. The ABA has published explicit guidance through Formal Opinion 512 and the existing Model Rules that make clear what professional responsibility requires of attorneys using these tools. None of this is new information. What is missing in most legal operations environments is the organizational will to act on it.
Legal operations as a function exists to bring structure, efficiency, and accountability to the delivery of legal services. AI governance is not a detour from that mission. It is an expression of it. The legal ops leader who builds a functional AI governance framework is not slowing their firm down. They are building the foundation that allows their firm to use AI confidently, sustainably, and without the exposure that comes from hoping nothing goes wrong.
The firms that treat AI governance as an operational priority now will be the ones that are not scrambling to respond when a client data incident surfaces, when a bar complaint gets filed, or when regulatory frameworks arrive in US jurisdictions and require documentation of governance practices that do not yet exist.
The tools are already in the building. The only real question left is whether you are going to be the person who built the framework around them, or the person who explains to a client why you did not.