Executive Summary

ABA Rules: Professional Responsibility Obligations
ABA Rule 1.1: Competence

Lawyers must maintain the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation, including competence with relevant technology. Comment 8 specifically addresses the duty to keep abreast of changes in technology.


ABA Rule 1.6: Confidentiality of Information

Lawyers must make reasonable efforts to prevent the unauthorized disclosure or use of client information. AI tools that process client data create disclosure risks that did not exist with traditional software. Reasonable precautions are required.


ABA Rule 5.3: Supervision of Nonlawyer Assistance

Supervising attorneys are professionally responsible for work produced with AI assistance. AI tools are treated as nonlawyer assistance under Rule 5.3, meaning lawyers must have appropriate oversight systems in place for AI-generated work product.


ABA Formal Opinion 512: Generative AI in Legal Practice

ABA Formal Opinion 512 (2024) applies Rules 1.1, 1.6, and 5.3 specifically to generative AI tools. It addresses competence, confidentiality, supervision, candor, and fees in the context of large language models and other generative AI systems used in legal work.

NIST AI Risk Management Framework (AI RMF)
Core Function — GOVERN: Policies, Roles, and Accountability

The GOVERN function establishes the organizational structures, policies, and culture needed to manage AI risk on an ongoing basis. Without GOVERN, the other three functions lack a foundation to operate from.


Core Function — MAP: Identifying and Categorizing AI Risks

The MAP function requires organizations to identify, categorize, and understand the AI risks specific to their context. In legal environments, this means knowing what tools are in use, what they touch, and where the stakes are highest.


Core Function — MEASURE: Assessing and Monitoring AI Performance

The MEASURE function requires ongoing assessment of whether AI tools are performing within acceptable parameters. In legal practice, this means tracking errors, defining performance standards, and creating mechanisms to surface problems before they become client issues.


Core Function — MANAGE: Responding to and Mitigating AI Risks

The MANAGE function addresses what happens when AI risk materializes. Legal environments require clear response protocols, the ability to quickly contain a failing tool, and a feedback loop that improves policy over time.