By Nicholas Lieberknecht
July 14, 2025 - Generative artificial intelligence (AI) and the legal profession are rapidly colliding, creating malpractice risk and insurance-coverage gaps.
As both general-purpose tools (ChatGPT/Claude/Gemini) and bespoke AI tools for legal practitioners (CoCounsel/Harvey/Lexis+ AI) become more common, navigating the associated risks is essential. What was an innovative curiosity for early adopters just a few years ago is transitioning to widespread use.
According to a recent American Bar Association poll (the 2024 Legal Technology Survey), 45.3% of responding lawyers predict AI will be mainstream within the next three years, joining the 12.8% of respondents who think AI is already widespread. In other words, the use of AI may already be becoming part of the standard of care for attorneys.
Lawyers have a professional duty to stay current on relevant technologies, including both the benefits and the risks of using AI. This article focuses on the risk side of that balance, malpractice and insurance coverage, with the goal of helping attorneys unlock these valuable tools.
The legal profession is generating a growing body of cautionary examples involving AI use. Lawyers are integrating AI in drafting discovery requests, interrogatory responses, contracts, and pleadings. Summarizing transcripts and conducting research are other areas in which the use of AI is common. The speed and convenience of AI are undeniable, but so is the potential for undetected errors, missed facts and misapplied law.
AI doesn't cause malpractice; how a lawyer utilizes it does. Here are several examples:
In Mata v. Avianca (S.D.N.Y. 2023), an attorney submitted a brief containing case citations fabricated by ChatGPT without confirming that the cited cases existed. That reliance breached the duty of competence, and the court applied existing ethical rules to sanction the attorney.
In response to examples like this one, many courts have created rules that require the disclosure of the use of generative AI in filings. Likewise, many law firms have policies in place that require a junior attorney to disclose the use of AI to the partner or supervising attorney.
Uploading confidential client data into a public AI tool without understanding data retention policies may violate Rule 1.6 or equivalent rules for protecting confidentiality in other jurisdictions. Such use may result in the retention of data that is shared with others or used for future training of models or violate privacy laws.
Additionally, California's rules interpret the duty of technological competence to include knowledge of how data is stored and processed. This is a risk that may be mitigated with AI tools designed for legal professionals that retain confidential information locally.
Professional rules require attorneys to supervise a subordinate's work, and this principle applies equally to the use of technology such as AI tools. While delegation to a nonlawyer is permissible, the attorney's responsibility for oversight persists.
Client-facing bots that provide legal advice without intervening attorney oversight could constitute the unauthorized practice of law and lead to claims.
To help avoid common pitfalls involving the use of AI, here are some suggested practices to manage risks:
• Internal policies: Define which tools are approved and for which tasks, with due regard for confidentiality.
• Verify: Confirm the accuracy of AI-assisted outputs.
• Protect confidentiality: Use tools with clear data-governance policies and follow client service-level agreements.
• Educate and document: Train those you supervise and document when and how AI was used in work product.
• Disclose if needed: Consider disclosing AI use in filings and client work product where its role is material, and be sure to follow court rules that require disclosure.
As generative AI tools increasingly are part of legal practice, attorneys should be mindful of potential insurance-coverage implications.
Lawyers' professional-liability (LPL) policies covering claims of alleged negligence typically do not exclude AI use. But coverage may depend on whether the conduct at issue meets the policy's definition of "professional services."
An attorney who submits AI-generated work product without reviewing it could lose coverage if an insurer successfully argues that the activity does not rise to the level of legal work.
An AI-related claim could implicate various exclusions common to LPL policies. For example, an attorney who deliberately fails to verify fake citations generated by AI could risk triggering a policy's intentional-acts exclusion.
Similarly, a contractual-liability exclusion could bar coverage arising from an attorney's breach of a service-level agreement in the absence of a covered tort claim. Additionally, some LPL policies exclude technology-related failures.
Using client data in an AI tool could qualify as a breach of confidentiality. If the client data happens to include personally identifiable information and/or protected health information, this could also qualify as a data breach under various state breach notification laws. Such an incident may trigger cyber coverage, but also could be subject to various coverage exclusions or limitations, including the intentional-acts exclusion.
With coverage under an LPL policy for AI risks neither explicitly included nor excluded, novel coverage situations could arise. This so-called "silent AI" risk means that insurers may be unintentionally covering, or not covering, AI-related activity that was never contemplated in the underwriting process. These situations may result in a lack of coverage when it is most needed, or in inadvertent exposure for insurers, leading to costly coverage disputes.
Insurers are beginning to introduce exclusions related to AI use in certain lines of coverage. Attorneys should review these and work with brokers to navigate gaps in coverage, which may lead to stand-alone policies affirmatively providing coverage for AI-related risks.
In this rapidly evolving area of legal practice, managing the risk of AI is key to unlocking its benefits. Understanding the coverage implications and proactively closing coverage gaps is essential for protecting your legal practice.
Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.
Atheria Law partner Nicholas Lieberknecht counsels insurers in connection with professional liability policies issued to design professionals. He is based in San Francisco and can be reached at Nicholas.Lieberknecht@atherialaw.com.