Agentic AI systems — those that do not merely recommend but act — are now in production at Fortune 500 companies. They are negotiating vendor terms, executing trades, adjusting pricing, and making hiring decisions. The legal and governance community has been slow to catch up. But the question that corporate boards can no longer defer is now squarely on the table: when an AI agent causes harm, who is liable?

The answer, under every current legal framework, is the same: the humans responsible for deploying, overseeing, and governing that system. And in the corporate context, that means the board.

The Three Deployment Categories That Define Liability Exposure

Not all AI creates equivalent board exposure. Governance frameworks are increasingly distinguishing between three deployment categories, each with materially different fiduciary implications.

Assistive AI provides analysis, surfaces insights, and generates recommendations — but leaves decision authority entirely with humans. Board exposure here is limited to the accuracy of the inputs and the quality of the oversight structure. This is where most enterprise AI was deployed two years ago.

Agentic AI narrows options, prioritizes risks, generates recommendations, and, in many implementations, triggers pre-authorized actions without human review of each transaction. This is where most leading enterprises are deploying today. The fiduciary exposure is substantially higher because the human is ratifying a decision framework, not reviewing individual decisions.

Autonomous AI operates without meaningful human involvement in individual decisions. No major corporate board should be authorizing autonomous AI deployment in consequential domains without explicit governance architecture — and yet deployment is outpacing governance in precisely this category.

The critical governance insight: most boards are approving AI strategies without specifying which deployment category applies to which use case. That omission is itself a governance failure.
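One way to make the category-to-use-case mapping explicit is to record it in the AI use-case register itself. The sketch below is purely illustrative; the names (`DeploymentCategory`, `AIUseCase`, `requires_explicit_board_authorization`) are hypothetical, not drawn from any standard, and the authorization rule shown simply encodes the principle above that autonomous deployments demand explicit board-level sign-off.

```python
from dataclasses import dataclass
from enum import Enum


class DeploymentCategory(Enum):
    """The three deployment categories, each with different fiduciary exposure."""
    ASSISTIVE = "assistive"    # humans retain full decision authority
    AGENTIC = "agentic"        # pre-authorized actions under a ratified framework
    AUTONOMOUS = "autonomous"  # no meaningful human involvement per decision


@dataclass
class AIUseCase:
    name: str
    category: DeploymentCategory
    business_domain: str


def requires_explicit_board_authorization(use_case: AIUseCase) -> bool:
    """Flag autonomous deployments, which should not proceed without
    an explicit, board-chartered governance architecture."""
    return use_case.category is DeploymentCategory.AUTONOMOUS


pricing_agent = AIUseCase("dynamic-pricing", DeploymentCategory.AGENTIC, "pricing")
trade_bot = AIUseCase("auto-execution", DeploymentCategory.AUTONOMOUS, "trading")

print(requires_explicit_board_authorization(pricing_agent))  # False
print(requires_explicit_board_authorization(trade_bot))      # True
```

Forcing every approved use case through a classification like this prevents the omission the paragraph above identifies: an AI strategy approved without a stated category per use case.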

How Fiduciary Duty Has Evolved in the Age of AI

The duty of care has always required directors to inform themselves of material risks. The duty of loyalty has always required directors to act in good faith to establish reporting systems for those risks. Neither duty contains an exception for technological complexity.

What has changed is the specificity of what “informed” now requires. The EU AI Act has effectively established a global governance standard that is migrating into judicial interpretation of director obligations in non-EU jurisdictions. Two new fiduciary concepts are emerging in legal scholarship and early case law: AI due care — the obligation to develop technologically literate oversight of algorithmic systems — and AI loyalty oversight — the obligation to ensure AI systems are governed in alignment with stakeholder interests, not merely management convenience.

WilmerHale’s 2026 governance outlook is explicit: the era of passive awareness is over. Directors who cannot demonstrate active, structured AI oversight are not meeting the duty of care standard as it is now being interpreted.

The Indemnification Gap That Most Boards Have Not Assessed

D&O insurance policies written before 2024 were not designed with agentic AI liability in mind. As AI governance scholars and insurance professionals have begun to document, there is a material gap between the AI-related liabilities corporate directors are now assuming and the D&O coverage structures designed to protect them.

The gap is specific: policies that were written to cover errors in human judgment may not cover claims that a board failed to establish adequate AI oversight — particularly where the failure is characterized as a systemic governance deficiency rather than an individual decision error. Boards that have not reviewed their D&O coverage in light of AI deployment are carrying uncovered fiduciary exposure they may not know exists.

This is not a theoretical concern. The first D&O claims directly citing AI governance failures are already in the litigation pipeline. Boards that act now have the opportunity to structure their governance architecture before those claims establish adverse precedent.

What Defensible AI Governance Looks Like in Practice

Four governance structures separate boards with defensible AI oversight from those without it.

First, a formal AI oversight mandate — explicitly chartered, not inferred from general risk committee authority. Second, a structured AI risk report presented at every board meeting against defined metrics, not at management’s discretion. Third, an AI deployment inventory — a documented register of all AI systems in production, their deployment category (assistive, agentic, or autonomous), and their access to material data. Fourth, documented director competency — formal training records that demonstrate each director has the baseline AI literacy to exercise informed oversight.

Boards that have all four structures in place can demonstrate a defensible governance posture. Boards that cannot produce documentation for any one of these four elements are operating with material governance gaps that regulators, investors, and plaintiffs’ counsel will eventually identify.
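The four-element test lends itself to a simple documentation checklist. The following is a minimal sketch under assumed names (`GovernanceRecord`, `governance_gaps`, `is_defensible` are illustrative, not an established framework): a posture is defensible only when all four structures are documented, and any missing element is reported as a gap.

```python
from dataclasses import dataclass, field

# The four governance structures described above.
GOVERNANCE_ELEMENTS = (
    "oversight_mandate",       # formal, explicitly chartered AI oversight mandate
    "structured_risk_report",  # AI risk report at every meeting, defined metrics
    "deployment_inventory",    # register of systems, category, and data access
    "director_competency",     # formal AI-literacy training records per director
)


@dataclass
class GovernanceRecord:
    board: str
    documented: set = field(default_factory=set)

    def governance_gaps(self) -> list:
        """Elements the board cannot document -- each one a material gap."""
        return [e for e in GOVERNANCE_ELEMENTS if e not in self.documented]

    def is_defensible(self) -> bool:
        """Defensible posture requires documentation for all four elements."""
        return not self.governance_gaps()


board = GovernanceRecord(
    "ExampleCo",
    documented={"oversight_mandate", "structured_risk_report", "deployment_inventory"},
)
print(board.governance_gaps())  # ['director_competency']
print(board.is_defensible())    # False
```

The all-or-nothing check mirrors the claim in the text: a gap in even one of the four elements is material, so partial credit is deliberately not awarded.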

The Board’s Non-Delegable Obligation

AI can inform and amplify board judgment. It cannot hold fiduciary duty. The fundamental principle that emerges from every legal framework, governance standard, and regulatory guidance document released in the past 18 months is consistent: fiduciary responsibility remains firmly and non-delegably human. Directors who delegate AI oversight to management committees without establishing board-level accountability structures have not governed AI; they have abdicated governance of it. The liability exposure that follows belongs to the board, not the committee.
