The 36% Problem: Why Most Boards Are Failing Their Fiduciary Duty on AI

Category: Sector Intelligence  |  White Paper Alignment: The Board’s Fiduciary Obligation in AI Governance

Published: February 21, 2026  |  Touch Stone Publishers

Sixty-four percent of corporate boards have no formal AI governance framework. That number isn’t a survey curiosity—it’s a liability map. Under Delaware’s Caremark doctrine, directors who fail to implement oversight systems for material risks face personal exposure. Courts have already applied that standard to cybersecurity failures. The extension to artificial intelligence is not a question of if, but when—and the legal scholarship suggests the clock is already running.

The Core Thesis

The majority of corporate boards face material fiduciary exposure because they have not implemented formal AI governance frameworks, despite rapidly accelerating legal obligations, regulatory requirements, and competitive pressures that make AI oversight a board-level imperative.

The executive question is blunt: if your board lacks a formal AI governance framework—and statistically, it probably does—what is your personal liability exposure as a director under Caremark?

I. The Governance Gap Is Measurable and Severe

The data leaves little room for ambiguity. According to the NACD Board Practices Survey, cited in WilmerHale’s January 2026 client alert, only 36% of boards have implemented a formal AI governance framework. Narrow the lens further and the picture worsens: just 6% of boards have established a dedicated AI committee. Meanwhile, only 8% of directors report strong AI expertise among their peers, according to Corporate Board Member’s “What Directors Think” survey for 2026.

The paradox is that awareness isn’t the issue. Forty-four percent of directors want AI elevated to the top of the board agenda. Sixty-two percent now set aside dedicated time for full-board AI discussions—a dramatic increase from prior years, per NACD data reported by Directors & Boards. The will is there. The architecture isn’t.

MIT research published in January 2026 identifies three primary reasons AI governance efforts fail: misaligned goals between technology teams and business leadership, unclear ownership of governance responsibilities, and unchanged workflows that treat AI as an incremental tool rather than a transformative force. These aren’t theoretical obstacles. They’re structural deficits that explain why 64% of boards remain exposed despite broad agreement that the risk is real.

The EqualAI/WilmerHale AI Governance Playbook for Boards, also released in January 2026, outlines four steps every board should take immediately: conduct a comprehensive assessment of AI use and risks across the organization, establish effective oversight structures, implement risk management protocols aligned with recognized frameworks, and empower teams to proactively leverage AI opportunities. As Jessica Lewis of WilmerHale noted, AI governance “has quickly become a legal and strategic imperative.”
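The playbook's first step, the organization-wide assessment, is the one most boards stall on, so it is worth making concrete. The sketch below is illustrative only and is not drawn from the EqualAI/WilmerHale playbook itself: the field names and risk tiers are assumptions, loosely patterned on the kind of use-case register that recognized frameworks such as the NIST AI RMF contemplate.

```python
# Illustrative only: a minimal AI use-case register for the playbook's first
# step (assess AI use and risks organization-wide). Field names and risk
# tiers are hypothetical, not taken from the EqualAI/WilmerHale playbook.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"          # e.g., employment, credit scoring, critical infrastructure

@dataclass
class AIUseCase:
    name: str                  # e.g., "resume screening assistant"
    business_owner: str        # the accountable executive, not the vendor
    risk_tier: RiskTier
    customer_facing: bool
    uses_personal_data: bool
    controls: list[str] = field(default_factory=list)  # documented mitigations
    last_reviewed: str = ""    # ISO date of the last governance review

def board_summary(register: list[AIUseCase]) -> dict[str, int]:
    """Roll the register up into the few numbers a board actually needs."""
    high = [u for u in register if u.risk_tier is RiskTier.HIGH]
    return {
        "total_use_cases": len(register),
        "high_risk": len(high),
        "high_risk_without_controls": sum(1 for u in high if not u.controls),
    }
```

Even a register this thin produces the Caremark-relevant artifact: documented evidence that a reporting system exists and that high-risk deployments without controls surface to the board rather than staying buried in engineering.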

Metric | Value | Source
Boards with formal AI governance framework | 36% | NACD via WilmerHale, Jan 2026
Boards with dedicated AI committee | 6% | NACD via WilmerHale, Jan 2026
Directors reporting strong AI expertise | 8% | Corporate Board Member, Feb 2026
Directors wanting AI at top of board agenda | 44% | Corporate Board Member, Feb 2026
Directors setting full-board AI discussion time | 62% | NACD via Directors & Boards, Feb 2026

II. The Legal and Regulatory Pressure Is Accelerating

The legal foundation for board-level AI governance liability is neither speculative nor emerging—it’s grounded in established Delaware corporate law and actively being extended by courts and scholars.

Delaware’s Caremark doctrine, analyzed by Columbia Law School’s Blue Sky Blog in January 2026, requires directors to make a good faith effort to implement and monitor reporting and control systems. The doctrine offers no protection, as the analysis makes clear, for “a sustained failure to install any system capable of bringing critical risks to the board’s attention.” Harvard’s Safra Center for Ethics identifies two independent prongs under which boards can be held liable: the “Information System Control” prong (directors failed to implement any reporting system) and the “Red Flags” prong (a system existed but the board consciously ignored its signals).

The stakes intensify under the Delaware Supreme Court’s “mission-critical” standard, established in Marchand v. Barnhill (2019), which demands more rigorous board oversight of operations central to the company’s business. The Harvard analysis concludes that AI-related risks may meet this threshold given their potential for outsized impact—particularly where AI is central to the business, embedded in core operations, deployed in high-risk areas, or involves routine use that creates foreseeable harm. Delaware courts have already applied Caremark analysis to cybersecurity oversight failures in the SolarWinds litigation, giving AI claims a direct doctrinal template.

Novel fiduciary duties are emerging alongside established doctrine. The Oxford Law Blog identified two new duties in January 2026: “AI due care” and “AI loyalty oversight,” which encompass ethical dimensions including bias, fairness, transparency, and explainability. An SSRN paper published the same month proposes a mandatory fiduciary architecture for board-level AI governance—a jurisdiction-agnostic standard that would convert fiduciary exposure into a concrete governance mandate. Forbes, writing in February 2026, identifies what it calls “the fiduciary vacuum”: the convergence of AI autonomy and eroded protections where no one is answerable for AI-driven decisions.

The regulatory landscape compounds the legal exposure. The EU AI Act’s high-risk obligations take effect in steps during 2026 and 2027. The SEC’s 2026 examination priorities specifically target automated investment tools and AI technologies—displacing cryptocurrency as the top concern. Gartner projects that by 2030, fragmented AI regulation will extend to 75% of the world’s economies. And activist investors, per Governance Intelligence reporting from February 2026, may soon scrutinize board-level AI governance as closely as they do ESG and cybersecurity. By the 2026 proxy season, investors expect boards to demonstrate AI literacy and document director training in their proxy statements.

Legal / Regulatory Development | Timeline | Source
Caremark doctrine applied to cybersecurity (SolarWinds) | Established precedent | Harvard Safra Center for Ethics
Novel “AI due care” and “AI loyalty oversight” duties | Emerging, 2026 | Oxford Law Blog, Jan 2026
EU AI Act high-risk obligations | 2026–2027 | EU AI Act implementation schedule
SEC examination priorities targeting AI | 2026 | SEC Division of Examinations, Nov 2025
AI regulation covering 75% of global economies | By 2030 | Gartner, Feb 2026
Investor AI literacy expectations in proxy statements | 2026 proxy season | Governance Intelligence, Feb 2026

III. The ROI Crisis Makes Inaction Doubly Dangerous

The fiduciary and regulatory arguments would be compelling on their own. But the business case makes inaction doubly dangerous.

PwC’s 2026 CEO Survey, encompassing 4,454 CEOs globally, found that 56% report no revenue gains from AI investments in the past twelve months. Only 12% occupy what PwC calls the “vanguard”—organizations actually seeing returns. Only 22% report using AI to a large or very large extent. The technology is being deployed. The value isn’t being captured.

The productivity-revenue gap tells the deeper story. While 71% of organizations investing $10 million or more in AI report significant productivity gains, according to EY’s December 2025 AI Pulse Survey, only 44% report actual revenue improvements per PwC’s data. Efficiency without strategy doesn’t reach the bottom line. Nearly half—47%—of senior leaders are reinvesting AI productivity gains back into more AI tools, creating a cycle of investment without strategic governance. Only 15% of organizations can prove the financial impact of their AI deployments, according to Cynozure research from February 2026.

The governance-performance link provides the bridge. A Gartner survey of 360 organizations found that those with AI governance platforms are 3.4 times more likely to achieve high effectiveness as measured by Gartner’s composite metric, which includes both operational and strategic outcomes. Correlation doesn’t prove causation, but the statistical significance across a substantial sample points to a structural relationship: governance is not just about compliance. It’s the mechanism through which AI value is captured.
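To make the multiplier concrete, here is what “3.4 times more likely” implies in raw rates. The baseline figure below is purely hypothetical (Gartner does not publish one in the cited material); it exists only to show the arithmetic.

```python
# Hypothetical arithmetic only. The 15% baseline is an assumption for
# illustration, not a Gartner figure; only the 3.4x multiplier is reported.
baseline_rate = 0.15   # assumed share of ungoverned orgs rated highly effective
multiplier = 3.4       # Gartner's reported likelihood multiplier
governed_rate = min(baseline_rate * multiplier, 1.0)
print(f"ungoverned: {baseline_rate:.0%}, governed: {governed_rate:.0%}")
# ungoverned: 15%, governed: 51%
```

Whatever the true baseline, a multiplier of that size means ungoverned deployment forfeits most of the upside the same investment could capture.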

Multiple factors contribute to the AI ROI gap—technology maturity, implementation quality, data readiness, talent availability. But governance provides the board-level lever to systematically address each of these through strategic alignment and resource allocation. AI governance urgency varies by sector: financial services, healthcare, and technology face the most immediate pressure; manufacturing and traditional industries have a longer runway but are not exempt.

The market is responding to this reality. Gartner projects AI governance platform spending will reach $492 million in 2026 and surpass $1 billion by 2030. The warnings from 2025—AI therapists operating without guardrails, private conversations made public, agentic tools wiping entire databases—underscore what happens when AI is deployed without governance.

AI ROI Metric | Value | Source
CEOs reporting no AI revenue gains | 56% | PwC CEO Survey, Jan 2026
CEOs in “vanguard” seeing real returns | 12% | PwC, Jan 2026
Organizations proving AI financial impact | 15% | Cynozure, Feb 2026
AI governance platforms → high effectiveness | 3.4× more likely | Gartner (360 orgs)
Projected AI governance platform spend (2026) | $492M | Gartner, Feb 2026
Projected AI governance platform spend (2030) | $1B+ | Gartner, Feb 2026

The Counterarguments—And Why They Fall Short

“AI governance is premature—the technology evolves too fast for fixed frameworks.” Caremark doesn’t require perfect frameworks. It requires a good faith effort to implement oversight systems. The absence of any framework is itself the liability exposure. Governance designed to be agile—reviewed quarterly, updated as the technology matures—satisfies the legal standard. Some governance is always better than none.

“This is a management issue, not a board issue.” Columbia Law School’s analysis explicitly places AI governance within the board’s Caremark oversight duty. The board’s role isn’t to manage AI operations. It’s to ensure that robust systems exist for AI to be designed, tested, deployed, and retired responsibly—a distinction the law makes clear.

“Our company doesn’t use high-risk AI.” The EU AI Act’s definition of high-risk is broader than most companies realize, encompassing AI in employment, credit scoring, critical infrastructure, and more. Even companies outside the Act’s scope face fiduciary exposure under Delaware law for any AI deployment that creates foreseeable risk.

“We already have a risk committee that covers this.” Having a risk committee is not the same as having an AI governance framework. Only 36% of boards have formal frameworks despite the majority having risk committees. At minimum, one committee must state plainly that AI forms part of its remit, with defined information flows, executive accountability, and assurance mechanisms.

Governance Implications: What This Means for Your Board

The convergence of legal, regulatory, and competitive forces creates a specific set of exposures that every board should evaluate immediately.

First, directors face personal liability under Caremark’s Information System Control prong if no AI governance system exists. This isn’t theoretical—it’s grounded in active Delaware case law. Second, the competitive disadvantage is quantifiable: organizations with governance platforms are 3.4 times more likely to achieve high AI effectiveness, meaning ungoverned boards are leaving measurable value unrealized. Third, regulatory risk is compounding: the EU AI Act, SEC priorities, and emerging state-level AI rules together create a multi-front compliance challenge.

Beyond legal exposure, AI’s ability to handle 50–60% of entry-level tasks demands board-level governance of human capital strategy. Activist investors are raising AI governance in engagement meetings. D&O insurers are beginning to factor AI governance maturity into coverage assessments, meaning directors on ungoverned boards may face increased personal exposure from their own insurance carriers.

Implementation Pathway: What to Do Monday Morning

These aren’t aspirational recommendations. They’re the minimum viable architecture.

Priority | Action | Responsible Party
1 | Establish or designate a board committee with explicit, documented AI oversight responsibility | Board Chair / Lead Director
2 | Appoint a C-suite executive accountable for the AI governance framework, integrated with risk, compliance, audit, and data protection | CEO
3 | Conduct an organization-wide AI inventory and risk assessment | CTO / CIO + General Counsel
4 | Implement an AI governance dashboard covering exposure metrics, control effectiveness, and incident tracking (see the sketch below this table) | CISO / Chief Risk Officer
5 | Require regular board education on AI capabilities, risks, and regulatory developments | Corporate Secretary
6 | Integrate AI governance into existing ERM frameworks; don’t build separate systems | Chief Risk Officer
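Priority 4 deserves unpacking, because “dashboard” can mean anything. A minimal sketch follows, with the caveat that every metric name and threshold here is an assumption for illustration, not a prescribed standard. The point is that the three dimensions named above map directly onto Caremark’s two prongs: the dashboard is the information system, and its alerts are the red flags the board can no longer claim not to have seen.

```python
# Illustrative only: the three dashboard dimensions from priority 4.
# Metric names and thresholds are hypothetical, not from any cited framework.
from dataclasses import dataclass

@dataclass
class GovernanceDashboard:
    # Exposure metrics: how much AI surface area the organization carries
    high_risk_systems: int
    systems_without_owner: int
    # Control effectiveness: are mitigations actually tested and passing?
    controls_tested: int
    controls_total: int
    # Incident tracking: Caremark's "Red Flags" prong, made visible
    open_incidents: int
    days_since_board_review: int

    def red_flags(self) -> list[str]:
        """Conditions that should escalate to the board under the oversight duty."""
        flags = []
        if self.systems_without_owner:
            flags.append(f"{self.systems_without_owner} AI systems lack an accountable owner")
        if self.controls_total and self.controls_tested / self.controls_total < 0.8:
            flags.append("under 80% of AI controls tested this cycle")  # threshold illustrative
        if self.days_since_board_review > 90:
            flags.append("no board-level AI review in the last quarter")
        return flags
```

The design choice that matters is the escalation path: a dashboard that never reaches the boardroom does nothing for the directors’ Caremark posture.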

Calibrated Predictions

Prediction | Time Horizon | Risk Band
First major board liability case related to AI oversight failure | 12–24 months | HIGH
EU AI Act high-risk obligations fully in effect | 2026–2027 | HIGH
Investor AI literacy expectations materialize in proxy statements | 2026 proxy season | MEDIUM
AI regulation extends to 75% of global economies | By 2030 | MEDIUM
AI governance platform spending surpasses $1B | By 2030 | LOW

Assumptions

These predictions rest on five assumptions: (1) EU AI Act enforcement proceeds on its published schedule; (2) Delaware courts extend Caremark reasoning from cybersecurity to AI oversight, building on the SolarWinds litigation; (3) the SEC continues targeting automated investment tools as stated in its 2026 examination priorities; (4) activist investors follow through on their signaled intent; and (5) AI deployment continues accelerating across industries, increasing the surface area for governance failures.

The Larger Architecture

This analysis forms part of Touch Stone Publishers’ ongoing examination of the board’s fiduciary obligation in AI governance. The 36% governance gap isn’t an isolated finding—it’s a symptom of a broader structural failure in how boards approach technology risk. Subsequent analysis in this series will examine the legal architecture in greater depth, the specific governance models that high-performing boards are adopting, and the economic frameworks that quantify the cost of inaction.

The window for proactive action is narrowing. The first major liability cases are likely within twelve to twenty-four months. The boards that act now—even imperfectly—will be demonstrably better positioned than those that wait for perfect frameworks or assume existing risk committees are sufficient.

Governance isn’t the obstacle to AI innovation. It’s the mechanism through which innovation creates durable value.
