On April 14, 2026, KPMG International and the INSEAD Corporate Governance Centre released the first globally coordinated framework for board-level AI oversight. The document does not read like a suggestion. It reads like a standard of care. Boards that cannot demonstrate structured, documented, and repeatable AI governance now face a compounding set of risks: regulatory exposure, shareholder scrutiny, and liability in courts that are developing precedent around fiduciary duty in the age of artificial intelligence.
The framework arrives at a decisive moment. The KPMG Global AI Pulse Survey, published alongside the principles, found that nearly three-quarters of boards possess only moderate or limited AI expertise. That gap, left unaddressed, is not merely a performance problem. It is a governance deficiency that regulators, plaintiffs’ counsel, and institutional investors are beginning to treat as such.
The Fiduciary Stakes Have Shifted
Directors have always been required to exercise informed oversight over material business risks. For decades, that obligation extended primarily to financial controls, compliance programs, and operational continuity. Artificial intelligence has expanded the perimeter of that duty in ways most boards have not yet fully absorbed.
WilmerHale’s analysis of board AI obligations, published in January 2026, identifies four structural requirements for directors: comprehensive assessment of AI use and associated risks across the entire organization; establishment of oversight structures with clear accountability mechanisms; implementation of risk management protocols aligned with recognized frameworks; and policies that enable responsible innovation without abandoning safeguards. Failure on any of these fronts creates potential liability exposure as courts begin adjudicating matters involving AI risk under Delaware fiduciary standards.
The Harvard Law School Forum on Corporate Governance identified AI governance as the defining board priority of 2026, alongside data governance, cybersecurity, and sustainability reporting. What distinguishes AI governance from prior technology cycles is the speed at which autonomous systems can make consequential decisions without human review. Directors who rely solely on management assurances without establishing independent oversight mechanisms are not meeting the standard the law is developing.
The Five Principles Boards Must Operationalize
The KPMG-INSEAD framework organizes board accountability across five distinct domains. Each domain carries a specific set of questions directors must be prepared to answer with documented evidence rather than management assurances.
The first principle addresses strategic oversight for long-term value creation. Boards must engage directly with how AI shapes the organization’s competitive position, risk profile, and long-term sustainability. This is not a function that can be delegated entirely to the technology committee or the CIO. The board as a whole bears responsibility for understanding how AI initiatives align with approved strategy and what scenarios could produce adverse outcomes at scale.
The second principle governs active technology and security oversight. AI systems introduce attack surfaces and dependencies that traditional cybersecurity frameworks did not anticipate. Directors must understand not only what AI tools the organization uses, but where those tools source their data, who controls the underlying models, and what the organization’s exposure would be if a vendor relationship failed or a model was compromised.
The third principle centers on workforce transformation and human accountability. AI deployment at scale changes the nature of human work within organizations. Boards have a governance responsibility to monitor how productivity gains are distributed, how skills degradation is being managed, and whether the organization can answer clearly who bears accountability when an AI system produces an adverse outcome. The KPMG-INSEAD framework is explicit on this point: governance structures must preserve human judgment in decision chains that carry material risk.
The fourth principle requires building trustworthy AI. This means establishing standards that reflect both the organization’s stated values and its regulatory obligations. In the United States alone, sector-specific AI regulations are proliferating in employment, financial services, and healthcare. Boards must ensure that compliance programs extend to AI systems and that there is a documented process for identifying when deployed AI intersects with regulated decisions.
The fifth principle addresses the work of the board itself. The framework is direct: board oversight processes must be adapted for AI. That adaptation includes director education, committee responsibility mapping, and regular reporting protocols that bring AI performance and risk data to the full board in a form that enables informed judgment.
The Expertise Gap Is a Governance Problem
The finding that nearly three-quarters of boards have only moderate or limited AI expertise is not a credential gap. It is a structural governance gap. Boards that lack sufficient AI literacy cannot ask the right questions of management, cannot evaluate the adequacy of risk frameworks being presented to them, and cannot determine whether assurances about AI safety and compliance are substantively correct or simply well-packaged.
The KPMG-INSEAD framework identifies board composition and director education as governance priorities, not aspirational goals. Institutional investors and proxy advisors are beginning to evaluate board AI competency as a material factor in governance ratings. The SEC’s current enforcement posture, which emphasizes individual accountability over corporate settlements, creates additional incentive for directors to demonstrate personally that they took their AI oversight responsibilities seriously.
Organizations that have already moved to address the expertise gap share several structural characteristics. They have established standing AI oversight mechanisms at the board level, whether a dedicated committee or a formal expansion of the audit or risk committee’s mandate. They require regular reporting from management that includes documented AI inventories, risk classifications, and third-party due diligence findings. They have invested in director education programs that provide working familiarity with AI concepts, not merely high-level awareness.
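The framework leaves the form of those inventories to each organization. Purely as an illustration, the sketch below shows one shape a single inventory record and its risk classification might take; every field name and tier definition here is an assumption made for this example, not language drawn from the KPMG-INSEAD document.

```python
# A minimal sketch of an AI inventory record with a risk classification.
# Field names and risk tiers are assumptions for this example; the
# KPMG-INSEAD framework does not prescribe a schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    ROUTINE = "routine"    # internal productivity tooling
    ELEVATED = "elevated"  # influences regulated or customer-facing decisions
    MATERIAL = "material"  # adverse outcomes could be material to the enterprise


@dataclass
class AISystemRecord:
    system_name: str
    business_owner: str                 # a named individual, not a job function
    decisions_influenced: list[str]     # e.g., credit approval, hiring screens
    data_sources: list[str]
    vendor: str | None                  # None if built and controlled in-house
    risk_tier: RiskTier
    human_review_required: bool
    last_vendor_due_diligence: date | None


def board_reporting_queue(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return the systems whose classification warrants standing board reporting."""
    return [r for r in inventory if r.risk_tier is RiskTier.MATERIAL]
```

The particular schema matters less than the properties it enforces: a named owner for every system, a documented data lineage, and a classification that determines what reaches the board.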
What the Regulatory Horizon Requires
The regulatory trajectory in 2026 is moving from principles to enforcement. The EU AI Act is in effect and imposes documented compliance obligations on organizations operating in European markets. United States federal agencies are publishing sector-specific guidance that operationalizes AI risk management requirements for financial institutions, federal contractors, and healthcare providers. State-level AI legislation is accelerating, with multiple jurisdictions enacting requirements around automated decision-making in employment and consumer transactions.
The consistent finding across regulatory frameworks is this: the expectation is not that organizations will achieve perfect AI governance immediately, but that they will demonstrate a documented, structured, and continuously improving governance process. Regulators are looking for evidence that boards took the obligation seriously, assigned accountability clearly, and created mechanisms for identifying and remediating problems.
Boards that can produce that evidence occupy a fundamentally different risk position than those that cannot. The organizations best positioned in this environment are those that treated AI governance as a board-level discipline before regulators required it.
The Board Imperative
The release of the KPMG-INSEAD AI Board Governance Principles establishes a global reference point for what informed directors are expected to know and do. The principles represent the convergence of regulatory expectation, institutional investor scrutiny, and developing legal standards around fiduciary duty.
Boards that have not yet established formal AI oversight structures should treat this framework as the baseline for immediate action. The first step is a structured assessment: what AI systems the organization operates, what decisions those systems influence, and what accountability mechanisms currently exist. The second step is accountability mapping: which committee owns AI risk oversight, what reporting the board receives, and what criteria determine when AI risk escalates to full board attention.
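To make that mapping concrete, the sketch below, which reuses the illustrative risk tiers from the inventory example above, shows one way an organization might encode committee ownership and escalation criteria. The committee assignments and thresholds are assumptions for this example, not prescriptions from the framework.

```python
# Illustrative accountability map, continuing the inventory sketch above.
# Committee names and escalation thresholds are assumptions for this
# example, not prescriptions from the KPMG-INSEAD framework.
from enum import Enum


class RiskTier(Enum):  # as defined in the inventory sketch above
    ROUTINE = "routine"
    ELEVATED = "elevated"
    MATERIAL = "material"


OVERSIGHT_OWNER: dict[RiskTier, str] = {
    RiskTier.ROUTINE: "management, reported to the board in aggregate",
    RiskTier.ELEVATED: "audit or risk committee",
    RiskTier.MATERIAL: "full board, standing agenda item",
}


def escalates_to_full_board(tier: RiskTier, incident_occurred: bool) -> bool:
    """Under this example's thresholds, a material-tier system always goes
    to the full board, as does any incident on an elevated-tier system."""
    return tier is RiskTier.MATERIAL or (
        incident_occurred and tier is RiskTier.ELEVATED
    )
```

However the thresholds are set, the essential feature is that escalation to the full board is governed by criteria agreed in advance, not by management discretion in the moment.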
The directors who will be in the strongest position when regulators, investors, or courts examine AI governance will be those who can show that they asked the right questions, established the right structures, and required the right answers before any problem forced them to do so. The KPMG-INSEAD framework tells boards precisely what those questions are. The obligation to answer them is already in effect.