Mastermind Group: Designing the Human-AI Accountability Contract
Level 1: The Macro-Trend — The Crisis of the Unassigned Task
Welcome to the working session. Today, we are not here to discuss the potential of AI. We are here to solve a structural crisis that is already unfolding in your organizations: the crisis of the unassigned task. Every time an autonomous AI system is deployed, it creates a new set of implicit tasks that are critical for its safe and effective operation, yet these tasks are rarely assigned to a specific owner. Who is responsible for monitoring the AI’s performance drift? Who is accountable for testing its outputs for bias? Who has the authority to intervene when it begins to operate outside its intended parameters? In most companies, the answer is no one. These are unassigned tasks, and they represent a systemic failure of governance that is creating unmanaged risk.
This is not a problem of technology; it is a problem of organizational design. We have become so focused on the capabilities of the AI itself that we have neglected to design the human systems that must surround it. We are deploying powerful, autonomous agents into organizational structures that were built for a world of human-only decision-making. The result is a dangerous ambiguity. When a task has no owner, it does not get done. When the task is monitoring a high-stakes AI system, the consequences of that neglect can be catastrophic.
Consider the data. 95% of enterprise AI projects are failing to deliver value [1]. This is not because the algorithms are flawed, but because the organizational context is not prepared for them. The projects that succeed are the ones that have solved for ownership before they have solved for scale. They have recognized that deploying an AI is not just a technical implementation; it is the creation of a new, hybrid human-AI team. And like any team, it requires clear roles, responsibilities, and lines of accountability to function.
This is the challenge we are here to solve today. We are not going to talk about AI in the abstract. We are going to get practical. We are going to deconstruct the failures of others, we are going to identify the unassigned tasks in your own organizations, and we are going to design a concrete, actionable accountability contract that you can implement tomorrow. The goal of this session is not to produce a report; it is to produce a plan. A plan that assigns an owner to every critical task in the AI lifecycle, from design to deployment to decommissioning. A plan that closes the accountability gap before it becomes a crisis.
Level 2: The Pressure Test — A Workshop on Assigning Ownership
Alright, let’s move from the what to the how. This is the workshop portion of our session. We are going to break down this problem into its component parts and, for each part, we will ask a simple question: who owns this in your organization right now? Be honest. If the answer is "I don't know" or "it's shared," that is the starting point for our work.
Forensic Data Analysis: The Unassigned Tasks in Your P&L
Let's start with the numbers. I want you to think about your own AI initiatives through the lens of these two statistics:
- The 95% Failure Rate: MIT's 2025 study is a brutal diagnostic tool [1]. If 95% of projects are failing due to a lack of clear ownership and misaligned goals, let's apply that to your own portfolio. Take the total capital you have invested in AI pilots over the last 24 months. Multiply it by 0.95. That number is the value of the unassigned task of strategic alignment and ownership. It is the cost of not having a single, accountable owner for each initiative from day one. Workshop Question 1: Who in your organization is accountable for the ROI of your AI portfolio as a whole? Not the budget, the return.
- The 3.4x Governance Multiplier: Gartner found that companies with dedicated AI governance platforms are 3.4 times more likely to achieve high effectiveness in their AI initiatives [2]. This is not about buying more software. It is about the organizational commitment that precedes the purchase. A governance platform only works if the roles, responsibilities, and rules have been defined first. The platform is the tool; the accountability architecture is the work. Workshop Question 2: Who owns the definition of your AI governance policies? And who owns their enforcement at runtime?
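The portfolio arithmetic above is worth making concrete. Here is a back-of-the-envelope sketch in Python; the invested amount is a placeholder, not a benchmark — substitute your own 24-month pilot spend.

```python
# Back-of-the-envelope cost of unassigned ownership, using the 95%
# figure from the MIT study cited above [1].
invested = 10_000_000   # hypothetical total AI pilot spend over 24 months
failure_rate = 0.95     # share of pilots failing to deliver value [1]

cost_of_unassigned_ownership = invested * failure_rate
print(f"${cost_of_unassigned_ownership:,.0f}")  # -> $9,500,000
```

Run this with your real portfolio number before the session; the output is Workshop Question 1 expressed in dollars.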
Case Study Deconstruction: The Two Questions Every Board Should Be Asking
Now let's move to case studies. These are not just stories; they are free lessons in what not to do, paid for by other companies' shareholders.
Case 1: Deloitte's AI-Generated Fabrications [3]
- The Failure: An AI tool produced a government report with fake legal citations. The human review process, designed to catch human error, missed it entirely.
- The Unassigned Task: The task of adversarial validation. This is not proofreading. It is actively trying to break the AI's output. It is asking: "How could this be wrong in a way I don't expect?" It is a red-teaming function that is almost always unassigned.
- Workshop Question 3: Who in your organization is responsible for the adversarial validation of your AI's outputs before they go to a client or the public?
Case 2: Boeing's MCAS Disaster [4]
- The Failure: An automated system made a fatal decision based on a single faulty sensor, and there was no effective human override or direct accountability line to the board.
- The Unassigned Task: The task of owning the kill switch. In every autonomous system, there must be a clear, unambiguous, and tested process for a human to take control. This authority cannot be buried in a sub-committee; it must be clear and immediate.
- Workshop Question 4: For your most critical AI system, who is the named individual with the authority to take it offline in a crisis, and have you ever tested that process?
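What "owning the kill switch" means in practice can be sketched in a few lines. The following is a hypothetical illustration, not a real product API: it assumes a deployment where the named Human Supervisor halts the system by creating a flag file, and every AI action checks that flag before proceeding.

```python
from pathlib import Path

# Hypothetical sketch of a human-owned kill switch. The flag-file
# mechanism and function names are illustrative assumptions.

def kill_switch_engaged(flag: Path) -> bool:
    """True once the supervisor has created the halt file."""
    return flag.exists()

def guarded_action(action, flag: Path):
    """Run one AI action, but only while the kill switch is disengaged."""
    if kill_switch_engaged(flag):
        raise RuntimeError("Halted by human supervisor")
    return action()
```

The mechanism itself is trivial; the hard part is the organizational half of the contract — a named individual who knows the flag exists, has the authority to set it, and has rehearsed doing so.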
Escalation & Market Response: The Contract You Signed Without Reading
Finally, let's talk about the contracts you have already signed with your AI vendors. I can almost guarantee you have agreed to two things: a strict limitation of the vendor's liability and a broad indemnification clause that makes you responsible for defending them in court [5]. You have contractually accepted the risk.
This creates an urgent, unassigned task: the task of risk acceptance. Someone in your organization made a decision to accept the risk transferred to you by that contract. But was it an explicit decision? Was the risk quantified? Was it approved at the right level?
Workshop Question 5: Who is the executive owner responsible for reviewing and formally accepting the residual liability from our AI vendor contracts?
This is the work. Answering these five questions is the first step to designing an accountability architecture that works. It is about taking the implicit, unassigned tasks that are currently floating in your organization and assigning them to a named, accountable owner. That is what we will codify in the next section.
Level 3: The Codification — The Human-AI Accountability Contract
We have identified the unassigned tasks. Now, we codify the solution. The output of this session is a clear, actionable accountability contract. This is not a legal document; it is an operating document. It is a simple, one-page charter that defines the roles and responsibilities for governing your autonomous AI systems. It is the answer to the question: who owns what?
The Retired Classic Principle: The Idea of the "Super-User"
For years, we have relied on the concept of the "super-user" or the "business owner" to bridge the gap between technology and the business. This was the person who knew the system best and could translate business needs into technical requirements. This model is no longer sufficient. An AI is not just a system to be used; it is an agent that acts. The owner is not just a user; they are accountable for the agent’s behavior. The super-user model is a passive, reactive stance. We need an active, proactive model of ownership.
The New Touchstone Law: The Law of Designed Accountability
This law is the foundation of our accountability contract. It mandates that we treat ownership as a design choice. Here is the contract, built around the five unassigned tasks we identified. Your homework is to take this back to your teams and fill in the names.
The Human-AI Accountability Contract
| Task | Role / Owner | Key Responsibilities | Metric of Success |
|---|---|---|---|
| 1. Strategic Alignment & ROI | AI Portfolio Owner (Executive Sponsor) | Accountable for the overall business case and ROI of the AI portfolio. Ensures alignment between AI initiatives and corporate strategy. Reports to the board on AI performance and risk. | % of AI projects that meet or exceed their stated business case. Total portfolio ROI. |
| 2. Governance Policy & Enforcement | AI Governance Lead (or Council) | Defines and maintains the organization's AI policies and standards. Owns the AI governance platform and its configuration. Responsible for auditing compliance with AI regulations (e.g., EU AI Act). | Time to audit and approve a new AI model. # of policy violations detected and remediated per quarter. |
| 3. Adversarial Validation | AI Red Team Lead | Responsible for the independent, adversarial testing of all AI models before deployment. Designs and executes tests to identify novel failure modes (e.g., bias, hallucination, security vulnerabilities). Reports findings directly to the AI Portfolio Owner. | # of critical vulnerabilities identified pre-deployment. % of models that pass red team review on the first attempt. |
| 4. The Kill Switch | Human Supervisor (named individual per system) | The named individual with the authority and technical ability to override or shut down a specific AI system in a crisis. Responsible for executing the emergency shutdown procedure. Accountable for the decision to intervene (or not intervene). | Time from crisis identification to successful system override (tested quarterly). Zero unauthorized overrides. |
| 5. Vendor Risk Acceptance | Chief Risk Officer (or equivalent) | Owns the process for reviewing and formally accepting the residual risk from AI vendor contracts. Quantifies the potential financial and legal exposure from liability caps and indemnification clauses. Approves all high-risk AI vendor contracts. | % of AI vendor contracts with a completed and approved risk assessment. Total quantified risk exposure from vendor contracts. |
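One way to keep the contract operational rather than aspirational is to store it as machine-readable data that dashboards or audit tooling can consume. Here is a minimal Python sketch under that assumption; the task keys mirror the table above, and every owner field is a placeholder your teams fill in.

```python
# The accountability contract from the table above, expressed as data.
# Owner names are deliberately left as "TBD" -- the homework is to
# replace every one of them with a named individual.
ACCOUNTABILITY_CONTRACT = {
    "strategic_alignment": {"role": "AI Portfolio Owner", "owner": "TBD"},
    "governance_policy":   {"role": "AI Governance Lead", "owner": "TBD"},
    "adversarial_validation": {"role": "AI Red Team Lead", "owner": "TBD"},
    "kill_switch":         {"role": "Human Supervisor",    "owner": "TBD"},
    "vendor_risk_acceptance": {"role": "Chief Risk Officer", "owner": "TBD"},
}

def unassigned_tasks(contract: dict) -> list[str]:
    """Return the tasks that still lack a named owner."""
    return [task for task, entry in contract.items()
            if entry["owner"] == "TBD"]
```

A weekly check that `unassigned_tasks` returns an empty list is a simple, auditable proxy for whether the accountability gap is actually closed.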
This is your plan. It is a starting point. It takes the five most critical, and most often unassigned, tasks in the AI lifecycle and gives them an owner. It is the first step to moving from a state of implicit, ambiguous accountability to one of explicit, designed ownership. It is how you close the accountability gap. Your work starts now.
References
[1] MIT. (2025, August 18). 95% of Enterprise AI Pilots Are Failing. Fortune. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
[2] Gartner. (2026, February 17). Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms [Press Release]. https://www.gartner.com/en/newsroom/press-releases/2026-02-17-gartner-global-ai-regulations-fuel-billion-dollar-market-for-ai-governance-platforms
[3] Rojas Merino, M. E. (2025, October 10). The Deloitte AI Failure: A Wake-Up Call for Operational Risk. Pirani Risk. https://www.piranirisk.com/blog/the-deloitte-ai-failure-a-wake-up-call-for-operational-risk
[4] Larcker, D. F., & Tayan, B. (2024, June 6). Boeing 737 MAX. Harvard Law School Forum on Corporate Governance. https://corpgov.law.harvard.edu/2024/06/06/boeing-737-max/
[5] Loring, J. M., & Sevener, L. (2025, September 15). AI Vendor Liability Squeeze: Courts Expand Accountability While Contracts Shift Risk. Jones Walker LLP. https://www.joneswalker.com/en/insights/blogs/ai-law-blog/ai-vendor-liability-squeeze-courts-expand-accountability-while-contracts-shift-r.html