DECISION RIGHTS IN THE AGE OF AGENTIC AI

Redefining Authority When Algorithms Act on Their Own

A White Paper by Touch Stone Publishers


THE TECTONIC SHIFT: WHEN ALGORITHMS BECOME ACTORS

The enterprise is undergoing a structural transformation that few executives fully comprehend. For the first time, artificial intelligence is not merely analyzing data or recommending actions. It is making decisions. It is executing them. It is steering itself toward delegated objectives without human intervention at each step.

This marks the transition from AI as analytical tool to AI as autonomous actor. These systems now plan their own sub-goals, allocate resources, engage with customers and partners, and execute complex workflows, all within parameters set at inception. This is not speculative technology. It is operational reality today. McKinsey's 2025 agentic research places the potential annual economic value unlocked by these systems at $2.6 to $4.4 trillion across enterprise applications.[^1][^2] The magnitude is staggering. So are the governance stakes.

Yet here lies the paradox that defines the present moment: boundless potential paired with systemic adoption failure.


THE PARADOX: WHY THE BEST TECHNOLOGY FAILS

The statistics are stark. A staggering 95% of enterprise AI pilots never progress beyond the experimental phase.[^3] Of the companies that do move forward, 74% fail to extract meaningful value after two years of effort.[^4] Only 1% of organizations report mature AI adoption across their operations.[^5] These are not implementation delays. They are structural governance failures.

The bottleneck is not technical. It is structural. The root cause lies not in the capabilities of the algorithms but in the obsolescence of the decision architecture that governs them.

Two-thirds of board members admit to having limited or no knowledge of artificial intelligence.[^6] This means that the very leaders accountable for enterprise risk are operating without foundational understanding of the systems now making autonomous decisions within their firms. They have delegated authority without understanding what they have delegated, to what they have delegated it, or how to govern the outcome.

This cascades into operational chaos. Seventy percent of the workforce will require retraining within three years due to AI displacement and transformation.[^7] The organization is being rewritten in real time, yet the decision rights that govern that rewriting remain rooted in 20th-century hierarchical approval models. Speed collides with governance. Risk multiplies. And executives discover too late that they have ceded control without establishing authority.


THE GOVERNANCE CRISIS: WHERE REGULATORY AND OPERATIONAL IMPERATIVES COLLIDE

The failure to redesign decision rights has become both a compliance crisis and an operational one.

Regulators are moving first. The European Union's AI Act codifies mandatory human oversight for high-risk systems, making governance a legal requirement, not merely a best practice.[^8] Systems that make consequential decisions about employment, credit, criminal justice, or safety now require documented human review and intervention capability. Non-compliance carries severe penalties. But compliance without clarity about decision rights creates something worse than chaos: it creates liability.

Simultaneously, the security landscape has shifted. The OWASP Top 10 for Agentic Applications identifies risks that were unimaginable in the traditional software security model: goal hijacking (where an agent pursues its delegated objective in unintended ways), tool misuse (where delegated capabilities are weaponized against the organization), and identity and privilege abuse (where an agent's delegated authority becomes a vector for attack).[^9] These are not edge cases. They are systemic vulnerabilities that emerge the moment you grant autonomous decision-making authority to a system without explicit architectural safeguards.
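To make these risks and their mitigations concrete, the sketch below illustrates one common architectural safeguard: an explicit tool allow-list with per-tool scopes and rate limits, so a delegated agent cannot invoke capabilities outside its grant. It is a minimal Python illustration; the class names, scopes, and limits are assumptions for this example, not an OWASP reference implementation.

```python
# Minimal sketch of tool allow-listing and privilege scoping for a delegated agent.
# All names (ToolGrant, AgentAuthority, scope strings) are illustrative, not a real framework API.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class ToolGrant:
    tool_name: str
    max_calls_per_hour: int
    allowed_scopes: frozenset  # e.g. {"read:crm"} but never {"write:payments"}


@dataclass
class AgentAuthority:
    agent_id: str
    grants: dict = field(default_factory=dict)       # tool_name -> ToolGrant
    call_counts: dict = field(default_factory=dict)  # tool_name -> calls used this hour

    def authorize(self, tool_name: str, scope: str) -> bool:
        """Refuse any call outside the explicit grant: unknown tool, wrong scope, or rate exceeded."""
        grant = self.grants.get(tool_name)
        if grant is None or scope not in grant.allowed_scopes:
            return False
        used = self.call_counts.get(tool_name, 0)
        if used >= grant.max_calls_per_hour:
            return False
        self.call_counts[tool_name] = used + 1
        return True


# Usage: the agent may read CRM records, but any attempt to touch payments is denied.
authority = AgentAuthority(
    agent_id="support-agent-01",
    grants={"crm_lookup": ToolGrant("crm_lookup", max_calls_per_hour=500,
                                    allowed_scopes=frozenset({"read:crm"}))},
)
assert authority.authorize("crm_lookup", "read:crm") is True
assert authority.authorize("payments_api", "write:payments") is False
```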

The operational reality is more brutal. A risk expert frames it bluntly: when speed turns into chaos, the system collapses.[^10] And that is exactly what happens when an organization grants decision-making authority to agentic systems without first clarifying which decisions they are authorized to make, under what conditions they must escalate to human judgment, how their actions will be audited, and who will be held accountable for them.

Traditional governance—built on hierarchical layers of human approval—is not merely slow. It is incompatible with the operational model that agentic AI demands. A human reviewing every decision made by an autonomous system is not oversight; it is a bottleneck that defeats the entire value proposition of automation. The model is broken. And the organizations that recognize this earliest will extract disproportionate competitive advantage.


THE CASE FOR DELEGATED AUTHORITY: WHAT SUCCESS LOOKS LIKE

Consider JPMorgan Chase, where the consequences of this transition have been operationalized into a blueprint for others.

JPMorgan did not attempt to maintain human approval over every AI decision. Instead, they extracted $1.5 billion in annual value from their AI initiatives by constructing a sophisticated three-layer governance model: Technical Controls (restricting what the system can technically do), Process Controls (defining the business rules and conditions under which it operates), and Human Oversight (establishing thresholds above which decisions escalate to human judgment).[^11] This is not a compliance framework bolted onto a technology implementation. It is a fundamental redesign of how authority is delegated and how accountability is maintained.

Their Human-AI Collaboration Framework makes this concrete. JPMorgan defines explicit Decision Thresholds for every AI application. Routine document review can be handled entirely by the AI—millions of contracts processed, legal clauses extracted, and simple language verified without human intervention. But the moment the system encounters ambiguity, novel contract structures, or clauses that exceed a defined complexity threshold, the decision escalates to a human attorney. The value flows from the system's ability to handle routine decisions at machine speed. The risk is contained by human authority over decisions that matter.
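JPMorgan's internal implementation is not public, so the sketch below is only a minimal illustration of the escalation pattern described above: routine document review is handled autonomously, while novel clauses or work above a complexity threshold routes to a human attorney. The field names and threshold value are assumptions for the example.

```python
# Minimal sketch of a decision-threshold router: routine review is handled autonomously,
# anything novel or over a defined complexity threshold is escalated to a human attorney.
# Thresholds and field names are illustrative assumptions, not JPMorgan's actual rules.
from dataclasses import dataclass


@dataclass
class ContractReview:
    contract_id: str
    complexity_score: float   # e.g. produced by an upstream classifier, 0.0 to 1.0
    contains_novel_clause: bool


COMPLEXITY_THRESHOLD = 0.7  # assumed value; in practice set by legal and risk owners


def route(review: ContractReview) -> str:
    """Return who decides: the system for routine work, a human attorney otherwise."""
    if review.contains_novel_clause or review.complexity_score >= COMPLEXITY_THRESHOLD:
        return "escalate_to_attorney"
    return "auto_approve"


print(route(ContractReview("C-1001", complexity_score=0.2, contains_novel_clause=False)))  # auto_approve
print(route(ContractReview("C-1002", complexity_score=0.9, contains_novel_clause=False)))  # escalate_to_attorney
```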

This stands in stark and instructive contrast to IBM's Watson for Oncology initiative at MD Anderson Cancer Center. Despite years of investment and genuine technological capability, the project yielded minimal clinical impact and was ultimately abandoned.[^12] The failure was not algorithmic. Watson's recommendations were clinically sound. The failure was organizational and structural: the system was never meaningfully integrated into physician workflows, it did not earn the trust of frontline clinicians, and the organization never clearly defined what role the AI would play in actual medical decision-making. Technology in search of a problem. Authority without integration. The tissue rejection was inevitable.

The contrast is stark. JPMorgan succeeded not by using superior technology but by establishing clear decision rights and building them into the operational fabric of the organization. Watson failed not because the algorithm was weak but because decision rights were ambiguous and the system was never granted the institutional authority to function.


THE MARKET'S RESPONSE: HOW SUCCESSFUL ENTERPRISES ARE RESTRUCTURING

The market has begun to correct for this governance gap, but not in the way most technology leaders expected.

Gartner forecasts a billion-dollar market for AI governance platforms, as organizations scramble to regain control and institute compliance.[^13] This is healthy. But technology alone is not the answer. The most successful organizations are discovering that the path to AI adoption runs through culture and trust, not more sophisticated controls.

H&M's experience provides crucial insight. When introducing AI for inventory optimization and design automation, the company deliberately chose its language with precision. They framed the initiative as "amplified intelligence," not "artificial intelligence." This was not marketing. This was structural: it meant employees retained decision-making authority and were trained to use AI as an instrument for exploration and optimization, not as a replacement for their judgment. Resistance dissolved. Adoption accelerated. The system succeeded because decision rights were clear: humans decided, AI amplified, humans owned the outcome.

Morgan Stanley faced a different challenge. Financial advisers using the bank's new GPT-4-powered assistant carried a reasonable skepticism. Would the system hallucinate? Would it provide unreliable information? Would it damage client relationships? The bank's response was architecturally elegant: they constructed a "walled garden." The AI system was restricted to drawing exclusively from verified internal research, eliminating the risk of false information while preserving the speed advantage of machine-assisted research. By tightly defining the scope of the system's authority—decision rights bounded to a verified knowledge base—they built trust. What began as a tentative pilot became a daily necessity for the advisory team.

Both examples reveal the same truth: the value of AI is not unlocked by granting it autonomy. It is unlocked by the quality of the human oversight that governs it, combined with clear definition of what decisions the system is authorized to make.


THE NEW LAW: RETIRING THE OLD PRINCIPLE

The evidence demands a structural shift.

The classic principle of hierarchical approval—where every significant decision requires multiple layers of human sign-off—is hereby retired. It served the industrial organization well. It is incompatible with the agentic era. It is too slow, too costly, and fundamentally misaligned with the speed and scale at which autonomous systems operate. It is a relic of a bygone governance model.

In its place, Touch Stone enacts a new law:

The Law of Delegated Authority: When you delegate a task, you delegate the responsibility for its execution, but you retain the accountability for its outcome.

This is not a minor semantic distinction. It is a structural shift in how authority flows through the organization.

Under hierarchical approval, the executive who signs off on a decision shares accountability for the decision itself. Authority and accountability are fused at every layer. This is why hierarchical approval is so slow—every layer of the organization must review, validate, and endorse every consequential decision. It is also why it is so costly—it requires armies of reviewers, approvers, and gatekeepers.

The Law of Delegated Authority severs this fusion. When you delegate a task to an autonomous system, you transfer responsibility for executing that task to the system. The system is responsible for performing the task correctly, efficiently, and within the bounds of its decision thresholds. But you, the leader, retain ultimate accountability for the outcome. You do not retain accountability for how each decision was executed; you retain accountability for whether that decision should have been delegated at all.

This requires a fundamental operational shift. It means moving from being human-in-the-loop for every decision to being human-on-the-loop for the system as a whole. The difference is profound. Human-in-the-loop means you are injected into every process, every decision point, every exception. You are a bottleneck. Human-on-the-loop means you establish the system's constraints, monitor its behavior, intervene when it violates those constraints, and hold it to account for outcomes. You are a governor, not a gatekeeper.
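The sketch below contrasts the two oversight modes in code. The constraint value and decision fields are illustrative assumptions; the structural point is that under human-on-the-loop, decisions execute autonomously and only constraint violations reach the human governor.

```python
# Minimal sketch contrasting human-in-the-loop with human-on-the-loop oversight.
# The exposure limit and decision fields are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Decision:
    decision_id: str
    exposure_usd: float
    within_policy: bool


MAX_SINGLE_EXPOSURE = 50_000.0  # assumed constraint set by the accountable leader


def human_in_the_loop(decisions):
    """Every decision waits for a human: safe, but a bottleneck at machine speed."""
    return [f"awaiting human approval: {d.decision_id}" for d in decisions]


def human_on_the_loop(decisions):
    """Decisions execute autonomously; the human is alerted only on constraint violations."""
    alerts = []
    for d in decisions:
        if not d.within_policy or d.exposure_usd > MAX_SINGLE_EXPOSURE:
            alerts.append(f"ESCALATE {d.decision_id}: exposure={d.exposure_usd}, policy_ok={d.within_policy}")
    return alerts


batch = [Decision("D-1", 1_200.0, True), Decision("D-2", 75_000.0, True), Decision("D-3", 900.0, False)]
print(human_on_the_loop(batch))  # only D-2 and D-3 reach the human governor
```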


OPERATIONALIZING THE LAW: THE THREE ARCHITECTURAL REQUIREMENTS

The Law of Delegated Authority is not aspirational. It is operational. It requires three architectural elements.

First: Explicit Decision Thresholds. Every agentic system must operate within a defined boundary. What decisions can it make autonomously? Under what conditions must it escalate to human judgment? What is the maximum financial, operational, or reputational exposure for a single autonomous decision? These are not nice-to-have governance questions. They are the foundational requirements for any delegated authority. JPMorgan's three-layer model is not elegant because it is complicated; it is elegant because it is explicit. Every system knows what it is authorized to do.
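One way to make thresholds explicit is to express them as versioned data rather than prose. The sketch below is a hypothetical policy for an imagined pricing agent; every field name and limit is an assumption, but the shape shows what "explicit" means in practice: enumerated autonomous decisions, enumerated escalation conditions, and hard exposure caps.

```python
# Minimal sketch of an explicit decision-threshold policy expressed as data, so every
# delegated system carries its boundary with it. Field names and limits are assumptions.
DECISION_POLICY = {
    "system": "pricing-agent",
    "autonomous_decisions": ["apply_standard_discount", "requote_within_list_price"],
    "escalation_conditions": [
        "discount_pct > 15",
        "customer_tier == 'strategic'",
        "contract_value_usd > 250000",
    ],
    "max_single_decision_exposure_usd": 250_000,
    "max_daily_aggregate_exposure_usd": 2_000_000,
    "escalation_target": "regional_pricing_lead",
}
```

A policy in this form can be reviewed and signed off by the accountable leader, versioned alongside the system, and checked at runtime rather than remembered.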

Second: Runtime Controls and Audit Trails. Delegated authority without audit is delegated irresponsibility. Every autonomous decision must be logged, traceable, and reviewable. If a system makes a decision that generates unexpected outcomes, you must be able to reconstruct the decision-making process, understand what parameters drove it, and determine whether the system acted within its authority or violated its thresholds. This is not surveillance; it is accountability. Without it, delegation becomes opacity.
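A minimal sketch of such an audit record, assuming a simple append-only JSON Lines log and illustrative field names, appears below. The essential property is that every autonomous decision produces a record tying its inputs, the action taken, the policy version in force, and the threshold checks together, so the decision path can be reconstructed and authority violations detected after the fact.

```python
# Minimal sketch of an append-only audit trail for autonomous decisions.
# Field names and file format are illustrative assumptions.
import json
import time
import uuid


def log_decision(system_id, inputs, decision, policy_version, threshold_checks,
                 outfile="decision_audit.jsonl"):
    """Append one replayable record per autonomous decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp_utc": time.time(),
        "system_id": system_id,
        "policy_version": policy_version,      # which authority boundary was in force
        "inputs": inputs,                      # what the system saw
        "decision": decision,                  # what it did
        "threshold_checks": threshold_checks,  # whether it stayed within its authority
    }
    with open(outfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record


log_decision(
    system_id="pricing-agent",
    inputs={"customer": "C-482", "requested_discount_pct": 12},
    decision={"action": "apply_standard_discount", "discount_pct": 12},
    policy_version="2025-11-03",
    threshold_checks={"discount_pct <= 15": True},
)
```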

Third: Radical Accountability for Outcome. This is where the Law of Delegated Authority becomes a cultural principle, not merely an operational one. As the leader who delegates authority, you own the outcome. You cannot claim the decision was "made by the algorithm." You cannot hide behind technical complexity. You cannot defer accountability to the implementation team. When a delegated system produces a poor outcome, the question is not "What went wrong with the AI?" The question is "What did you authorize the AI to do, and why did you authorize it?" This is uncomfortable. It is also essential.

As Michael Schrage and David Kiron frame it in MIT Sloan Management Review, this represents a "Great Power Shift" where intelligent choice architectures actively rewrite the decision rights of the organization.[^14] It is not that AI has become more powerful. It is that leaders are finally designing the decision architecture to match the capabilities of the technology rather than forcing the technology to fit into obsolete decision models.


THE CULTURAL REQUIREMENT: FROM INSTINCT TO ARCHITECTURE

Ultimately, the adoption of agentic AI is not a technology challenge. It is a leadership challenge.

Satya Nadella, CEO of Microsoft, stated it with clarity: "What's required is the hard work of culturally changing how they adopt technology."[^15] Technology adoption at enterprise scale succeeds or fails based on organizational readiness, trust, and alignment—not on the sophistication of the algorithms.

The shift required is one of leadership philosophy. Traditional leadership, particularly in the industrial era, was built on executive instinct—the belief that senior leaders, through experience and judgment, could make superior decisions. The C-suite gathered information, debated options, made calls, and expected execution. This model survives today in most organizations, even as the complexity of the decisions has exploded beyond the capacity of any individual mind to comprehend.

Agentic AI does not eliminate the need for executive judgment. It transforms it. The leader's role evolves from making individual decisions to building decision architecture. Instead of deciding which customer to prioritize, the executive designs the thresholds and constraints that will guide the agentic system in making thousands of customer prioritization decisions daily. Instead of approving a specific marketing campaign, the leader establishes the principles, budget constraints, and brand guidelines that will shape the AI's campaign generation and deployment.

This requires a different form of intelligence. It is not the intelligence of the instinctively correct decision. It is the intelligence of the well-designed system. It is the capacity to think in constraints, thresholds, feedback loops, and audit trails. It is the ability to build trust through transparency and accountability, not through control and approval.

As organizations move toward agentic systems, they are moving toward a model where human judgment and machine intelligence are fundamentally interdependent. The machine gets faster, smarter, more capable. But the humans who govern it must simultaneously grow in their capacity to think systemically, to design intelligently, and to hold themselves accountable for the outcomes of systems they did not directly control.

The question is no longer whether algorithms will act autonomously. They will. The question is whether your organization has the leadership capacity and the institutional architecture to govern them effectively.


THE IMPERATIVE: GOVERNING THE AUTONOMOUS ENTERPRISE

This moment is a threshold. On one side of it, organizations continue operating under hierarchical approval models that are fundamentally incompatible with agentic systems. They attempt to maintain human review of decisions that are too numerous, too fast, and too complex for human review. They cede control while maintaining the illusion of control. They generate compliance theater without genuine governance.

On the other side of this threshold lies the Intelligent Enterprise—an organization that has redesigned its decision architecture to match the capabilities of its technology, that has explicitly delegated authority to autonomous systems within clearly defined boundaries, and that has built the cultural capacity to hold itself accountable for the outcomes of those systems.

The gap between these two states is not a technical gap. It is a governance gap. It is a leadership gap.

The touch stone for an organization's readiness is not the sophistication of its AI systems. It is the clarity of its decision architecture. Can you articulate what decisions your agentic systems are authorized to make? Can you define the thresholds at which they must escalate to human judgment? Can you audit and explain the decisions they make? Can you hold yourself accountable for outcomes you did not directly control?

These are not rhetorical questions. They are the foundational questions of the agentic era. And they are urgent. Because the organizations that answer them first will extract disproportionate value from agentic AI. The organizations that fail to answer them will face regulatory exposure, operational chaos, and eventual obsolescence.

The Law of Delegated Authority is not a suggestion. It is a necessity. The question is not whether you will adopt it. The question is whether you will adopt it intentionally, with full recognition of its implications, or whether you will stumble into it through crisis and remediation.

The agentic era is not coming. It is here. And the organizations that succeed in it will be those that have redesigned their decision rights accordingly.


REFERENCES

[^1]: McKinsey & Company. (2025, September). "The Agentic Organization: Contours of the Next Paradigm for the AI Era." Retrieved from https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-agentic-organization-contours-of-the-next-paradigm-for-the-ai-era

[^2]: Palo Alto Networks. (2026, February). "A Complete Guide to Agentic AI Governance." Retrieved from https://www.paloaltonetworks.com/cyberpedia/what-is-agentic-ai-governance

[^3]: Duke Corporate Education. (2025, December). "Why Leaders Are Failing on AI." Retrieved from https://www.dukece.com/insights/why-leaders-are-failing-on-ai/

[^4]: Boston Consulting Group. (2025, December). "How Agents Are Accelerating the Next Wave of AI Value Creation." Retrieved from https://www.bcg.com/publications/2025/agents-accelerate-next-wave-of-ai-value-creation

[^5]: McKinsey & Company. (2025, September). Referenced in Palo Alto Networks governance guide.

[^6]: Deloitte. (2025). Referenced in Duke Corporate Education report.

[^7]: IBM 2025 CEO Study. Referenced in Duke Corporate Education report.

[^8]: European Commission. "EU AI Act." Retrieved from https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

[^9]: OWASP. (2025, December). "OWASP Top 10 for Agentic Applications 2026." Retrieved from https://genai.owasp.org/2025/12/09/owasp-genai-security-project-releases-top-10-risks-and-mitigations-for-agentic-ai-security/

[^10]: Dana Louw. (2026, February). "RACI for AI: Leadership Clarity in Decision Making." LinkedIn. Retrieved from https://www.linkedin.com/posts/dana-louw-mba-47b12112b_leadershipsystems-decisionrights-raci-activity-7425537507469033472-ap95

[^11]: Kernel Growth. (2025, December). "JPMorgan Chase's AI Strategy: $1.5B Savings with Human Oversight." Retrieved from https://kernelgrowth.io/human-oversight-ai-jpmorgan-strategy/

[^12]: Duke Corporate Education. (2025, December). "Why Leaders Are Failing on AI." Retrieved from https://www.dukece.com/insights/why-leaders-are-failing-on-ai/

[^13]: Gartner. (2026, February). "Global AI Regulations Fuel Billion-Dollar Market for AI Governance Platforms." Retrieved from https://www.gartner.com/en/newsroom/press-releases/2026-02-17-gartner-global-ai-regulations-fuel-billion-dollar-market-for-ai-governance-platforms

[^14]: MIT Sloan Management Review and Boston Consulting Group. (2025, January/November). "The Great Power Shift: How Intelligent Choice Architectures Rewrite Decision Rights" and "The Emerging Agentic Enterprise: How Leaders Must Navigate a New Age of AI." Retrieved from https://sloanreview.mit.edu/article/the-great-power-shift-how-intelligent-choice-architectures-rewrite-decision-rights/

[^15]: Duke Corporate Education. (2025, December). "Why Leaders Are Failing on AI." Retrieved from https://www.dukece.com/insights/why-leaders-are-failing-on-ai/


Word Count: 3,247

Classification: Fully Gated Revenue Product

Audience: Board / C-Suite

Publication Status: Ready for Publication
