The question arrives in the Strategy Office at the worst possible time. The CEO has been told by a board member, a consultant, or a competitor announcement that the company should “buy a capability” or “build a platform.” The pressure is real. The analytical scaffolding is not. The decision that follows is made on conviction, relationship, or momentum rather than on a documented framework the organization can defend, revisit, and improve.
This is the execution gap the Strategy Office exists to close.
This paper presents the framework Touch Stone Publishers Limited has developed for AI portfolio decisions across the build, buy, and partner spectrum. Version 2.0 rebuilds the prior vendor decision framework with five orthogonal inputs, adds regulatory exposure and concentration risk as first-class analytical inputs, and surfaces the ethics and lawful use intersection points that earlier versions did not address. The intended reader is the Chief Strategy Officer or Chief Development Officer who owns this decision process and will present the results to the CEO and the board.
Five Inputs That Do Not Collapse Into Each Other
The central problem with most vendor and build frameworks is dimension collapse. Two inputs that appear distinct turn out to measure the same underlying variable. The rubric produces a score, but the score double-counts the exposure that matters most.
Version 2.0 was rebuilt explicitly to prevent this. Each of the five inputs measures a distinct exposure. None subsumes another.
Strategic differentiation. Does this capability differentiate the company in its market, or is it a commodity? The test runs on three axes: how peers have positioned the same capability, how customers perceive the differentiation, and whether the differentiation holds over a 36-month horizon. A capability that every competitor will deploy within 18 months scores differently than one the company can protect.
Total cost of ownership over 60 months. The 60-month window is not arbitrary. It is the horizon at which switching costs, talent dependency, and accumulated operational risk become visible. The calculation includes vendor consumption costs at expected run rate, build amortization, post-36-month migration cost, talent retention and replacement cost, and the operational risk cost of downtime and integration failure. Switching cost and talent dependency are components of TCO on this horizon, not separate inputs. Treating them as separate inflates the dimension count without adding analytical precision.
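The component roll-up described above can be sketched in a few lines. This is a minimal illustration, not the playbook's actual model: the component names follow the categories the framework lists, but every dollar figure here is hypothetical.

```python
# Illustrative 60-month TCO roll-up. Component categories follow the
# framework; all figures are hypothetical placeholders.

def tco_60_months(components: dict[str, float]) -> float:
    """Sum all cost components over the 60-month horizon."""
    return sum(components.values())

vendor_option = {
    "vendor_consumption_at_run_rate": 4_200_000,  # expected usage pricing
    "build_amortization": 0,                      # n/a for a pure buy
    "post_36_month_migration": 900_000,           # switching cost lives inside TCO
    "talent_retention_and_replacement": 600_000,  # talent dependency, also inside TCO
    "operational_risk_cost": 350_000,             # downtime and integration failure
}

total = tco_60_months(vendor_option)
print(f"60-month TCO: ${total:,.0f}")  # prints "60-month TCO: $6,050,000"
```

Note that switching cost and talent dependency appear as line items inside the dictionary, mirroring the framework's point that they are components of TCO rather than separate inputs.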
Regulatory exposure. This is a new standalone input in v2.0, elevated from a sub-note in the prior version. The EU AI Act compliance clock is running: foundation model designation obligations and high-risk system classification requirements under Annex III take effect August 2, 2026. Beyond the EU, sector regulators are not waiting. Model risk frameworks for banks, the FDA's software-as-medical-device pathway for pharma, HIPAA applicability for health systems, and FCRA exposure for credit underwriting each create a distinct regulatory profile that varies by vendor architecture and build choice. Data sovereignty constraints and export controls on hardware and model weights add to the surface area. The regulatory input scores the full exposure profile of the decision, not just the immediately visible compliance item.
Concentration risk. Also new as a standalone input in v2.0. This is measured at the portfolio level, not the contract level. Vendor concentration at the parent-company level (the consolidated exposure to a single vendor across multiple product lines and business units) is a different risk than any single contract represents. Foundation-model provider concentration is a separate metric within this input: reliance on a single model family, or on one provider's models across families, is a concentration vector with its own thresholds. When more than 70 percent of model-dependent inference budget runs through a single provider, that exposure is board-visible and requires a documented mitigation plan.
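The 70 percent threshold lends itself to a simple portfolio check. The threshold comes from the framework; the provider names and budget figures below are hypothetical.

```python
# Sketch of the portfolio-level provider concentration check. The 0.70
# threshold is the framework's board-visibility line; names and figures
# are invented for illustration.

def provider_shares(inference_budget: dict[str, float]) -> dict[str, float]:
    """Share of model-dependent inference budget per provider."""
    total = sum(inference_budget.values())
    return {p: spend / total for p, spend in inference_budget.items()}

def board_visible(inference_budget: dict[str, float],
                  threshold: float = 0.70) -> list[str]:
    """Providers whose share exceeds the threshold and therefore
    require a documented mitigation plan."""
    return [p for p, share in provider_shares(inference_budget).items()
            if share > threshold]

budget = {"provider_a": 7_500_000, "provider_b": 1_500_000, "provider_c": 1_000_000}
print(board_visible(budget))  # provider_a holds 75% of the budget
```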
Disclosure exposure. Some vendor architectures preclude the attribution methodology the company will need for its investor disclosures. Some build architectures create capitalization complexity the audit committee will not accept. The disclosure input scores the constraints the decision places on the company's future representations to investors, customers, and regulators. It is forward-looking by design: the question is not what the company has disclosed, but what the decision forecloses.
The five inputs are scored on a 1 to 5 scale using the rubric in the playbook appendix. The score is documented. The audit trail is producible on demand.
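A documented, producible score implies a structured record rather than a slide. One possible shape, assuming nothing about the playbook's actual tooling, is a frozen record that rejects any score outside the 1-to-5 scale and carries its own rationale as the audit trail:

```python
# Minimal sketch of a documented five-input score. Field names track the
# framework's five inputs; the record shape and validation are assumptions.
from dataclasses import dataclass, asdict

INPUTS = ("strategic_differentiation", "tco_60_month", "regulatory_exposure",
          "concentration_risk", "disclosure_exposure")

@dataclass(frozen=True)
class DecisionScore:
    decision: str
    strategic_differentiation: int
    tco_60_month: int
    regulatory_exposure: int
    concentration_risk: int
    disclosure_exposure: int
    rationale: str  # the audit trail: why each score was assigned

    def __post_init__(self):
        for name in INPUTS:
            value = getattr(self, name)
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be scored 1-5, got {value}")

score = DecisionScore(
    decision="vendor_x_inference",
    strategic_differentiation=2, tco_60_month=3, regulatory_exposure=4,
    concentration_risk=5, disclosure_exposure=3,
    rationale="Commodity capability; Annex III exposure; single-provider reliance.",
)
print(asdict(score)["concentration_risk"])  # prints "5"
```

Because the record is immutable and serializable, it can be filed at decision time and produced unchanged on demand.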
When the Decision Is an Acquisition
The Strategy Office is increasingly the owner of a decision that sits between build and buy: acquiring an AI capability outright. The M&A decision runs through the five-input framework and adds three additional inputs specific to the acquisition context.
Acquisition diligence in AI is different from standard M&A diligence in ways that most acquirers have underweighted. Talent retention modeling matters more here than in most sectors because the capability often walks out with the team. Model architecture review requires technical depth that the typical M&A advisor does not bring. Training data provenance review is not optional: copyright exposure and unlicensed data claims in the target's training corpus become the acquirer's exposure at close.
Integration cost deserves its own line. Published M&A research across sectors suggests that acquired-team productivity drops by 40 to 60 percent in year one at most acquirers. AI capability acquisitions, where the product is inseparable from the people who built it and the culture that sustained it, face this dynamic at least as severely. The integration cost model runs the full 36-month integration horizon.
Inherited disclosure exposure is the input most often missed. The acquirer inherits the target's prior representations to investors, customers, and regulators about its AI capabilities. If those representations were aggressive, the acquirer owns that aggressiveness. Training data provenance claims, AI performance representations, and any prior FTC or regulatory correspondence are part of the diligence package, not a post-close discovery.
The Strategy Office's recommendation on an acquisition goes to the CEO and the board. The audit committee reviews the disclosure exposure inheritance analysis. The General Counsel concurs before the recommendation is final.
The Portfolio Is Not Static
A vendor or build decision that was correct at the time it was made can become incorrect. Markets move. Costs diverge from plan. Regulation arrives faster than modeled. The quarterly portfolio revisit is the mechanism that catches this before the exposure becomes a board-level problem.
Three triggers force a revisit before the regular quarterly cycle: any major decision running more than 15 percent above plan on 60-month TCO; any major decision where the original differentiation has been eroded by market commoditization or where a new opportunity has materially changed the calculus; and any new guidance from the SEC, the EU AI Office, a sector regulator, or a proxy advisor that materially affects the architecture.
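The trigger logic above is simple enough to encode. The 15 percent threshold is the framework's; the function signature and the boolean flags for the second and third triggers are assumptions made for illustration.

```python
# Sketch of the early-revisit trigger check. The 1.15 multiplier encodes
# the framework's 15-percent-over-plan threshold; the flag inputs for the
# other two triggers are simplifying assumptions.

def needs_early_revisit(actual_tco: float, planned_tco: float,
                        differentiation_eroded: bool,
                        new_regulatory_guidance: bool) -> bool:
    """True if any of the three triggers forces a revisit ahead of the
    regular quarterly cycle."""
    over_plan = actual_tco > planned_tco * 1.15  # >15% above plan on 60-month TCO
    return over_plan or differentiation_eroded or new_regulatory_guidance

# A decision tracking 20 percent over its planned 60-month TCO:
print(needs_early_revisit(6_000_000, 5_000_000, False, False))  # prints "True"
```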
The quarterly revisit produces a one-page summary per major decision. The CIO and CFO sign. The audit committee sees the summary.
The annual board briefing covers six points: portfolio composition (build, buy, partner allocation by category), portfolio performance (realized P&L impact attributed to each major decision), portfolio risk (vendor, talent, technology, and regulatory concentration), portfolio forward look (decisions in the next 12 months with framework inputs that will drive each), competitive intelligence (material changes in competitor posture), and a directional recommendation for portfolio rebalancing. The board approves the directional recommendation.
Where Ethics and Lawful Use Intersect the Framework
Branch 9, which covers AI ethics, bias, and lawful use, intersects Strategy Office decisions at two specific points.
Vendor diligence under the five-input framework includes a Branch 9 review of training data provenance, bias audit availability, and conformity to applicable jurisdictional law. Those findings flow directly into the regulatory exposure and disclosure exposure inputs of the rubric. They are not a separate ethics checklist appended after the scoring is done.
Internal builds involving customer-facing AI (pricing, lending, underwriting, recommendation engines) are subject to Branch 9 testing protocols before deployment. The Strategy Office surfaces this dependency in any build recommendation that involves customer-facing AI. The protocol is not a gate that slows the recommendation; it is a component of the build cost and timeline that makes the recommendation accurate.
What This Framework Does
The build, buy, and partner decisions that determine an organization's AI capability profile are among the most consequential capital allocation decisions the Strategy Office will make. They compound. A vendor decision made on relationship quality rather than TCO analysis becomes a concentration risk in year three. An acquisition that skipped diligence on training data provenance becomes an inherited disclosure problem in year two.
The five-input framework does not eliminate these risks. It makes them visible, documentable, and manageable before they become problems. Decisions that were individually defensible but collectively chaotic become a portfolio managed at the same standard as any other material capital allocation. That is what the Strategy Office is for.
The organization whose Strategy Office is doing this work enters the next 36 months of AI execution with a portfolio it can defend to its board, its regulators, and its shareholders. The one that is not doing this work is accumulating exposure it cannot yet see.
Glenn E. Daniels II is the founder of Touch Stone Publishers Limited.