Your Organisation Has an AI Operating Model. It Just Doesn't Know It Yet
Ask a senior executive whether their organisation has an AI operating model and the answer will almost certainly be: “We’re working on it.”
They are wrong. The model already exists. It is just not the one anyone designed.
Over the past two years, most organisations have accumulated a sprawling, invisible architecture of AI activity. A fraud detection pilot here. A customer-facing chatbot there. An underwriting model handed over by a vendor and never fully understood. A team in the back office using a large language model to summarise documents, with no one in governance aware it is happening. A data science team with a backlog of models that have no owner once the project closes.
This is your AI operating model. It emerged without intent, without design, and without accountability. And it is accumulating debt at a pace that dwarfs anything in your technical, process or organisational backlogs.
What a Shadow AI Operating Model Looks Like
The shadow AI operating model has a few consistent characteristics, regardless of industry.
Ownership is ambiguous. Models live inside projects, not capabilities. When the project closes, the model is orphaned. No one owns the drift, the retraining cycle, or the regulatory exposure.
Decisions are invisible. AI is making — or heavily influencing — consequential decisions about customers, risk, and operations. Those decisions are not logged, not reviewable, and not traceable back to a policy or control.
Governance is retrospective. Oversight panels and risk functions learn about AI deployments after the fact, often when something has gone wrong or when a regulator asks a question no one can answer.
Value is unmeasured. Pilots proliferate, but almost none are anchored to end-to-end value stream outcomes. Models are assessed on accuracy metrics, not on whether they reduced failure demand, shortened time-to-value, or improved right-first-time for the customer.
Data lineage is broken. The models are consuming data whose provenance, fitness, and ownership are unclear. The AI amplifies the data problem rather than solving it.
If any of this sounds familiar, you are not alone. This is the norm, not the exception.
Why This Matters More Than It Might Appear
The shadow AI operating model is not just an efficiency problem. It is a governance problem, a regulatory problem, and — increasingly — a liability problem.
Shadow AI is not an entirely new phenomenon. It is a subset of shadow IT — the long-standing problem of employees adopting technology outside the visibility of IT and governance teams. But shadow AI is categorically more dangerous. Unlike a rogue spreadsheet or an unsanctioned SaaS tool, a generative AI system may retain or train on the data it processes, generates outputs that can influence real decisions, and creates information exposure every time it is used. The risk does not sit still. It compounds.
The FCA’s Consumer Duty expects firms to demonstrate that products and services deliver good outcomes for customers. If AI is influencing those outcomes and no one can explain how, that is a material compliance gap.
The Bank of England and PRA are explicit: model risk must be governed. A model deployed without documented ownership, explainability standards, and monitoring protocols is an unmanaged risk on your balance sheet — even if no one has formally recognised it as such.
And the pace is accelerating. The arrival of agentic AI — systems that do not just make recommendations but take actions — means the shadow operating model will get significantly more complex before anyone notices.
What an Intentional AI Operating Model Needs
An AI operating model is not an AI strategy. Strategy describes ambition. An operating model describes how the organisation actually delivers.
An intentional AI operating model needs four things.
- Capability anchoring. Every AI deployment should be attached to a named capability in your capability map — not to a project. Fraud Detection, Loss Assessment, Customer Communication, Risk Pricing. When AI lives inside a capability, it inherits ownership, governance, and lifecycle management. When it lives inside a project, it becomes an orphan at go-live.
- Decision inventory. You cannot govern what you have not catalogued. Each value stream stage contains repeatable decisions. The inventory asks: which decisions is AI influencing or making? What is the risk classification of each? What is the human override protocol? This is not a compliance exercise — it is the foundation for responsible autonomy. The first sketch after this list shows what such a register could look like.
- Data fitness by stage. AI amplifies the quality of the information it consumes. Value streams expose the core information objects at each stage — Policy, Claim, Party, Coverage, Liability. Assessing data fitness against each AI use case before deployment prevents models from optimising noise.
- Outcome-centric measurement. Model accuracy is a starting point, not an ending point. The metrics that matter are end-to-end: time-to-value, right-first-time, failure demand rate, override rate, leakage. If AI improves a local metric while worsening the flow of value, it has failed — regardless of what the model scorecard says. The second sketch below shows how these measures might be computed.
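To make the first two items concrete, here is a minimal sketch of a capability-anchored decision inventory expressed as a machine-readable register. It is a hedged illustration: the RiskClass levels, the AIDecision fields, and the example entry are assumptions about what a firm might record, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    LOW = "low"        # e.g. internal drafting aids
    MEDIUM = "medium"  # influences a decision a human still makes
    HIGH = "high"      # makes or materially drives a customer outcome

@dataclass
class AIDecision:
    """One decision-inventory entry: a repeatable decision AI influences or makes."""
    decision: str            # the decision being made
    capability: str          # named capability from the capability map, never a project
    value_stream_stage: str  # where in the end-to-end flow the decision sits
    owner: str               # accountable named owner, inherited from the capability
    risk_class: RiskClass
    ai_role: str             # "recommends" or "decides"
    override_protocol: str   # how a human reverses or suspends the decision
    decision_logging: bool   # are individual decisions logged and reviewable?

# Illustrative entry only; not a real deployment.
inventory = [
    AIDecision(
        decision="Flag transaction as suspected fraud",
        capability="Fraud Detection",
        value_stream_stage="Payment Authorisation",
        owner="Head of Financial Crime",
        risk_class=RiskClass.HIGH,
        ai_role="recommends",
        override_protocol="Analyst review within two hours; kill-switch held by owner",
        decision_logging=True,
    ),
]

# The question the register makes answerable on demand:
ungoverned = [d for d in inventory
              if d.risk_class is RiskClass.HIGH and not d.decision_logging]
```

The useful property is the last line: once decisions are catalogued with their risk class and logging status, "which high-risk AI decisions are we not logging?" becomes a query rather than an investigation.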
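And a companion sketch for the measurement point, computing end-to-end measures from logged decision and case records. The field names (human_overrode, rework_required, caused_by_prior_failure) are hypothetical placeholders; real records will differ.

```python
from datetime import timedelta

def override_rate(decisions) -> float:
    """Share of AI decisions a human reversed; a rising rate signals drift or mistrust."""
    return sum(1 for d in decisions if d["human_overrode"]) / len(decisions)

def right_first_time(cases) -> float:
    """Share of cases resolved for the customer without rework or repeat contact."""
    return sum(1 for c in cases if not c["rework_required"]) / len(cases)

def failure_demand_rate(contacts) -> float:
    """Share of inbound demand caused by an earlier failure: chasing, correcting, complaining."""
    return sum(1 for c in contacts if c["caused_by_prior_failure"]) / len(contacts)

def mean_time_to_value(cases) -> timedelta:
    """Elapsed time from customer request to delivered outcome, end to end."""
    total = sum((c["resolved_at"] - c["requested_at"] for c in cases), timedelta())
    return total / len(cases)
```

None of these can be read off a model scorecard; they presuppose exactly the decision logging and case history that the register above demands.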
How to Start Managing the Risk
Surfacing the shadow model is the first step. Managing it requires four practical interventions.
- Audit usage. Use network monitoring to detect the AI tools already in circulation across your organisation. Most firms are surprised by the variety. Consumer-grade large language models, AI writing assistants, automated research tools — all processing internal data, all outside governance. You cannot manage what you cannot see. The sketch after this list shows one way audit findings could be classified against policy.
- Establish clear policies. Define which AI tools are approved, which are prohibited, and what the process is for requesting assessment of new ones. Ambiguity is not neutrality — it is permission by default. A one-page AI tool policy, clearly communicated, removes the “I didn’t know” defence and creates the foundation for accountability.
- Provide secure alternatives. People turn to shadow tools because the approved ones are inadequate, too slow to procure, or simply unavailable. If your organisation does not offer a sanctioned generative AI environment, employees will find one. Meet the need before the need finds its own answer. Restricting access without providing a credible alternative simply drives the behaviour further underground.
- Educate with real data. Generic AI awareness training rarely changes behaviour. Use real examples from your own organisation’s shadow AI usage to show employees what is actually at risk — customer data processed by a tool with opaque data retention policies, model outputs that cannot be explained to a regulator, intellectual property embedded in a third-party training corpus. That specificity lands differently than a slide deck about AI ethics.
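A hedged sketch of how the first two interventions could fit together: scan egress logs for AI tool traffic, then classify each finding against the policy lists. The domains and the one-pair-per-line log format are illustrative assumptions, not real tools or a product recommendation.

```python
# Illustrative policy lists; real ones come from the firm's approved-tool register.
APPROVED = {"approved-llm.internal.example.com"}
PROHIBITED = {"chat.example-consumer-ai.com", "free-ai-writer.example.net"}

def classify(domain: str) -> str:
    """Map an observed AI tool domain onto the policy."""
    if domain in APPROVED:
        return "approved"
    if domain in PROHIBITED:
        return "prohibited"
    return "unassessed"  # route to assessment: ambiguity is permission by default

def audit(egress_log_lines):
    """Tally AI-tool traffic by policy status from proxy or egress logs.
    Assumes one 'user domain' pair per line; real logs need proper parsing."""
    findings = {}
    for line in egress_log_lines:
        user, domain = line.split()
        findings.setdefault(classify(domain), set()).add((user, domain))
    return findings

# Example: one sanctioned tool in use, one unsanctioned.
sample = ["alice approved-llm.internal.example.com",
          "bob chat.example-consumer-ai.com"]
print(audit(sample))
```

The "unassessed" bucket is usually the revealing one: it is the list of tools no one has yet decided about, which is exactly where the shadow model lives.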
Where Business Architecture Fits
Business Architecture exists precisely for this moment.
The capability map, the value stream model, the information concept inventory, the decision catalogue — these are not theoretical artefacts. They are the scaffolding an organisation needs to make AI governable, measurable, and safe.
Without this scaffolding, every new AI deployment adds to the shadow operating model. With it, AI becomes something the enterprise can actually own.
The question is not whether your organisation should build an intentional AI operating model. It already has one. The question is whether you are willing to surface it, own it, and design it properly before the regulator, the auditor, or a failure event does it for you.
Time to Act?
So here is my challenge to you: describe your organisation’s current AI operating model in one paragraph. Name the owner, the governance body, the value streams it touches, and the outcome measures in place.
If you cannot, the shadow model is already in charge.
I suspect the conversation that exercise provokes would be one of the most valuable hours your leadership team spends this year.