Capability Outcomes > Capability Maturity
Thanks to all for your comments on Rebooting Business Capabilities. There were some very interesting observations, as well as questions about the capability SLOs. I have tried to answer most of those queries in this article.
We’ve all seen the 1–5 “maturity” slides. They look scientific, but they rarely change investment decisions this quarter. Leaders fund outcomes—speed, cost, risk, resilience, experience—not abstract levels. This article replaces maturity scores with a simple, decision-ready system: Outcome Tiers tied to SLOs and unit cost.
Why ditch maturity scores?
- They’re subjective (long questionnaires, little agreement).
- They’re slow to refresh and easy to game.
- They don’t tell finance what to fund next or by how much.
Outcome Tiers solve this by anchoring each capability to a few hard numbers: SLOs (speed/quality/resilience), unit cost, and control coverage—with clear triggers to move up a tier.
Define Outcome Tiers
I have used this model in a previous engagement, and it worked.
Tier 0 — Unstable
- SLO breaches are common; incidents frequent
- No single owner; data unreliable
- Unit cost unknown; control gaps open
Tier 1 — Managed
- SLOs defined; ≥70% met
- Single named owner; basic monitoring
- Unit cost estimated (±30%)
- Core controls in place; evidence partial
Tier 2 — Efficient
- SLOs met ≥90% of the time
- Unit cost benchmarked and trending down
- Duplicate systems/processes reduced
- Control coverage ≥95% with auditable evidence
Tier 3 — Advantage
- SLOs consistently exceeded; visible in customer/colleague NPS
- Unit cost top quartile; elasticity/scalability proven
- Capability productised and reused across domains
- Controls “by design” with real-time evidence
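The rubric above can be encoded so tier assignment stops being a debate. This is a minimal sketch, assuming the thresholds from the tier definitions; the function and field names are my own, not a standard:

```python
def assign_tier(slo_attainment, unit_cost_confidence, control_coverage, has_owner):
    """Map a capability's hard numbers onto an Outcome Tier (0-3).

    slo_attainment: fraction of SLOs met (0.0-1.0)
    unit_cost_confidence: "unknown", "estimated" (i.e. +-30%), or "benchmarked"
    control_coverage: fraction of controls with auditable evidence (0.0-1.0)
    has_owner: True if a single named owner exists
    """
    # Tier 3 (Advantage) also needs reuse/elasticity evidence, which is a
    # judgement call -- this sketch deliberately stops the automation at Tier 2.
    if (slo_attainment >= 0.90 and unit_cost_confidence == "benchmarked"
            and control_coverage >= 0.95):
        return 2  # Efficient
    if (slo_attainment >= 0.70 and has_owner
            and unit_cost_confidence in ("estimated", "benchmarked")):
        return 1  # Managed
    return 0      # Unstable

# Example: 82% SLO attainment, estimated unit cost, named owner -> Tier 1
tier = assign_tier(0.82, "estimated", 0.80, True)
```

Keeping the thresholds in one function means the heatmap, the register, and finance all see the same answer for the same numbers.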
Tip: Keep 3–5 metrics per capability. If you need more to tell the story, your story isn’t crisp enough.
Build the upgrade path (what moves the capability)
For each capability, pre-agree the 2–3 changes that unlock the next tier.
Example: Customer Onboarding (current Tier 1 → Tier 2)
- SLO gaps closed: median time ≤ 2 days; first-time-right ≥ 95%
- Cost action: consolidate to single IDV + address service (retire 2 vendors)
- Control uplift: QA fails ≤ 2%; evidence fully automated
- Owner commitment: publish SLO card monthly; embed in product OKRs
Example: Payments Execution (Tier 2 → Tier 3)
- Experience: P1 latency ≤ 200ms p95; failure rate ≤ 0.1%
- Resilience: active-active across 2 regions; failover ≤ 60s tested quarterly
- Economics: unit cost top quartile vs peers; price/chargeback visible to consumers
- Reuse: 80% of new flows adopt shared payments capability
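A pre-agreed upgrade path can be checked mechanically. Here is a sketch of a tier-move gate for the Customer Onboarding example above; the metric names and operator encoding are illustrative assumptions:

```python
# Hypothetical gate for Customer Onboarding, Tier 1 -> Tier 2,
# using the targets from the example above.
TIER2_GATE = {
    "median_onboarding_days": ("<=", 2.0),
    "first_time_right_pct":   (">=", 95.0),
    "qa_fail_pct":            ("<=", 2.0),
}

def remaining_gaps(metrics, gate=TIER2_GATE):
    """Return the gate criteria this capability has not yet met."""
    ops = {"<=": lambda actual, target: actual <= target,
           ">=": lambda actual, target: actual >= target}
    return [name for name, (op, target) in gate.items()
            if not ops[op](metrics[name], target)]

# Example: onboarding still takes 3.1 days, so one gap remains
gaps = remaining_gaps({"median_onboarding_days": 3.1,
                       "first_time_right_pct": 96.0,
                       "qa_fail_pct": 1.5})
```

The point is that "what unlocks the next tier" is a list the owner can close, not an opinion.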
Funding triggers (rules finance can use)
Tie money to clear moves up the tier ladder:
- Run-rate payback: Fund upgrades that hit Tier-move metrics with <12-month payback (via duplicate spend retired, FTE capacity released, vendor consolidation).
- Control criticality: Immediate funding when a Tier-move closes red risks or audit findings.
- Reuse multiplier: Premium funding when upgrade enables ≥2 domains to reuse (shared platform/product).
- Stop rule: Freeze new spend on parallel tech once a shared capability reaches Tier 2 (exceptions expire in 90 days).
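Because the triggers are rules, they can be expressed as a decision function finance can audit. A minimal sketch, assuming one proposal at a time and a precedence I have chosen (stop rule first, then control criticality, then reuse, then payback):

```python
def funding_decision(payback_months, closes_red_risk, reuse_domains,
                     parallel_to_shared_tier2=False):
    """Apply the four funding triggers to one upgrade proposal.

    Returns one of: "freeze", "fund-immediately", "fund-premium",
    "fund", "defer". Order matters: the stop rule and control
    criticality override the payback test.
    """
    if parallel_to_shared_tier2:
        return "freeze"            # stop rule: a shared capability is at Tier 2
    if closes_red_risk:
        return "fund-immediately"  # control criticality
    if reuse_domains >= 2:
        return "fund-premium"      # reuse multiplier
    if payback_months < 12:
        return "fund"              # run-rate payback
    return "defer"

# Example: 9-month payback, no red risks, single-domain use -> "fund"
decision = funding_decision(9, False, 0)
```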
Make these explicit in portfolio governance so decisions are fast and repeatable.
Reporting Format (one page leaders will read)
Publish monthly; present at 30/90/180 days. One page per domain.
A. Tier Snapshot
- Capabilities listed with current Tier and arrow (↑ / → / ↓)
- Short note: what moved and why
B. KPI Strip (per capability)
- SLO attainment (%), unit cost (£/unit), control coverage (%)
- Trend sparkline (last 3 months)
C. Funding & Outcomes
- £ approved this period → £ run-rate saved / risk points closed
- Systems retired (count), reuse adoptions (count)
D. Next Moves
- 2–3 upgrades (with dates) that unlock the next tier
Rule: one slide, no exceptions. Links can go to detail, but the decision fits here.
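The KPI strip (section B) is easy to generate rather than hand-craft. A sketch that renders one line per capability with a 3-month sparkline; the field layout and sample numbers are illustrative:

```python
def kpi_strip(name, slo_pct, unit_cost_gbp, control_pct, last3_slo):
    """Render one KPI-strip line: SLO attainment, unit cost, control
    coverage, plus a 3-month sparkline of SLO attainment."""
    blocks = "▁▂▃▄▅▆▇█"
    lo, hi = min(last3_slo), max(last3_slo)
    span = (hi - lo) or 1.0  # avoid divide-by-zero on a flat trend
    spark = "".join(blocks[int((v - lo) / span * (len(blocks) - 1))]
                    for v in last3_slo)
    return (f"{name:<22} SLO {slo_pct:>5.1f}%  "
            f"£{unit_cost_gbp:.2f}/unit  controls {control_pct:>5.1f}%  {spark}")

# Example line with made-up numbers for Customer Onboarding
line = kpi_strip("Customer Onboarding", 94.0, 3.40, 96.5, [88.0, 91.0, 94.0])
```

One generated line per capability keeps the page honest: if the numbers aren't in the register, they can't appear on the slide.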
Example Outcome Heatmap
Domain: Customer Growth (Month-End)
Note: colour-code by Tier (0–3) and add ↑/→/↓ for momentum. The Next Move column is your funding conversation.
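If you don't have a BI tool handy, the heatmap works as plain text. A sketch with hypothetical Customer Growth rows (the capabilities and moves below are illustrative, not from a real engagement); the tier number stands in for the colour:

```python
# Hypothetical month-end rows for the Customer Growth domain.
ROWS = [
    # (capability, tier, momentum, next move)
    ("Customer Onboarding", 1, "↑", "Consolidate IDV vendors"),
    ("Lead Management",     2, "→", "Automate control evidence"),
    ("Campaign Execution",  0, "↓", "Assign single owner; define SLOs"),
]

def heatmap(rows):
    """Render the Outcome Heatmap as a plain-text table:
    one line per capability, momentum arrow, next move."""
    lines = [f"{'Capability':<22} Tier  Momentum  Next Move"]
    for name, tier, arrow, move in rows:
        lines.append(f"{name:<22} {tier}     {arrow}         {move}")
    return "\n".join(lines)

table = heatmap(ROWS)
```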
Minimal artifacts you actually need
- Capability SLO Card (front page for each capability)
- Outcome Tier Register (one line per capability)
- Upgrade Backlog (only items that move a tier)
- Duplicate Spend Log (systems retired, vendors consolidated)
Everything else is optional.
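For the SLO Card, a one-screen data structure is enough. This is a minimal sketch of what the front page could carry; the field names are my own, not a standard schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SLOCard:
    """Minimal Capability SLO Card: the front page for one capability."""
    capability: str
    owner: str                     # single named owner
    tier: int                      # 0-3, from the Outcome Tier rubric
    slos: dict = field(default_factory=dict)        # e.g. {"p95_latency_ms": 200}
    slo_attainment_pct: float = 0.0
    unit_cost_gbp: Optional[float] = None           # None until estimated
    control_coverage_pct: float = 0.0
    next_moves: list = field(default_factory=list)  # only items that move a tier

# Example card with illustrative values
card = SLOCard("Payments Execution", "J. Doe", 2,
               slos={"p95_latency_ms": 200, "failure_rate_pct": 0.1})
```

If a field doesn't change a funding decision, it doesn't belong on the card.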
Avoiding Ivory-Tower Discussions
Here’s a timeline to maintain the cadence (and avoid unnecessary discussions):
Week 1 – 4: Baseline & Focus
- Pick one domain (e.g., onboarding, claims, payments).
- Write SLO cards for top 3–5 capabilities; estimate unit cost; assess control coverage.
- Assign a Tier (0–3) and publish the first heatmap.
Week 5 – 8: Prove Movement
- Agree upgrade path per capability (what unlocks the next tier).
- Fund two actions that deliver run-rate savings or control closure.
- Retire at least one duplicate system/vendor; log the savings.
Week 9 – 12: Normalise and scale
- Make the Outcome Heatmap the default report in portfolio reviews.
- Tie funding approvals to Funding Triggers.
- Extend to the next 3–5 capabilities or second domain.
Some challenges I faced (and maybe you will too!)
- “What if we don’t know unit cost yet?” Use a credible estimate (±30%) and improve it. One believable number beats a 40-tab spreadsheet no one trusts.
- “Won’t tiers vary by context?” Yes—tiers are per capability, with context in the SLO card. The rubric keeps language consistent while allowing local targets.
- “How do we avoid gaming?” Publish the heatmap, the duplicate spend log, and before/after results. Transparency beats gaming.
And before I end this, here's something to remember:
Outcome Tiers shift the conversation from "how mature are we?" to "what do we fix next, for how much and by when?"
That's how capability models earn funding - and trust.