Operational Resilience after the Deadline: Turning Paperwork into a Habit (UK)

On 31 March 2025, the UK operational resilience transition period ended. For many firms, that date quietly changed the question from “Are we ready for the deadline?” to “Can we keep proving we’re resilient—week in, week out?”

That distinction matters. Programmes finish. Resilience does not. The FCA’s core expectation remains straightforward but demanding: firms should be able to stay within their impact tolerances for each Important Business Service under severe but plausible disruption, supported by appropriate mapping, testing, investment and Board oversight.

In practice, many organisations are now discovering a new kind of “resilience debt”: artefacts created to pass a milestone that are not yet embedded in business-as-usual decision-making. The result is familiar: an annual scenario-testing scramble, late discovery of third-party fragility, and Board packs that describe activity rather than provide confidence.

This is where Business Architecture can move the conversation beyond documentation and into an operating model.

What "operational resilience" means in plain English?

Operational resilience is your organisation’s ability to:

  • keep the most important services working (or recover them quickly),
  • when something serious happens (IT outage, supplier failure, cyber incident, building access issue, major process breakdown),
  • and stay within limits you’ve agreed in advance.

Those limits are usually called impact tolerances. Think of them as: “How much disruption is the organisation prepared to tolerate before customers, markets, or the firm itself are seriously harmed?”

So operational resilience isn’t the same as “IT resilience.” It includes people, processes, suppliers, and decisions, as well as technology.

Why the period after the deadline is the real test

Many organisations treated the run-up to March 2025 like a project:

  • define the “important services”
  • create maps and documents
  • run tests
  • produce Board packs

That’s understandable. But once the milestone passes, a new risk appears:

You can end up with a set of documents that look good but aren’t used day to day.

If that happens, three things tend to follow:

  1. Change breaks resilience quietly. The organisation evolves—systems are upgraded, suppliers change, teams reorganise—and yesterday’s resilience picture becomes out of date.
  2. Testing becomes a yearly panic. Instead of steady learning, the organisation runs “big bang” tests once a year, finds issues late, and rushes to fix them.
  3. Board assurance becomes thin. Reports describe activity (“we completed X tests”) rather than confidence (“we can stay within tolerance because…”).

The FCA’s intent is not “produce artefacts.” It is “be able to remain within tolerance under severe but plausible disruption,” supported by mapping, testing and governance.

A helpful mindset shift: treat “Important Business Services” as real services (not labels)

Regulators often use the phrase Important Business Service. Don’t let the wording put you off. It simply means:

A service that matters so much that if it fails badly, customers or markets are harmed.

Examples (illustrative, not a checklist):

  • making and receiving payments
  • claims handling
  • customer access to essential support
  • trading and settlement activities

Now for the key point: To manage resilience properly, you have to treat these as end-to-end services, not as a department or a system.

An end-to-end service has multiple moving parts:

  • front-line teams
  • processes and handoffs
  • data and technology
  • suppliers and outsourced partners
  • controls and approvals

When you see it this way, resilience becomes a practical management topic: What must work, and what do we do when it doesn’t?

What an "always-on" resilience approach looks like?

Think of this as moving from “a folder of documents” to “a habit built into how we run the business.”

1. Name the service clearly and assign one accountable owner

Each important service needs:

  • a clear description customers would recognise
  • a single accountable business owner (not “shared by everyone”)

This avoids the classic failure mode: everyone contributes, no one owns.
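If your service register currently lives in a spreadsheet, one lightweight way to enforce single ownership is to make the owner a required, single-valued field. A minimal sketch in Python, purely illustrative (the field names are assumptions, not regulatory terms):

```python
from dataclasses import dataclass

# Illustrative only: field names are assumptions, not FCA terminology.
@dataclass(frozen=True)
class ImportantBusinessService:
    name: str         # a name customers would recognise
    description: str  # what the service delivers, end to end
    owner: str        # exactly one accountable business owner, by design

payments = ImportantBusinessService(
    name="Outbound payments",
    description="Customers can make payments from their accounts",
    owner="Head of Payments Operations",
)
```

The point is the shape, not the tooling: a single owner field makes “shared by everyone” structurally impossible.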

2. Agree the disruption limit in business terms

Instead of technical metrics, describe tolerance in terms people understand, for example:

  • “customers can still submit a claim and receive acknowledgement within X”
  • “payments can be processed within Y”
  • “service can be restored within Z”

This makes it something leaders can discuss and fund.
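It can help to record the agreed business wording and the measurable limit side by side, so tests can check the number while leaders discuss the sentence. A hypothetical sketch (the 24-hour figure is invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImpactTolerance:
    statement: str           # the business-language limit leaders agreed
    max_outage_hours: float  # the measurable expression of that limit

    def breached(self, outage_hours: float) -> bool:
        """Would a disruption of this length exceed the agreed limit?"""
        return outage_hours > self.max_outage_hours

claims = ImpactTolerance(
    statement="Customers can submit a claim and receive "
              "acknowledgement within 24 hours",
    max_outage_hours=24.0,
)
print(claims.breached(outage_hours=30))  # True: beyond tolerance
```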

3. Identify what the service depends on (and keep that list current)

You do not need a complicated diagram to begin with. Start with a disciplined list:

  • key teams / roles
  • key processes
  • key systems and data
  • key locations
  • key suppliers / outsourced services

Most firms know this at a high level. The value comes from keeping it up to date—especially when change happens.
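The discipline here is freshness more than format. Even a flat list can carry a “last reviewed” date and a simple staleness check, as in this sketch (the 180-day review cycle is an assumption, not a rule):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Dependency:
    category: str      # "team", "process", "system", "location", "supplier"
    name: str
    last_reviewed: date

def stale(dep: Dependency, max_age: timedelta = timedelta(days=180)) -> bool:
    """Flag dependencies nobody has confirmed recently."""
    return date.today() - dep.last_reviewed > max_age

deps = [
    Dependency("system", "Claims platform", date(2025, 4, 1)),
    Dependency("supplier", "Print and post provider", date(2024, 6, 1)),
]
for dep in deps:
    if stale(dep):
        print(f"Review needed: {dep.category} / {dep.name}")
```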

4. Decide where you will detect, contain, and recover

Resilience is not only “restart the system.” It includes:

  • early warning signals (what tells us the service is degrading?)
  • containment (how do we stop impact spreading?)
  • workarounds (can we operate manually for a period?)
  • communications (how do we keep customers informed?)
  • recovery steps (what comes back first, and why?)

This is where resilience becomes operationally real.
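One practical implication: the early warning signal should fire well inside the tolerance, not at it, so there is time to contain and communicate. A sketch of that idea (the thresholds and wording are illustrative choices):

```python
def service_status(outage_hours: float, tolerance_hours: float,
                   warn_fraction: float = 0.5) -> str:
    """Classify a disruption long before the agreed limit is reached.

    warn_fraction = 0.5 is an illustrative choice: raise the alarm once
    half the tolerance is consumed, leaving time to act.
    """
    if outage_hours > tolerance_hours:
        return "BREACH: invoke recovery and notify as agreed"
    if outage_hours > warn_fraction * tolerance_hours:
        return "DEGRADING: contain impact, start workarounds and comms"
    return "OK: keep monitoring"

print(service_status(outage_hours=14, tolerance_hours=24))  # DEGRADING
```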

5. Test in a steady rhythm, not once a year

Most organisations can avoid the “annual scramble” by adopting a simple cadence:

  • regular, smaller tests focused on real weak points
  • tests triggered by major changes (new supplier, migration, reorganisation)
  • learning captured and turned into a funded backlog

This makes testing feel less like an exam and more like maintenance.
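The “tests triggered by major changes” point is straightforward to operationalise: keep a small mapping from change types to the scenario tests they should trigger, and consult it at change approval. An illustrative sketch (the change taxonomy is an assumption; yours will differ):

```python
# Illustrative mapping from change type to the extra tests it triggers.
CHANGE_TRIGGERED_TESTS = {
    "new_supplier": ["supplier failure scenario"],
    "system_migration": ["failover test", "data recovery test"],
    "reorganisation": ["key-person absence walkthrough"],
}

def tests_for_change(change_type: str) -> list[str]:
    """Return the extra tests a change should trigger (empty if none)."""
    return CHANGE_TRIGGERED_TESTS.get(change_type, [])

print(tests_for_change("system_migration"))
# ['failover test', 'data recovery test']
```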

6. Turn “issues found” into a managed investment plan

Every test finds gaps. The organisations that mature well do one thing differently:

They treat resilience fixes as a portfolio of investments, prioritised by:

  • severity of harm if it fails
  • how close you are to breaching tolerance
  • cost and time to fix

That gives the Board a clear choice: fund it, accept the risk, or change the service design.
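Any scoring model will do, as long as it is consistent and visible to the Board. The sketch below uses a simple ratio (severity × proximity ÷ cost), which is one possible weighting rather than a standard:

```python
from dataclasses import dataclass

@dataclass
class ResilienceFix:
    name: str
    harm_severity: int        # 1 (low) .. 5 (severe harm if it fails)
    tolerance_proximity: int  # 1 (comfortable) .. 5 (close to breach)
    cost_effort: int          # 1 (cheap and quick) .. 5 (expensive, slow)

def priority(fix: ResilienceFix) -> float:
    """Higher means fund sooner: harm and proximity to breach push the
    score up, cost and effort pull it down (illustrative weighting)."""
    return (fix.harm_severity * fix.tolerance_proximity) / fix.cost_effort

backlog = [
    ResilienceFix("Second print supplier", harm_severity=4,
                  tolerance_proximity=5, cost_effort=2),
    ResilienceFix("Automate manual workaround", harm_severity=3,
                  tolerance_proximity=2, cost_effort=4),
]
for fix in sorted(backlog, key=priority, reverse=True):
    print(f"{priority(fix):5.1f}  {fix.name}")
```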

Where Business Architecture helps

Business architecture, done well, is simply structured thinking about:

  • what the business does end-to-end
  • what it relies on
  • who owns what
  • what needs to change and why
  • how to prioritise investment

For operational resilience, that translates into very practical outcomes:

  • services are defined consistently
  • dependencies (including suppliers) are visible
  • controls are placed in the right parts of the service
  • testing becomes repeatable
  • Boards get confidence, not just activity updates

No one needs to know modelling notation for this to work.

A quick self-check: are you “always-on” yet?

If you answer “no” to any of these, you’re likely still in “deadline mode”:

  • Do we update our service dependencies when major change happens?
  • Do we have a routine testing calendar (not a once-a-year event)?
  • Can we explain, simply, how we’d keep each important service within tolerance?
  • Do we manage resilience improvements as a prioritised investment backlog?
  • Does the Board pack clearly show confidence levels and key risks?

The opportunity post-March 2025

The transition period forced organisations to take operational resilience seriously. The next step is making it ordinary—part of business-as-usual decision-making.

The simplest way to do that is to stop treating “Important Business Services” as compliance labels and start treating them as real services with real owners, real dependencies, real controls, and a steady routine of testing and improvement.

That is what an “always-on” resilience operating model looks like.