From Rules to Resilience – The Governance Ladder from Formal Compliance to Substantive Control

Last Updated on 19/02/2026 by 75385885


Regulators no longer ask, “Are you compliant?”
They ask, “Can you prove your controls work under stress?”

That shift — subtle in language, radical in consequence — defines modern governance. Whether we speak about DORA, AI regulation, CSRD assurance, AML supervision or operational resilience, the underlying supervisory expectation is the same: documentation is not enough.

Compliance is not binary. It exists on a ladder.

Many organisations believe they are compliant. Few can demonstrate substantive control when challenged. And fewer still can show that the regulatory intent — resilience, fairness, transparency, protection of stakeholders — is achieved in practice.

This article explains the full governance ladder:

  1. Rules-Based Compliance

  2. Design-Level Compliance

  3. Operating Effectiveness

  4. Demonstrable Control

  5. Substantive Compliance

Each level builds on the previous one. But each also exposes the limitations of the one below.


Level 1 – Rules-Based Compliance

“Have we implemented the rule?”

Rules-based compliance is the most visible and often the most comforting form of governance. It is built on prescriptive regulation. Articles are mapped. Policies are written. Responsibilities are assigned. Regulatory paragraphs find their mirror image in internal procedures.

In a Digital Operational Resilience Act (DORA) context, for example, a mid-sized financial institution may produce:

  • A complete mapping of DORA articles to internal ICT policies.

  • An incident reporting procedure aligned with regulatory timeframes.

  • A third-party risk management framework referencing the relevant technical standards.

  • A board-approved operational resilience policy.

From a documentary perspective, everything exists. The rule has been implemented.

But this is only the first rung.

Imagine the same institution experiences a significant cyber incident. The reporting template exists, but no one has ever tested the internal escalation chain. The board receives fragmented information. Third-party service providers are slow to respond. The documentation is compliant — yet the organisation struggles in real time.

The weakness of rules-based compliance is not that it is wrong. It is necessary. Without it, governance collapses into improvisation.

Its weakness lies elsewhere: it answers the question “Have we implemented the rule as written?” but not “Does the organisation function under pressure?”

Rules-based compliance creates structure. It does not guarantee resilience.

Read more on DORA in our blog: DORA and the Boardroom – Why Digital Operational Resilience Has Become a Core Governance Responsibility.


Level 2 – Design-Level Compliance

“Are our controls properly designed?”

The second rung moves beyond rule mapping and asks whether controls are conceptually adequate. This is familiar territory in internal control frameworks such as COSO or SOX environments. The distinction between “design effectiveness” and “operating effectiveness” has been part of financial control discourse for decades.

[Image: governance compliance ladder – not a real Piet Mondriaan]

Design-level compliance means the architecture of control is sound on paper.

Consider a bank implementing segregation of duties in its payments process. The design requires:

  • Initiation, approval and release to be performed by separate individuals.

  • System permissions configured accordingly.

  • Exceptions logged and reviewed.
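The segregation-of-duties requirement above can be expressed as an automated check rather than a paper rule. A minimal sketch in Python — the record fields (`initiated_by`, `approved_by`, `released_by`) are illustrative assumptions, not any real system's schema:

```python
# Sketch of a segregation-of-duties check for a payments workflow.
# Field names are illustrative assumptions, not a real system's schema.

def sod_violations(payments):
    """Return IDs of payments where one person held two or more roles."""
    violations = []
    for p in payments:
        roles = [p["initiated_by"], p["approved_by"], p["released_by"]]
        if len(set(roles)) < len(roles):  # same actor appears twice
            violations.append(p["id"])
    return violations

payments = [
    {"id": "P-001", "initiated_by": "ana", "approved_by": "ben", "released_by": "cara"},
    {"id": "P-002", "initiated_by": "ana", "approved_by": "ana", "released_by": "cara"},
]
print(sod_violations(payments))  # → ['P-002']
```

A check like this belongs in the control design itself: if it runs against live permission data rather than a policy document, the design and its enforcement cannot drift apart silently.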

Or consider an AI governance framework in a technology company. The design requires:

  • AI systems to be registered in a central inventory.

  • High-risk models to undergo formal risk assessment.

  • Human-in-the-loop oversight for sensitive decisions.

On paper, these designs are robust. They reflect regulatory intent. They address risk logically.

Yet design adequacy can coexist with practical weakness.

In the payments example, system configurations may not actually enforce the intended segregation. Temporary access rights may accumulate. Overrides may not be independently reviewed.

In the AI example, the inventory may exist but be incomplete. Risk assessments may be templated exercises rather than substantive analysis. Human oversight may exist in theory but be operationally marginal.

Design-level compliance corrects the most superficial weakness of rules-based compliance: it ensures that the architecture is not merely present, but logically constructed.

However, it still lives in abstraction.

Controls may be beautifully designed — and still fail in practice.

Here is a guide by the European Banking Authority: Guidelines on ICT and security risk management.


The Subtle Trap Between Level 1 and Level 2

Many boards believe that once policies are aligned with regulation and control frameworks are designed properly, the organisation is compliant in a meaningful sense.

This is a governance illusion.

Documentation and design are preconditions of compliance. They are not evidence of functioning control.

In fact, regulators increasingly view design-level adequacy as a baseline expectation. It is no longer impressive. It is assumed.

The supervisory conversation has moved upward.

The real questions now are:

  • Do controls function consistently?

  • Can they withstand change, stress and scale?

  • Can management prove that they work?

  • And most critically: does the organisation achieve the intended regulatory outcome?

To answer those questions, we must climb further.


Level 3 – Operating Effectiveness

“Do our controls function in practice?”

Operating effectiveness moves governance from theory to behaviour. It asks whether controls actually operate as intended during normal business conditions.

This is the classic internal audit distinction:

  • A control may be well designed.

  • But does it function consistently, reliably and without circumvention?

Let us take a concrete example: ESG reporting under CSRD.

A large energy company implements a comprehensive emissions inventory process:

  • Scope 1 and 2 data collected monthly.

  • Scope 3 data estimated using established methodologies.

  • Data validation checks embedded in spreadsheets.

  • Quarterly internal review by the sustainability committee.


Under steady-state conditions, the system works. Reports are produced on time. External auditors find minor issues but no structural deficiencies. The company is compliant. Controls operate.

Now introduce change.

The company acquires a foreign subsidiary mid-year. Its ERP system is different. Emissions factors differ. Data definitions are not aligned. Integration is rushed to meet reporting deadlines.

Suddenly:

  • Validation checks no longer capture inconsistent unit conversions.

  • Scope 3 assumptions are applied without local adaptation.

  • Reconciliations break down.
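The first of these breakdowns — silent unit mismatches — is exactly what a validation step that fails loudly would have surfaced. A minimal sketch, where the units and conversion factors are assumptions for illustration:

```python
# Illustrative validation step: normalise emission figures to tonnes CO2e
# and reject records with unmapped units instead of passing them through.
# Units and conversion factors here are assumptions for the example.

TO_TONNES = {"t": 1.0, "kg": 0.001}  # factors to tonnes CO2e

def normalise(records):
    clean, rejected = [], []
    for r in records:
        factor = TO_TONNES.get(r["unit"])
        if factor is None:  # e.g. a unit from the acquired subsidiary's ERP
            rejected.append(r)
        else:
            clean.append({"site": r["site"], "tonnes": r["value"] * factor})
    return clean, rejected

clean, rejected = normalise([
    {"site": "NL-01", "value": 1200, "unit": "kg"},
    {"site": "XX-07", "value": 3.5, "unit": "short_ton"},  # unmapped unit
])
```

The design choice matters: an unmapped unit becomes a visible rejection that someone must resolve, not a number that flows into the report with the wrong magnitude.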

The control design did not change. But the operating context did. The system functioned under normal conditions, yet lacked resilience.

Operating effectiveness is therefore conditional. It answers:

“Do controls work in the environment they were designed for?”

It does not answer:

“Do they work when the environment shifts?”

In financial services, similar dynamics occur under DORA. An incident response procedure may function during minor disruptions. But when a critical cloud provider fails across multiple jurisdictions, escalation paths overload, communication lines freeze, and documentation trails fragment.

Operating effectiveness under routine conditions is necessary — but fragile.


The Illusion of Stability

At this stage, many organisations declare victory.

Internal audit has tested controls. Sample testing shows evidence. Reports indicate satisfactory performance. Management signs off.

But operating effectiveness testing is often backward-looking and limited in scope. It tests whether controls worked in the past — not whether they will withstand future volatility.

And regulators are increasingly aware of that limitation.

Which leads us to the fourth rung.


Level 4 – Demonstrable Control

“Can we prove that our controls work — especially under scrutiny?”

Demonstrable control introduces a qualitative shift. It is not merely about controls functioning; it is about the ability to evidence, reproduce and defend that functionality.

Under DORA, this shift is explicit. Supervisors do not simply request policies. They request:

  • Incident logs.

  • Penetration test results.

  • Threat-led testing evidence.

  • Third-party concentration analysis.

  • Board-level challenge documentation.

Demonstrable control requires traceability.

Consider a technology firm deploying AI models for credit scoring.

At Level 3 (Operating Effectiveness), the firm can show:

  • Risk assessments were conducted.

  • Human review processes exist.

  • Monitoring dashboards track model performance.

At Level 4 (Demonstrable Control), the firm must also show:

  • Version history of the deployed model.

  • Documentation of training data lineage.

  • Logs of human override decisions.

  • Evidence of drift monitoring and corrective action.

  • Escalation records when anomalies occurred.
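Concretely, the kind of traceability record this evidence implies might look like the following sketch. The field names are illustrative assumptions, not a prescribed schema:

```python
# Sketch of a Level 4 traceability record: every automated decision is logged
# with model version, a hash of its inputs and any human override, so the
# governance chain can be reconstructed later. Field names are assumptions.

import hashlib
import json

def decision_record(model_version, features, outcome, override=None):
    payload = json.dumps(features, sort_keys=True).encode()  # canonical form
    return {
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload).hexdigest(),  # lineage anchor
        "outcome": outcome,
        "override": override,  # human-in-the-loop decision, if any
    }

log = [decision_record("credit-v4.2", {"income": 52000, "dti": 0.31}, "deny")]

# Months later, the same inputs must reproduce the same hash,
# regardless of the order in which the features were supplied:
assert log[0]["input_hash"] == decision_record(
    "credit-v4.2", {"dti": 0.31, "income": 52000}, "deny")["input_hash"]
```

The hash of canonically serialised inputs is what makes the record reproducible: anyone re-running the case can verify they are looking at the same decision.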

In other words, the organisation must be able to reconstruct the governance chain.

When a regulator asks, “Why was this customer denied credit?”, the answer cannot be “The model predicted high risk.” It must be traceable, reproducible and defensible.

The same applies to operational resilience.

Imagine a bank that claims it can recover from a major ICT outage within four hours. At Level 3, the process functions during periodic tests. At Level 4, the bank must provide:

  • Results of scenario-based stress tests.

  • Evidence of fallback systems functioning.

  • Logs demonstrating actual recovery times during simulated disruption.

  • Documentation showing board oversight of tolerance levels.
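The recovery-time evidence above lends itself to an automated comparison against the stated tolerance. A minimal sketch, with the scenario data invented for illustration:

```python
# Illustrative check: compare logged recovery times from simulated disruptions
# against a board-approved four-hour tolerance. Scenario data are assumptions.

RTO_HOURS = 4.0  # stated recovery time objective

def breaches(test_results):
    """Return scenarios whose measured recovery exceeded the tolerance."""
    return [r["scenario"] for r in test_results
            if r["recovery_hours"] > RTO_HOURS]

results = [
    {"scenario": "datacentre failover", "recovery_hours": 2.5},
    {"scenario": "cloud provider outage", "recovery_hours": 6.0},
]
print(breaches(results))  # → ['cloud provider outage']
```

A breach list produced from actual test logs, rather than a self-declared "RTO met", is precisely the difference between Level 3 assertion and Level 4 evidence.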

Demonstrable control transforms compliance from declarative to empirical.


Why Demonstrable Control Is Not Yet Substantive Compliance

Even demonstrable control has limits.

An organisation may be able to prove that controls operate and that testing occurs. Yet the regulatory objective may still not be achieved in substance.

For example:

  • A financial institution may test cyber resilience annually and document all results.

  • Stress scenarios may be designed conservatively to avoid embarrassment.

  • Third-party risk assessments may be performed but avoid challenging strategic vendor concentration.

All controls function. Evidence exists. Yet the organisation remains structurally fragile.

Demonstrable control proves that governance mechanisms exist and operate. It does not automatically prove that the organisation is resilient in the real world.

That final leap requires moving beyond control evidence to outcome integrity.

Read more from KIWA, a Dutch TIC (Testing, Inspection and Certification) company, on: ISAE 3402: Demonstrable IT risk assurance.


The Climb Becomes Harder

Climbing from Rules-Based to Design-Level is procedural.

Climbing from Design-Level to Operating Effectiveness is behavioural.

Climbing from Operating Effectiveness to Demonstrable Control is evidential.

But climbing to the final rung — Substantive Compliance — is existential. It forces the organisation to confront whether it truly meets the regulatory intent.

And that is where supervisory philosophy has decisively moved.


Substantive Compliance and the Shift from Compliance Culture to Control Culture

We have climbed four levels:

  1. Rules-Based Compliance

  2. Design-Level Compliance

  3. Operating Effectiveness

  4. Demonstrable Control

Each level strengthens governance. Each closes a gap left by the previous one.

But none of them, in isolation, guarantees that the organisation truly achieves what regulation is meant to secure.

That final step requires moving from proof of control to proof of outcome.


Level 5 – Substantive Compliance

“Have we achieved the regulatory intent in practice?”

Substantive compliance is the highest rung on the ladder. It is not about documentation. It is not about architecture. It is not even about evidence of functioning control.

It is about outcome integrity.

Substantive compliance exists when the organisation, under stress and scrutiny, still fulfils the objective the regulation was designed to secure.

Let us return to DORA.

DORA’s intent is operational resilience — continuity of critical services, even under severe ICT disruption.

A large financial institution may demonstrate:

  • Comprehensive rule mapping (Level 1).

  • Robust ICT risk design (Level 2).

  • Controls operating under normal conditions (Level 3).

  • Evidence of stress testing and incident logs (Level 4).

Yet imagine a real systemic cloud outage occurs.

  • Multiple critical functions depend on a single provider.

  • Recovery time objectives are technically met — but only after customer disruption.

  • Concentration risk had been documented but never structurally addressed.

The organisation can show compliance at every lower level. But the regulatory intent — continuity without material customer harm — has not been achieved.

Substantive compliance asks a harder question:

Did governance prevent harm, or did it merely document vulnerability?

The same logic applies in AI governance.

An institution may:

  • Maintain an AI inventory.

  • Perform risk assessments.

  • Document human oversight.

  • Log overrides and monitor drift.

Yet if systemic bias persists in lending decisions despite these controls — if certain groups are disproportionately rejected without adequate mitigation — the institution may be compliant in structure but non-compliant in substance.

Substantive compliance requires that regulatory purpose — fairness, resilience, transparency — is actually realised.

Read more in our blog: AI-Governance: From Experiment to Executive Accountability, or read research output from Erasmus University in the Netherlands on: Notified and substantive compliance with EU law in enlarged Europe: evidence from four policy areas.


The Governance Ladder in One View

To clarify the progression, the ladder can be summarised as follows:

  1. Rules-Based Compliance – “Have we implemented the rule?”

  2. Design-Level Compliance – “Are our controls properly designed?”

  3. Operating Effectiveness – “Do our controls function in practice?”

  4. Demonstrable Control – “Can we prove that our controls work?”

  5. Substantive Compliance – “Have we achieved the regulatory intent in practice?”

Each level incorporates the previous one. But each exposes its limitation.


The Regulatory Inflection Point

Supervision across sectors is moving decisively upward on this ladder.

In financial services, supervisors increasingly ask not for policies, but for stress evidence. In ESG reporting, assurance providers no longer accept narrative declarations; they request data lineage and recalculation ability. In AI regulation, transparency and oversight are insufficient without demonstrable impact control.

This is not regulatory activism. It is systemic realism.

Complex systems — ICT infrastructures, global supply chains, machine learning models — do not fail at the documentation level. They fail at interaction points, under stress, and across dependencies.

Therefore regulators escalate their expectations from compliance in form to compliance in function — and ultimately to compliance in outcome.

Organisations that remain anchored at the lower rungs may feel compliant. Under scrutiny, they discover they are merely prepared for inspection, not for disruption.


Practical Implications for Boards

Boards must explicitly locate their organisation on this ladder.

Five practical questions can anchor that discussion:

  1. Are we satisfied with rule mapping, or have we tested performance under stress?

  2. Do we distinguish between control design and control operation?

  3. Can we reproduce and defend decisions months later with full traceability?

  4. Have we tested systemic dependencies — not just isolated controls?

  5. If a regulator or court examined a real-world failure, could we demonstrate that governance genuinely worked?

The danger lies not in non-compliance, but in overconfidence.

Many governance failures occur in organisations that genuinely believed they were compliant.



From Compliance Culture to Control Culture

Rules-based compliance creates order.
Design-level compliance creates structure.
Operating effectiveness creates discipline.
Demonstrable control creates defensibility.
Substantive compliance creates legitimacy.

The shift regulators are driving — particularly under DORA and similar frameworks — is a shift from compliance culture to control culture.

Compliance culture asks:
“What does the rule require?”

Control culture asks:
“Does the system hold when it matters?”

That difference determines whether governance is cosmetic or consequential.

Substantive compliance is not achieved through documentation alone. It requires:

  • governance maturity,

  • willingness to test assumptions,

  • board-level engagement,

  • challenge of management comfort,

  • and structural courage to address inconvenient vulnerabilities.

In the end, the governance ladder is not a regulatory checklist. It is a diagnostic tool.

Where you stand on it determines not only supervisory outcomes — but organisational resilience, stakeholder trust and strategic durability.

FAQs – Governance Compliance Ladder: From Rules-Based Compliance to Substantive Control

FAQ 1 – What is the difference between rules-based compliance and substantive compliance?

Rules-based compliance focuses on implementing prescriptive regulatory requirements. It ensures that policies, procedures and documentation align with the letter of the law. Substantive compliance goes further. It asks whether the organisation actually achieves the regulatory intent in practice — particularly under stress. An organisation may be fully rules-compliant yet still fail to prevent operational disruption, biased decision-making or systemic risk exposure. Substantive compliance requires that governance works in reality, not only on paper.

FAQ 2 – Why is design-level compliance not sufficient for regulators?

Design-level compliance ensures that controls are logically structured and appropriately documented. However, regulators increasingly distinguish between control design and control performance. A control may appear robust in documentation but fail during system migration, crisis or rapid growth. Supervisors therefore look for evidence that controls function consistently and can withstand stress. Design adequacy is now treated as a baseline expectation, not a mark of maturity.

FAQ 3 – What does demonstrable control mean under DORA?

Under DORA, demonstrable control means that institutions must provide evidence that ICT risk controls operate effectively. This includes incident logs, testing results, threat-led penetration testing, resilience simulations and third-party risk documentation. It is not enough to state that procedures exist. Institutions must show that controls are monitored, tested and capable of functioning under disruption. Demonstrable control represents a shift from declarative compliance to evidence-based supervision.

FAQ 4 – How does operating effectiveness differ from demonstrable control?

Operating effectiveness confirms that controls function during normal business operations. Demonstrable control goes further by requiring evidence that controls remain effective under scrutiny and stress. Operating effectiveness may be sample-based and backward-looking. Demonstrable control requires traceability, reproducibility and forward-looking resilience testing. The difference lies in the depth and robustness of proof.

FAQ 5 – Can an organisation be compliant but not resilient?

Yes. An organisation may comply with all documented regulatory requirements yet still lack systemic resilience. For example, it may maintain required policies and conduct testing, but fail to address concentration risk or interdependencies. Compliance without resilience reflects maturity at lower levels of the governance ladder. True resilience requires substantive compliance — achievement of regulatory purpose in real-world conditions.

FAQ 6 – How should boards assess where they stand on the governance ladder?

Boards should evaluate governance through escalating questions:
– Have rules been implemented?
– Are controls well designed?
– Do they operate consistently?
– Can effectiveness be evidenced?
– Does the organisation achieve regulatory outcomes under stress?

Board oversight must move beyond documentation review toward stress testing, scenario analysis and challenge of systemic vulnerabilities. Governance maturity depends on the ability to answer all five questions convincingly.
