From Controls to Currency: The Case for Quantified Risk in Security Architecture

Tobias Christen

Every security team I have worked with over the past twenty years has had the same problem. They can tell you which controls they have implemented. They can show you compliance dashboards, maturity scores, and audit findings. What they cannot do is answer the one question the board actually cares about: how much risk are we carrying, in money?

This is not a knowledge gap. It is a structural gap. The frameworks we use to design and assess security — NIST 800-53, ISO 27001, CIS Controls — operate in the language of controls. The business operates in the language of money. Between the two sits a translation problem that most organisations solve with gut feel, traffic-light RAG ratings, and the occasional expensive consultant.

There is a better way. It is called quantified risk analysis, and it is coming to OSA.

The Bridge: From Maturity Scores to Financial Risk

The Controls Trap

Controls-based security has been the dominant paradigm since the early 2000s. Pick a framework. Map your controls. Score your maturity. Report to the board. It works — up to a point.

The problem is that controls are binary or ordinal. You either have multi-factor authentication or you do not. Your patch management maturity is a 3 out of 5. Your network segmentation is partially implemented. These statements are true and useful for operational teams, but they do not tell you what they mean for the business.

Consider two organisations. Both score 3 out of 5 on access control maturity. One is a SaaS startup with 50 employees and no regulated data. The other is a global bank processing billions in daily transactions. The same maturity score carries fundamentally different risk implications. Controls without context are just numbers.

OSA's maturity assessments give you those numbers — and they are a genuine step forward from spreadsheets and consultancy PDFs. But maturity scores are the starting point, not the destination. The destination is understanding what those scores mean in terms of probable loss.

What Quantified Risk Analysis Actually Is

Quantified risk analysis replaces qualitative risk ratings (high/medium/low) with probabilistic financial estimates. Instead of saying "the risk of a data breach is high", you say "there is a 15-30% probability of a data breach in the next 12 months, with an expected loss magnitude between 2 and 15 million euros."

The most established methodology for this is FAIR — Factor Analysis of Information Risk. FAIR decomposes risk into two components:

  • Loss Event Frequency: How often will a threat event occur? This is further decomposed into threat event frequency (how often is the threat active) and vulnerability (how likely is the threat to succeed given existing controls).
  • Loss Magnitude: When it does occur, how much does it cost? This includes primary losses (response, replacement, fines) and secondary losses (reputation, competitive advantage, legal liability).

Both components are expressed as probability distributions, not point estimates. The output is a range — a Monte Carlo simulation of probable annual loss. This is the language finance teams, actuaries, and boards already speak.
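The two-factor decomposition above can be sketched as a small Monte Carlo simulation. This is a minimal illustration, not a calibrated model: the Poisson rate of 0.2 loss events per year and the lognormal magnitude parameters are placeholder assumptions chosen only to show the mechanics.

```python
import math
import random

def poisson(rng, lam):
    """Draw from a Poisson distribution (Knuth's method, fine for small lambda)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_annual_loss(n_sims=100_000, seed=42):
    """FAIR-style Monte Carlo: simulate total loss for n_sims years.

    Loss event frequency: Poisson, expected 0.2 events/year (1-in-5 years).
    Loss magnitude: lognormal per event, median around 2M (heavy right tail).
    All parameters are illustrative placeholders, not calibrated estimates.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_sims):
        events = poisson(rng, lam=0.2)
        losses.append(sum(rng.lognormvariate(14.5, 1.0) for _ in range(events)))
    return losses

annual = sorted(simulate_annual_loss(n_sims=20_000))
eal = sum(annual) / len(annual)        # expected annual loss (the mean)
p95 = annual[int(0.95 * len(annual))]  # 95th-percentile annual loss
```

The output is exactly the shape described in the text: a distribution of probable annual loss, from which you can read off an expected value, tail percentiles, or a full loss exceedance curve.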

Why This Matters Now

Three trends are converging to make quantified risk analysis urgent rather than aspirational.

First, regulatory pressure. DORA requires financial entities to quantify ICT risk and report it in financial terms. The SEC's cybersecurity disclosure rules demand material risk assessment. NIS2 mandates risk-based security measures proportionate to the risk posed. Regulators are no longer satisfied with qualitative heat maps.

Second, board accountability. Directors are increasingly personally liable for cyber risk governance. A traffic-light dashboard is not a defensible basis for fiduciary duty. A probabilistic loss model, backed by data, is.

Third, resource allocation. Security budgets are finite. A CISO who can say "investing 500K in privileged access management reduces our expected annual loss by 3 million" gets budget. A CISO who says "privileged access is a high risk and we need to improve our maturity" gets asked to come back with a business case.

Threats Don't Stand Still

There is a fourth pressure that is less discussed but arguably the most important: the threat landscape is not static, and our risk models need to reflect that.

Consider what has changed in just the last decade. In 2015, ransomware was a nuisance — encrypt a laptop, demand a few hundred dollars in Bitcoin. By 2020, it was a multi-billion dollar criminal industry with Ransomware-as-a-Service operators running affiliate programmes, SLAs, and customer support desks. In 2023, Cl0p exploited MOVEit to compromise 2,500 organisations without encrypting a single file — pure data exfiltration and extortion. In 2025, AI-assisted reconnaissance and polymorphic payloads are compressing attacker dwell times from weeks to days.

Each of these shifts fundamentally changes the loss event frequency side of the FAIR equation. A maturity score of 4 out of 5 on your endpoint protection meant something materially different in 2018 — before credential stuffing at scale, before deepfake-assisted social engineering, before ransomware groups started filing SEC complaints against their own victims. The control has not changed. The threat it faces has.

This is the problem with static assessments. A maturity score is a snapshot. It tells you how well your controls are implemented at a point in time, but it does not tell you whether those controls are still adequate against the threats they were designed to address. A network segmentation architecture that was robust against 2019 lateral movement techniques may be insufficient against 2026 techniques that exploit cloud identity federation to bypass network boundaries entirely.

In FAIR terms, this means loss event frequency distributions need to be continuously recalibrated against current threat intelligence — not set once during an annual risk assessment and left to decay. The threat event frequency for credential-based attacks has increased by an order of magnitude in five years. Any risk model that uses 2021 frequency estimates in 2026 is not conservative — it is wrong.
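The effect of a recalibrated threat event frequency is easy to see in FAIR terms. The sketch below assumes Poisson-distributed attack attempts and holds vulnerability fixed; the specific frequencies (5 versus 50 attempts per year) are illustrative stand-ins for the order-of-magnitude shift described above.

```python
import math

def p_at_least_one_event(tef_per_year: float, vulnerability: float) -> float:
    """Probability of at least one loss event in a year.

    tef_per_year: threat event frequency (attack attempts per year)
    vulnerability: probability an attempt succeeds given current controls
    Assumes Poisson arrivals; both inputs are illustrative, not calibrated.
    """
    lef = tef_per_year * vulnerability  # loss event frequency
    return 1 - math.exp(-lef)           # Poisson: P(N >= 1)

# Same control, same vulnerability — only the threat frequency has moved.
p_old = p_at_least_one_event(tef_per_year=5, vulnerability=0.02)   # ~9.5%
p_now = p_at_least_one_event(tef_per_year=50, vulnerability=0.02)  # ~63%
```

With nothing about the control changing, a tenfold rise in attempt frequency moves the annual probability of a loss event from under 10% to over 60% — which is why stale frequency estimates make a model wrong rather than conservative.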

This has a direct implication for how we build risk quantification into OSA. The platform cannot just translate today's maturity scores into today's risk numbers. It needs to incorporate threat evolution — connecting OSA's pattern-level threat models to external threat intelligence so that risk estimates stay current as the landscape shifts. A reassessment should not only capture changes in your controls. It should capture changes in the threats those controls face.

This is why one-off risk assessments, no matter how rigorous, are insufficient. Quantified risk needs to be continuous, threat-informed, and architecturally grounded. That is what we are building toward.

The Missing Bridge

Here is where it gets interesting for security architects.

FAIR and its quantified risk peers are excellent at modelling risk in financial terms. But they have a blind spot: they do not tell you which controls to implement, in what order, at which architectural layer. They quantify the problem but do not prescribe the solution.

Conversely, frameworks like NIST 800-53 and security architecture patterns like OSA's prescribe controls in detail but do not quantify the risk reduction each control delivers. They tell you what to build but not what it is worth.

The gap between these two worlds is where most security programmes lose coherence. The risk team produces a FAIR model showing 12 million euros of annualised information risk. The security architecture team implements controls from a compliance framework. Neither can prove that the controls actually reduce the modelled risk, because there is no shared data model connecting the two.

This is what OSA is uniquely positioned to solve.

Why OSA Is the Right Foundation

OSA already has the structured data that makes this bridge possible:

  • 43 security patterns with explicit threat models — not generic risk statements, but specific attack scenarios mapped to architectural controls
  • 191 NIST 800-53 Rev 5 controls with cross-framework mappings to 19 compliance frameworks
  • Maturity assessments that produce quantitative scores per control area, per pattern
  • Benchmark data showing how your scores compare to industry peers

The structure is there. Each pattern connects threats to controls. Each control maps to compliance frameworks. Each assessment produces measurable scores. What is missing is the financial layer — the translation from "your access control maturity is 2.5" to "your probable annual loss from credential-based attacks is 4-8 million, and improving to maturity 4 would reduce that by 60%."

That translation requires a risk quantification model wired into the same data graph. Threat scenarios need loss frequency distributions. Control areas need effectiveness curves. Assessment scores need to modulate vulnerability estimates. It is an engineering problem, not a research problem — the theory exists, the data exists, the question is connecting them.
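One way to picture that wiring is a maturity-to-vulnerability effectiveness curve feeding an expected-loss calculation. Everything here is a hypothetical sketch: the exponential curve shape, the endpoint vulnerabilities, and the threat frequency and per-event loss figures are assumptions for illustration, not OSA calibration data.

```python
def vulnerability_from_maturity(score: float, v_min=0.02, v_max=0.60) -> float:
    """Map a 0-5 maturity score to a vulnerability estimate.

    Assumes an exponential effectiveness curve: each maturity point cuts
    residual vulnerability by a constant factor. Curve shape and endpoints
    are illustrative assumptions, not calibrated values.
    """
    if not 0 <= score <= 5:
        raise ValueError("maturity score must be in [0, 5]")
    return v_max * (v_min / v_max) ** (score / 5)

# Hypothetical credential-attack scenario for one pattern's threat model
tef = 50               # attempted attacks per year (assumed)
mean_loss = 1_200_000  # mean loss per successful attack (assumed)

eal_25 = tef * vulnerability_from_maturity(2.5) * mean_loss
eal_40 = tef * vulnerability_from_maturity(4.0) * mean_loss
reduction = 1 - eal_40 / eal_25  # roughly 60% under these assumptions
```

The point is not the specific numbers but the data flow: an assessment score modulates a vulnerability estimate, which combines with a threat scenario's frequency and magnitude to produce a financial figure — and a projected figure for any target maturity.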

What Is Coming

We are actively working on integrating quantified risk analysis into OSA's architecture. This is not a bolt-on feature. It is a fundamental extension of the data model that connects every pattern, control, threat, and assessment to a financial risk calculus.

The vision:

  • Risk-informed assessments: Complete a maturity assessment and see not just your gaps, but the estimated financial exposure each gap represents
  • Prioritised remediation: Rank control improvements by risk reduction per euro invested, not by arbitrary severity ratings
  • Board-ready quantification: Export a risk posture report that speaks the language of probable loss, not maturity scores
  • Scenario modelling: "What happens to our risk if we invest in zero trust architecture?" answered with a probability distribution, not a guess
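The prioritisation item above reduces to a simple ranking once each candidate control has a modelled expected annual loss before and after implementation. The figures below are invented placeholders; only the ranking logic is the point.

```python
def rank_by_risk_reduction(options):
    """Rank control investments by expected-loss reduction per euro spent.

    options: dicts with 'name', 'cost', 'eal_before', 'eal_after'
    (expected annual loss in euros). All figures here are illustrative.
    """
    def ratio(opt):
        return (opt["eal_before"] - opt["eal_after"]) / opt["cost"]
    return sorted(options, key=ratio, reverse=True)

candidates = [
    {"name": "privileged access mgmt", "cost": 500_000,
     "eal_before": 8_000_000, "eal_after": 5_000_000},   # ratio 6.0
    {"name": "network segmentation", "cost": 1_200_000,
     "eal_before": 6_000_000, "eal_after": 4_500_000},   # ratio 1.25
    {"name": "endpoint hardening", "cost": 300_000,
     "eal_before": 3_000_000, "eal_after": 2_400_000},   # ratio 2.0
]
ranked = rank_by_risk_reduction(candidates)
```

Under these assumed numbers, privileged access management tops the list despite not being the cheapest option — which is precisely the argument that severity ratings alone cannot make.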

We are fortunate to be working with people who have deep expertise in exactly this intersection — practitioners who have built quantified risk models for real organisations and understand both the FAIR methodology and the architectural control landscape. More on that soon.

Start With What You Can Measure Today

Quantified risk analysis does not require you to wait for the perfect model. It starts with understanding your current posture. If you have not already done so, run a maturity assessment against one of OSA's patterns. The scores you get today become the baseline for the quantified risk view tomorrow.

The patterns that lend themselves best to risk quantification are those with explicit threat models and clear control-to-threat mappings.

Your maturity scores, combined with the threat models in these patterns, are the raw material for quantified risk analysis. The better your baseline data, the more meaningful the risk quantification will be when it arrives.

The Bigger Picture

Security architecture and risk quantification have been separate disciplines for too long. Architects design controls without quantifying their risk reduction. Risk analysts model losses without prescribing architectural remediation. Auditors assess both without a shared data model to reconcile the two.

OSA's structured approach — patterns, controls, threats, assessments, benchmarks — was always heading toward this integration. The control graph is the skeleton. Risk quantification is the nervous system that makes it intelligent.

We are building toward a world where a security architect can design a pattern, a risk analyst can quantify its impact, and an auditor can verify its effectiveness — all using the same data, the same language, and the same platform.

That is the future of security architecture. And it starts with structured data.

Tobias Christen — Co-founder, Open Security Architecture