Tuesday, December 23, 2025

PROTECT: Authoritative Drivers for Threat Modeling

We’ve spent the last two articles talking about how to do threat modeling using frameworks like STRIDE and asking the right questions to find the holes in your logic. But now we need to answer the question that always comes up when you're asking for budget or time: Why do we actually have to do this?

If you look at the major security frameworks, you might notice something frustrating. Most of them don’t actually use the words "threat modeling." Because of that, teams often skip it, thinking it’s an optional "nice-to-have" engineering exercise.

That's a dangerous assumption.

Even when a standard doesn't explicitly mandate threat modeling, it almost always requires the results that only threat modeling can provide. Let’s look at where it’s hidden, where it’s expected, and where it’s absolutely mandatory.


Why most frameworks don't mandate "Threat Modeling"

It might seem weird that major security frameworks don't just come out and say "Thou Shalt Threat Model." But there are a few practical reasons for that:

  1. Scalability: Standards like ISO 27001 have to work for everyone, from a two-person startup to a Fortune 500 enterprise.
  2. Velocity: Technology changes fast. If a standard mandated a specific tool or methodology today, it’d be obsolete by next year.

Strict mandates usually just lead to "checkbox compliance." So instead of telling you how to do it, regulators focus on the outcome. They ask: "Have you identified your risks?" and "Can you prove your controls work?"


Figure 1: The relationship between Threats, Vulnerabilities, and Risks.

Level 1: Compliance Evidence (The "Implicit" Requirement)

This is where most of us live. You might be looking at ISO/IEC 27001, SOC 2, or GDPR and thinking, "I don't see the words threat modeling anywhere."

You're right, you won't. But look closer at what they do ask for.

  • ISO/IEC 27001 (Clause 6.1.2): Requires a systematic risk assessment process. How do you accurately identify risks without modeling threats?
  • SOC 2 (CC 3.1): Asks how you identify risks to your business objectives.
  • GDPR (Article 35): Requires a Data Protection Impact Assessment (DPIA) for high-risk processing. That is effectively a privacy-focused threat model.
  • PCI DSS v4.0: The latest version introduces the Targeted Risk Analysis (TRA) (Requirement 12.3). If you customize your controls, you must perform a specific risk analysis—essentially a targeted threat model—to prove your security is equivalent.

In all these cases, threat modeling is your best friend during an audit. When an auditor asks, "How did you decide you didn't need a firewall here?", you don't want to shrug. You want to pull out your threat model. It turns a subjective opinion into defensible evidence.

Level 2: Strong Guidance (The "Standard of Care")

Moving up the ladder, we enter the area of "strong guidance." These frameworks might not legally force you to do it, but they treat threat modeling as a standard duty of care. If you suffer a breach, you’re going to have a hard time explaining why you skipped this step.

  • NIST SP 800-218 (SSDF): The Secure Software Development Framework is becoming the gold standard for anyone selling software to the government. It explicitly lists threat modeling as a core practice for secure development.
  • NIST SP 800-161: This covers supply chain risk. It recommends threat analysis to figure out where your weak points are before a vendor compromises you.
  • OWASP ASVS: If you are building web apps, the Application Security Verification Standard pushes threat modeling as a hard requirement for achieving Level 2 or Level 3 assurance.

Level 3: Explicitly Required (The "Mandate")

Now the options disappear. If you are working in government or critical infrastructure, you don't really have a choice.

  • NIST SP 800-53 (Control RA-3): If you're dealing with federal systems, this is the bible. It explicitly requires a risk assessment that identifies threats and vulnerabilities. While it doesn't force a specific methodology, you cannot satisfy this requirement by guessing; you must document the analysis.
  • CIS Controls (v8): Control 4 (Secure Configuration of Enterprise Assets and Software) mandates a secure configuration process, which relies heavily on understanding your specific threat landscape to determine what needs locking down.

Level 4: Safety-Critical (The "Life or Death" Requirement)

Finally, we reach the highest stakes. In safety-critical industries like aviation or automotive, a hack doesn't just mean lost data; it means lost lives. Consequently, the standards here stop suggesting and start enforcing rigorous engineering processes.

  • Avionics (DO-326A / ED-202A): If you want to fly, this isn't a suggestion. It mandates a rigorous airworthiness security process.
  • Automotive (ISO/SAE 21434): This standard requires you to perform TARA (Threat Analysis and Risk Assessment). It’s not optional for modern vehicles.
  • Medical Devices (FDA Guidance): The FDA will reject your 510(k) submission if you haven't modeled threats to patient safety.

Bringing it all together

Threat modeling is an engineering activity, but running the PROTECT pipeline produces, as a by-product, exactly the evidence major compliance standards ask for. Each stage of your threat model answers a critical question likely to be asked during a NIST, ISO, or SOC 2 audit:

  • 1. VAST (The Lens): Defines your attack surface. (Audit Question: "Did you accurately define the scope of the assessment?")
  • 2. STRIDE (The Net): Captures the vulnerabilities. (Audit Question: "What methodology did you use to identify potential security failures?")
  • 3. DREAD (The Scale): Quantifies the risk. (Audit Question: "How did you prioritize which risks to accept versus mitigate?")
  • 4. LINDDUN (The Blindspot): Addresses privacy requirements. (Audit Question: "How are you meeting GDPR/CCPA data privacy obligations?")
  • 5. PASTA (The Fix): Validates the outcome. (Audit Question: "Can you demonstrate that your controls actually reduce the risk?")

So, what’s the bottom line? Threat modeling isn't just an engineering exercise. Threat modeling is your paper trail.

It proves you looked for the problems, analyzed the risks, and implemented countermeasures. In the world of compliance, if you didn't document it, you didn't do it.

The PROTECT framework introduced earlier in this series was never about compliance for its own sake. It was about making threat modeling effective, repeatable, and decision-oriented.

  • Article 1 showed how to do threat modeling comprehensively.
  • Article 2 showed what questions actually matter when doing it.
  • Article 3 explains why threat modeling remains one of the strongest tools for meeting real-world security, regulatory, and assurance expectations.

Threat modeling makes risk visible, decisions defensible, and security outcomes explainable when it matters most.


Monday, December 15, 2025

PROTECT: Engineering Field Guide for Threat Modeling

An interrogation framework for modern system design.

In practice, integrating multiple threat modeling frameworks reduces blind spots and rework by forcing earlier alignment between threats, impact, and controls. 

The result is stronger security outcomes, improved privacy posture, and better alignment with regulatory requirements.

Phase 1: VAST (The Attack Surface)

Focus: Topology, Boundaries, and Dependencies.

Mapping the Architecture

  • Boundary Analysis: Where does the data cross from a High-Trust zone (e.g., Private VPC) to a Low-Trust zone (e.g., Public Internet)? Is this explicitly drawn?
  • Actor Identification: Have we mapped every non-human actor? (e.g., Sidecars, lambda functions, cron jobs, CI/CD runners).
  • Dependency Graph: Which third-party libraries or external APIs are in the critical path? If npm package X is compromised, does the whole system fall?

Infrastructure & Scale

  • Scalability Bottlenecks: Identify the specific component (DB Write Master, Load Balancer) that will fail first under a DDoS condition.
  • Cloud Responsibility: For our PaaS/SaaS components, exactly where does the vendor's security stop and ours begin? (e.g., "AWS secures the cloud, we secure the S3 bucket config").

Phase 2: STRIDE (The Vulnerability Hunt)

Focus: Breaking the Logic.

Authentication (Spoofing)

  • Mechanism: How do we handle service-to-service auth? (e.g., mTLS, JWT, or static API keys?)
  • Identity Source: If the Identity Provider (IdP) goes down, what is the fail-open/fail-closed behavior?
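
To make the fail-closed expectation concrete, here is a minimal sketch of JWT-based service-to-service verification using PyJWT; the issuer, audience, and RS256 choice are illustrative assumptions, not a prescription.

```python
# Minimal sketch: JWT-based service-to-service auth that fails closed.
# Assumes PyJWT; the issuer/audience values below are illustrative only.
import jwt  # pip install pyjwt[crypto]

TRUSTED_ISSUER = "https://idp.internal.example"   # hypothetical internal IdP
EXPECTED_AUDIENCE = "orders-service"              # hypothetical service identity

def verify_service_token(token: str, public_key_pem: str) -> dict:
    """Return validated claims or raise. Any failure (bad signature, expired
    token, wrong audience, missing claims) denies the request."""
    try:
        return jwt.decode(
            token,
            public_key_pem,
            algorithms=["RS256"],          # pin the algorithm; never accept "none"
            issuer=TRUSTED_ISSUER,
            audience=EXPECTED_AUDIENCE,
            options={"require": ["exp", "iss", "aud"]},
        )
    except jwt.PyJWTError as exc:
        # Fail closed: if verification cannot complete, the call is rejected.
        raise PermissionError(f"service token rejected: {exc}") from exc
```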

Integrity & Input (Tampering)

  • Validation location: Do we validate input at the edge (WAF), at the controller (Code), or at the persistence layer (DB)? (Ideally all three).
  • Supply Chain: How do we verify that the container image deployed is the exact binary built by CI? (e.g., Image signing).
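
As one hedged illustration of the supply-chain question, here is a stdlib-only sketch that refuses to deploy an artifact whose SHA-256 digest doesn't match what CI recorded; where the expected digest comes from (a signed provenance record or attestation) is assumed and out of scope.

```python
# Sketch: refuse to deploy an artifact whose digest differs from the one CI recorded.
# The provenance source and file paths are illustrative assumptions.
import hashlib
import hmac

def sha256_of(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: str, expected_digest: str) -> None:
    actual = sha256_of(path)
    # Constant-time comparison; any mismatch blocks the rollout.
    if not hmac.compare_digest(actual, expected_digest.lower()):
        raise RuntimeError(f"artifact digest mismatch for {path}")
```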

Observability (Repudiation)

  • Non-Repudiation: Can a rogue admin delete the audit logs that record their own actions?
  • Traceability: Do we have a correlation ID that tracks a request from the WAF all the way to the Database?
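
A small stdlib-only sketch of the correlation-ID idea: accept the ID set at the edge (the X-Correlation-ID header name is just a convention I'm assuming) and stamp it on every log line.

```python
# Sketch: carry a correlation ID from the edge into every log line (stdlib only).
import logging
import uuid
from contextvars import ContextVar

correlation_id: ContextVar[str] = ContextVar("correlation_id", default="-")

class CorrelationFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.correlation_id = correlation_id.get()
        return True

logging.basicConfig(format="%(asctime)s [%(correlation_id)s] %(message)s")
logging.getLogger().addFilter(CorrelationFilter())

def handle_request(headers: dict) -> None:
    # Accept the upstream ID (e.g., set by the WAF) or mint one here.
    correlation_id.set(headers.get("X-Correlation-ID", uuid.uuid4().hex))
    logging.warning("request accepted")   # every log line now carries the ID

handle_request({"X-Correlation-ID": "req-12345"})
```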

Confidentiality (Information Disclosure)

  • Secrets Management: Are secrets injected at runtime (Vault/Secrets Manager) or present in environment variables/code?
  • Data Leakage: Do error responses return stack traces, internal IP addresses, or version numbers to the client?
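
For the error-response question, one common pattern is to log the full diagnostics server-side under an opaque ID and return only that ID to the caller. A rough sketch (the response shape is an assumption):

```python
# Sketch: keep stack traces and internals in server logs; hand the client an opaque ID.
import logging
import uuid

logger = logging.getLogger("errors")

def to_client_error(exc: Exception) -> dict:
    error_id = uuid.uuid4().hex
    # Full detail (trace, internal hosts, versions) stays server-side, keyed by error_id.
    logger.error("error_id=%s unhandled exception", error_id, exc_info=exc)
    # The client sees nothing an attacker can fingerprint.
    return {"error": "internal error", "error_id": error_id}
```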

Availability (DoS)

  • Resource Starvation: Do we enforce rate limiting per-IP, per-user, or per-tenant?
  • Logic Bombs: Can a user upload a file that triggers recursive parsing (XML Bomb) or memory exhaustion?
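
To show what per-tenant limiting can look like, here is a toy in-process token bucket; a real deployment would back this with a shared store such as Redis, and the rates shown are invented.

```python
# Sketch: in-process token-bucket rate limiting keyed per tenant (or user, or IP).
# Rates and bursts are illustrative; production limits belong in a shared store.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.burst = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def allow_request(tenant_id: str) -> bool:
    bucket = buckets.setdefault(tenant_id, TokenBucket(rate_per_sec=5, burst=20))
    return bucket.allow()
```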

Authorization (Elevation of Privilege)

  • Horizontal Escalation: Can User A access User B's resource by simply changing the ID in the URL (IDOR)?
  • Vertical Escalation: Does the API rely on the client to send its role (e.g., isAdmin=true), or is this validated server-side?
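
A minimal sketch of both checks, assuming invented User/Document shapes: ownership is verified against the stored record (anti-IDOR), and the role comes from server-side session state rather than the request.

```python
# Sketch: server-side ownership and role checks; nothing is trusted from the client.
# The User/Document shapes and the in-memory store are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class User:
    id: str
    server_side_roles: set = field(default_factory=set)   # from the session store

@dataclass
class Document:
    id: str
    owner_id: str
    is_restricted: bool = False

DOCS = {"doc-1": Document(id="doc-1", owner_id="alice")}

def get_document(current_user: User, doc_id: str) -> Document:
    doc = DOCS.get(doc_id)
    if doc is None:
        raise LookupError("not found")
    # Horizontal check (anti-IDOR): the record must belong to the caller.
    if doc.owner_id != current_user.id:
        raise PermissionError("forbidden")
    # Vertical check: roles come from server-side state, never from the request payload.
    if doc.is_restricted and "admin" not in current_user.server_side_roles:
        raise PermissionError("forbidden")
    return doc
```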

Phase 3: DREAD (The Risk Calculator)

Focus: Quantifying the Badness.

  • Damage: If this exploit lands, do we lose one user's session or the entire master database?
  • Reproducibility: Is this a "lab-only" theoretical exploit, or can it be scripted reliably?
  • Exploitability: Does the attacker need a supercomputer/insider access, or just curl?
  • Affected Users: Is the blast radius a single user, one tenant, or the entire customer base?
  • Discoverability: Is the vulnerability broadcast in our HTTP headers, or hidden deep in compiled logic?
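
If you want to make the scoring mechanical, a simple averaged DREAD calculator looks like this; the 1-10 scale and the High/Medium/Low thresholds are illustrative conventions, not a standard.

```python
# Sketch: average the five DREAD factors (scored 1-10) and bucket the result.
# Thresholds are illustrative; calibrate them to your own risk appetite.
from statistics import mean

def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each DREAD factor is scored 1-10")
    return mean(factors)

def risk_band(score: float) -> str:
    return "High" if score >= 7 else "Medium" if score >= 4 else "Low"

# Example: a reliably scriptable IDOR on a customer-facing API.
print(risk_band(dread_score(8, 9, 7, 8, 6)))   # -> High (7.6)
```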

Phase 4: LINDDUN (The Privacy Engineer)

Focus: Data ethics and leakage.

  • Metadata Analysis: Even if the payload is encrypted, does the traffic pattern (size/timing) reveal user activity?
  • Data Minimization: Are we collecting fields we "might need later" (toxic assets) or only what is strictly required?
  • Unlinkability: If we combine Dataset A (Public) with our Anonymized Dataset B, can we re-identify users?
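
The unlinkability question is easy to demonstrate. Here is a toy linkage attack on made-up records: the "anonymized" export still carries quasi-identifiers that join cleanly against a public dataset.

```python
# Sketch: a toy linkage attack; quasi-identifiers (zip + birth year) re-identify a user.
# All records below are invented for illustration.
public = [  # e.g., a public roll or scraped profiles
    {"name": "Alice Rivera", "zip": "94107", "birth_year": 1985},
    {"name": "Bob Chen",     "zip": "30301", "birth_year": 1992},
]
anonymized = [  # our "de-identified" analytics export
    {"zip": "94107", "birth_year": 1985, "activity": "sensitive record X"},
]

def reidentify(public_rows, anon_rows, keys=("zip", "birth_year")):
    matches = []
    for anon in anon_rows:
        hits = [p for p in public_rows if all(p[k] == anon[k] for k in keys)]
        if len(hits) == 1:                     # a unique quasi-identifier combination
            matches.append({**hits[0], **anon})
    return matches

print(reidentify(public, anonymized))   # links "sensitive record X" to a named person
```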

Phase 5: PASTA (The Reality Check)

Focus: Simulation & Resilience.

  • Kill Chain Validation: "If I am an attacker and I compromise the Web Server..."
    • ...Can I reach the Database? (Network Segmentation)
    • ...Can I read the keys? (IAM roles)
    • ...Will anyone notice? (Alerting)
  • Resilience: If the primary Region goes dark, is the failover automated or manual? Have we tested it?
  • Drift Detection: What prevents a developer from turning off the WAF tomorrow? (Infrastructure as Code / Policy as Code).
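
For the drift question, the usual answer is a policy-as-code gate in the pipeline. A toy version, with invented config keys and policy values, might look like this:

```python
# Sketch: a toy policy-as-code gate run in CI to catch configuration drift.
# Config keys and policy values are illustrative assumptions.
POLICY = {"waf_enabled": True, "max_public_buckets": 0, "tls_min_version": "1.2"}

def _ver(v: str) -> tuple:
    return tuple(int(part) for part in v.split("."))

def check_drift(live: dict) -> list[str]:
    violations = []
    if not live.get("waf_enabled", False):
        violations.append("WAF is disabled")
    if live.get("public_buckets", 0) > POLICY["max_public_buckets"]:
        violations.append("publicly readable buckets detected")
    if _ver(live.get("tls_min_version", "1.0")) < _ver(POLICY["tls_min_version"]):
        violations.append("TLS minimum version below policy")
    return violations

# Fail the pipeline on any violation instead of discovering it after an incident.
print(check_drift({"waf_enabled": False, "public_buckets": 1, "tls_min_version": "1.0"}))
```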

PROTECT: Integrating STRIDE, DREAD, LINDDUN, and PASTA for Threat Modeling

PROTECT: (Profile Review and Offensive Threat Evaluation for Countermeasures and Tactics)

The PROTECT framework acknowledges that no single methodology covers every aspect of modern security. Instead of choosing one, PROTECT orchestrates the industry's best specific-use models into a cohesive lifecycle. It leverages VAST for visibility, STRIDE for coverage, DREAD for prioritization, LINDDUN for privacy, and PASTA for defense.

PROTECT Threat Model Steps

1. Profile System and Assets (The Lens: VAST)

Objective: Visualize the architecture to establish scope.

  • The "Why": You cannot secure what you cannot understand. Before we can identify threats, we must have a clear, shared mental model of the system.
  • The Linkage: We use VAST (Visual, Agile, and Simple Threat modeling) here not as a rigid checklist, but as the delivery mechanism. By creating a VAST-compliant process map, we generate the "Map" that the subsequent steps will hunt upon.

Key Actions:

  • Develop high-level architecture diagrams focusing on data flows, trust boundaries, and dependencies.
  • Profile threat actors (motivations, capabilities, resources).
  • Identify and prioritize critical assets based on business value.

2. Review Threats (The Net: STRIDE)

Objective: Achieve comprehensive threat coverage.

  • The Bridge (from Step 1): Once we have the VAST diagrams (the Map), we need a methodical way to sweep that map for vulnerabilities.
  • The Linkage: STRIDE acts as our "dragnet." It ensures we don't rely on gut feelings. We systematically apply STRIDE categories to every interaction and boundary identified in Step 1 to ensure we haven't missed a standard class of attack (like Spoofing or Tampering).

Key Actions:

  • Spoofing: Identify threats related to authentication and impersonation.
  • Tampering: Identify threats related to unauthorized modification of data or systems.
  • Repudiation: Identify threats related to the ability to deny actions or transactions.
  • Information Disclosure: Identify threats related to the unauthorized exposure of sensitive data.
  • Denial of Service: Identify threats related to the disruption or degradation of system availability.
  • Elevation of Privilege: Identify threats related to gaining unauthorized access or permissions.

3. Offensive Threat Impact Evaluation (The Scale: DREAD)

Objective: Filter noise and prioritize risk.

  • The Bridge (from Step 2): STRIDE is excellent at finding possible threats, but it doesn't tell us which ones matter. A STRIDE analysis often produces a massive, unprioritized list of "what-ifs."
  • The Linkage: We apply DREAD to the list generated by STRIDE to score them. This transforms a flat list of technical bugs into a ranked list of business risks. This is where we move from "Security Engineering" to "Risk Management."

Key Actions:

  • Damage: Assess the potential damage caused by the threat if it were to occur.
  • Reproducibility: Determine how easily the threat can be reproduced or exploited.
  • Exploitability: Evaluate the level of skill and resources required to exploit the threat.
  • Affected Users: Assess the number of users or systems that could be impacted by the threat.
  • Discoverability: Determine how easily the vulnerability or weakness can be discovered by potential attackers.

4. Evaluate Privacy Concerns (The Blindspot: LINDDUN)

Objective: Address non-security data risks.

  • The Bridge (from Step 3): Traditional security scoring (DREAD) focuses on broken systems. However, a system can be perfectly secure (unhackable) and still violate privacy laws (e.g., excessive data collection).
  • The Linkage: We pause the security workflow to run a specific LINDDUN pass. This captures the risks that STRIDE misses, specifically where the system functions exactly as designed, but that design harms the user's privacy (e.g., Unawareness or Linkability).

Key Actions:

  • Linkability: Determine if data from different sources can be combined to identify an individual or link their activities.
  • Identifiability: Assess if an individual can be singled out or identified within a dataset.
  • Non-repudiation: Evaluate if an individual can deny having performed an action or transaction.
  • Detectability: Determine if it is possible to detect that an item of interest exists within a system.
  • Disclosure of Information: Assess the risk of unauthorized access to or disclosure of sensitive information.
  • Unawareness: Evaluate if individuals are unaware of the data collection, processing, or sharing practices.
  • Non-compliance: Determine if the system or practices are not compliant with privacy laws, regulations, or policies.

5. Countermeasures and Tactical Safeguards (The Fix: PASTA)

Objective: Simulate attacks and validate defenses.

  • The Bridge (from Steps 3 & 4): We now have a prioritized list of Security risks (from DREAD) and Privacy risks (from LINDDUN). The final question is: Do our defenses actually work against a motivated human adversary?
  • The Linkage: We use the simulation strengths of PASTA (Process for Attack Simulation and Threat Analysis) here. While PASTA is a full lifecycle, PROTECT leverages its specific strength in Attacker-Centric simulation. We don't just patch vulnerabilities; we build attack trees to see if our proposed countermeasures actually break the attacker's kill chain.

Key Actions:

  • Attack Modeling: Simulate realistic attack scenarios and identify choke points.
  • Vulnerability Assessment: Conduct technical validation (pen-testing, code review) for high-risk vectors.
  • Countermeasure Analysis: Design countermeasures that address root causes. Map controls to regulatory requirements (PCI DSS, NIST 800-53, etc.).
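
To make "does the countermeasure break the kill chain" testable, you can model the chain as a small AND/OR attack tree and re-evaluate it after each proposed control. A sketch with an invented tree:

```python
# Sketch: evaluate a tiny AND/OR attack tree before and after a countermeasure.
# The tree, node names, and feasibility flags are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str = "leaf"             # "leaf", "and", or "or"
    feasible: bool = False         # for leaves: is this step currently achievable?
    children: list = field(default_factory=list)

def achievable(node: Node) -> bool:
    if node.kind == "leaf":
        return node.feasible
    results = [achievable(child) for child in node.children]
    return all(results) if node.kind == "and" else any(results)

phish = Node("phish an operator", feasible=True)
pivot = Node("pivot to the DB subnet", feasible=True)       # no segmentation yet
exfil = Node("read unencrypted backups", feasible=True)
goal = Node("steal the customer DB", kind="and", children=[phish, pivot, exfil])

print(achievable(goal))    # True: the kill chain is intact
pivot.feasible = False     # countermeasure: network segmentation blocks the pivot
print(achievable(goal))    # False: the chain is broken
```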

PROTECT Summary

  • VAST draws the map.
  • STRIDE finds the holes in the map.
  • DREAD decides which holes are dangerous.
  • LINDDUN checks if the map exploits the user.
  • PASTA tests if the fences we build can actually stop the wolves.

The PROTECT model provides a comprehensive and integrated approach to threat modeling by combining the strengths of VAST, STRIDE, DREAD, LINDDUN, and PASTA into a unified framework.

Tuesday, December 9, 2025

Secure Software Delivery in Safety-Critical Systems

Why ASIL-D and DAL-A Now Require the Same Architecture

Introduction

Over the last several months, I’ve been working deeply with two industries that historically spoke different languages: automotive safety and aviation design assurance.

What surprised me: when you look at the engineering required for secure software delivery in their highest safety tiers, ASIL-D (Automotive Safety Integrity Level D) and DAL-A (Design Assurance Level A), the systems are effectively the same.

Standards bodies in both domains are now explicitly cross-referencing each other. This is a deliberate recognition of the rigor necessary for software-defined safety operating at fleet scale.

This post explains:

  • Why automotive and aviation converged
  • What the modern secure delivery architecture looks like
  • Which controls are identical vs. differently labeled
  • What each industry can learn from the other
  • Why this matters for certification, talent, and hiring

Why Convergence Happened

Historically, automotive and aviation had different assumptions:

  • Automotive: Software is a performance feature bolted onto mechanical safety.
  • Aviation: Software supports but does not control physical flight mechanisms.

Those assumptions collapsed:

  1. Software directly controls safety outcomes
    Brake-by-wire and fly-by-wire architectures made software a single point of failure.
  2. Long-lifecycle assets require secure updates
    Vehicles and aircraft must receive trustworthy updates for 15–20+ years.
  3. Regulators recognized updates as a persistent attack surface
    Cybersecurity is now inside the safety case.

As a result:

  • Automotive: UN R155/R156 made cybersecurity and update management mandatory for type approval.
  • Aviation: DO-326A/356A introduced cybersecurity artifacts into the certification basis.

Every software update must be cryptographically controlled, verifiable, and reversible without bricking fleets.

Standards Cross-Referencing

This convergence is codified:

  • ISO/SAE 21434 references aviation security concepts from DO-326A
  • DO-356A incorporates safety principles from ISO 26262
  • eVTOL certification guidance borrows from automotive OTA security practice

Different roots. Same alignment.

Requirements Are Effectively the Same

When aligned by engineering controls, the equivalence becomes obvious:

Concern                      | Automotive (ASIL-D)           | Aviation (DAL-A)
Functional Safety            | ISO 26262                     | DO-178C
Cybersecurity                | ISO/SAE 21434 + UN R155/R156  | DO-326A / DO-356A + DO-355A
Digital Signature Algorithms | e.g., RSA-3072 / ECDSA P-384  | ARINC 835 (same families)
Private Key Protection       | HSM (FIPS 140-2/3)            | HSM (FIPS 140-2/3) + offline custody
Revocation                   | CRL or OCSP                   | CRL or OCSP
Rollback Protection          | Monotonic counter             | Monotonic counter

Different paperwork, same controls.
The cryptographic trust model is shared.

The Modern Secure Delivery Architecture

Here's an example architecture that meets both ASIL-D and DAL-A expectations for secure software delivery.

Figure 1: PKI-based secure delivery architecture supporting ASIL-D and DAL-A compliance.

At a high level:

  • An offline Root CA anchors trust, with tightly controlled ceremonies
  • An HSM-protected Code-Signing CA issues signatures on release artifacts
  • A revocation service distributes CRLs/OCSP responses
  • Updates are distributed over UPTANE, ARINC 615A/827, or equivalent secure loaders
  • The target ECU/LRU (Electronic Control Unit / Line Replaceable Unit):
    • Verifies signature against a burned-in root public key
    • Checks revocation status
    • Enforces anti-rollback via monotonic counters
    • Uses atomic install with dual-bank fallback
    • Enforces secure boot on every power cycle

Every update package is treated as hostile until proven otherwise.
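
As a hedged illustration of the first verification steps (signature, then anti-rollback), here is a sketch using the Python cryptography library with ECDSA P-384. In a real ECU/LRU this runs in the bootloader against a burned-in key, revocation is checked, and the install is dual-bank and atomic; all of that is reduced to placeholders here.

```python
# Sketch: gate an update on signature verification and a monotonic anti-rollback check.
# Uses the 'cryptography' package; key provisioning, revocation, and dual-bank
# installation are deliberately reduced to placeholders.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def verify_update(root_public_key: ec.EllipticCurvePublicKey,
                  package: bytes, signature: bytes,
                  package_version: int, stored_counter: int) -> bool:
    # 1. Signature must verify against the trusted (burned-in) public key.
    try:
        root_public_key.verify(signature, package, ec.ECDSA(hashes.SHA384()))
    except InvalidSignature:
        return False
    # 2. Anti-rollback: never accept a version at or below the monotonic counter.
    if package_version <= stored_counter:
        return False
    # 3. Revocation check and atomic dual-bank install would be enforced here.
    return True
```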

UPTANE dominates automotive OTA distribution. Aviation reaches the same controls via ARINC loaders and DO-326A artifacts. These are different implementation paths with the same trust requirements.

What Each Industry Gets Right

Aviation strengths

  • Rigor in independent verification (no self-approval)
  • Hardware-enforced rollback counters per critical module
  • Zero tolerance for dead code in certified builds

Automotive strengths

  • Mature SBOM workflows (CycloneDX / in-toto)
  • Proven million-unit OTA rollout practices
  • Faster iteration in cybersecurity management systems (CSMS)

Risk-Based Assurance: A Shared Language

Level | Automotive | Aviation | Failure Condition
4     | ASIL-D     | DAL-A    | Multiple fatalities, loss of vehicle/aircraft
3     | ASIL-C     | DAL-B    | Single fatality / severe injury
2     | ASIL-B     | DAL-C    | Mission abort / serious injury
1     | ASIL-A     | DAL-D    | Minor injury / inconvenience
0     | QM         | DAL-E    | No safety effect

Certification Implications

If your secure delivery process is already approved for ASIL-D + ISO/SAE 21434 + UN R155 compliance, or DAL-A + DO-326A + ARINC 835 compliance — then you are very close to certification in the other domain.

Benefits of convergence:

  • Faster multi-market productization
  • Shared platform for PKI, SBOM, and secure boot
  • Consolidated supplier requirements
  • Broader talent mobility

The architecture is the same.
The talent pipeline is not.

Standards Ecosystem Alignment

Figure 3: Explicit cross-referencing across automotive and aviation security and safety standards.

This map shows, at a glance:

  • ISO 26262 and DO-178C as the functional safety backbone
  • ISO/SAE 21434, UN R155/R156, DO-326A, DO-356A, and DO-355A framing cybersecurity
  • Cross-reference arrows where one standard family borrows from or references the other

Conclusion

Automotive is becoming more like aviation — safety-critical actuators everywhere.
Aviation is becoming more like automotive — connected fleets with continuous updates.

The industry has already created common roots in a similar architecture.

If you are designing or certifying secure update pipelines in either domain and want to sanity-check your approach against both ecosystems, I’m always open to a conversation.

Where else are you seeing this convergence?

  • Medical: IEC 62304 + IEC 80001-1 + FDA cyber guidance
  • ICS: IEC 62443 + IEC 61508/61511
  • Rail: EN 50128 + TS 50701

Secure, cryptographically controlled updates are becoming universal in safety-critical systems.

Connect on LinkedIn

Wednesday, December 3, 2025

"Model Memory" attacks. I was wrong.

I thought AI "Model Memory" (whether the model or the logs) was just security FUD. Wasn't that solved already? Sure, sensitive information is involved - but please tell me it's not accessible… Right??

I’ve been knee-deep in AI retention stats lately, and one concept kept nagging at me: the idea that "model memory," such as retained prompts, chat histories, or session context, is quietly killing projects.

I’ve now reviewed several AI rollouts this year, and I’ve never personally seen an issue with it. It always felt like a keynote stump-the-chump quip: "What if the model remembers your SSN?"

So I went digging. Turns out, the problem isn't "Skynet never forgets" or the Borg coming after you. It's less cinematic than that: boring, messy human error. But the damage is real.

Here are four times in 2025 when retention triggered bans, patches, or headlines.

1. DeepSeek's Exposed Chat Logs (Jan 2025)

DeepSeek AI platform exposed user data through unsecured database | SC Media

This startup left a ClickHouse database open. No "model regurgitation," but millions of plaintext chat contexts (PII + keys) were exposed.

👉 The Cost: "Security experts noted that such an oversight suggests DeepSeek lacks the maturity to handle sensitive data securely. The discovery raises concerns as DeepSeek gains global traction [...] prompting scrutiny from regulators and governments."

2. Microsoft 365 Copilot's EchoLeak (June 2025)

Inside CVE-2025-32711 (EchoLeak): Prompt injection meets AI exfiltration

A zero-click vulnerability allowed attackers to hijack the retrieval engine. The model’s working memory blended malicious input with user privileges, leaking docs without a single click. This resulted in CVE-2025-32711 (EchoLeak).

👉 The Cost: Shines a spotlight on Copilot’s prompt parsing behavior. In short, when the AI is asked to summarize, analyze, or respond to a document, it doesn’t just look at user-facing text. It reads everything, including hidden text, speaker notes, and metadata.

3. OmniGPT's Mega Chat Dump (Feb 2025)

OmniGPT Claimed To Be Subjected to Extensive Breach | MSSP Alert

Hacker "Gloomer" dumped 34M lines of user chats. The "memory" here was full conversation history stored for personalization, including financial queries and therapy-like vents.

👉 The Cost: 62% of AI teams now fear retention more than hallucination (McKinsey).

4. Cursor IDE's MCPoison (July 2025)

Cursor IDE's MCP Vulnerability - Check Point Research

A trust flaw in the Model Context Protocol allowed attackers to swap configs post-approval. Once in the session "memory," it could execute commands on every launch.

👉 The Cost: Devs at NVIDIA and Uber had to update frantically to close the backdoor.

The Bottom Line:

These aren't daily fires, but they are real issues that still need to be addressed. You may have seen my recent discussions on storing hashes instead of payloads. There are multiple reasons for this: privacy, security, storage and network requirements, processing and validation speed, and so on. But it also helps reduce your attack surface for issues like this.

Of course, it's only one layer of defense. Data retention is still here. Good teams "receipt-ify" (store hashes, not payloads) and also enforce purges of sensitive information.
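
A minimal sketch of the receipt idea: store a keyed hash that proves you processed a payload without retaining it. Key management and the surrounding pipeline are out of scope, and the names here are illustrative.

```python
# Sketch: "receipt-ify" a payload by storing a keyed hash instead of the raw content.
# Key management is out of scope; the environment variable name is illustrative.
import hashlib
import hmac
import os

RECEIPT_KEY = os.environ.get("RECEIPT_KEY", "dev-only-key").encode()

def receipt(payload: str) -> str:
    """Fixed-size token that proves we saw the payload without persisting it."""
    return hmac.new(RECEIPT_KEY, payload.encode("utf-8"), hashlib.sha256).hexdigest()

# Store receipt(prompt) for audit/dedup; purge (or never persist) the prompt itself.
print(receipt("user prompt containing an SSN"))
```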

One slip, and you're the next headline. Define and enforce good hygiene.  

Stop Saying “Compliance ≠ Security.” You’re Missing the Point.

“Compliance is just theater. Checkboxes don’t equal security.” I'm sorry, but this is just wrong.

Why? Because every authoritative framework worth pursuing has mandated risk management. It’s not hidden in an appendix.

Every Major Framework Explicitly Requires Risk Management

  • SOC 2 (Trust Services Criteria): The CC3 series (Risk Assessment) and CC9 series (Risk Mitigation) are mandatory common criteria. No documented risk assessment = automatic failure.
  • NIST SP 800-53 Rev 5: The entire RA and PM families require an organization-wide risk management framework. Controls are then tailored.
  • PCI DSS v4.0 Requirement 12.2 mandates a formal annual risk assessment. Requirement 12.3.1 introduces targeted risk analysis to justify control frequency.
  • ISO 27001/27002 (2022): Clauses 6.1.3 and 8.2 require you to establish, implement, maintain, and continually improve a risk management process.
  • And on it goes.

These aren't suggestions. Risk Management is a mandatory exercise.

You are required to exceed minimums when your risk demands it. The frameworks explicitly say the baseline is a floor, not your high-water mark.

What About Breaches?

When organizations are technically “compliant” and still get breached, the failure is almost always tied to a nonexistent or terribly executed risk management program.

Mature programs:

  • Start with the required risk assessment, then select and tailor controls
  • Apply stricter measures to high-risk/crown-jewel assets, lighter ones elsewhere
  • Exceed minimums where their own risk analysis justifies it
  • Continuously reassess because every framework demands it

That’s not “checkbox compliance.” That’s literally what the standards require.

So next time you’re tempted to say “compliance doesn’t equal security,” I'm curious to see your last risk assessment that actually drove control selection.

Because if your takeaway from reading SOC 2, NIST, PCI DSS, ISO, etc. is “just a checklist,” the problem might not be the framework...