Wednesday, August 27, 2025

Preview: NCCoE Secure DevSecOps Practices - NIST SP 1800-44A

SourceSecure Software Development, Security, and Operations (DevSecOps) Practices


The National Cybersecurity Center of Excellence (NCCoE) has released an Initial Public Draft outlining their planned guide on Development, Security, and Operations (DevSecOps) practices. This draft represents their vision for helping organizations integrate security throughout their software development lifecycle.

Planned Key Components:

  • Will provide a notional reference model for implementing DevSecOps practices
  • Intends to emphasize zero trust security architecture integration
  • Plans to offer practical methodology for organizations seeking to enhance their software security posture
  • Being developed by NIST's National Cybersecurity Center of Excellence as part of their ongoing cybersecurity initiatives

Target Audience: IT professionals, security teams, software developers, and organizational leadership responsible for secure software development practices.

Expected Outcomes: The final document aims to outline actionable steps for organizations to begin implementing or improving their DevSecOps capabilities. This may be a future helpful resource for both beginners and those looking to mature their existing practices. It's tied to other initiatives from EOs and has some industry momentum that might not be apparent. It's a rising frustration that I think will see more attention over the next year. 

Monday, August 18, 2025

Meta-analysis of 28 AI Security Frameworks and Guidelines

28 AI Security Frameworks and Guidelines

Meta-takeaway: 28 AI Security Frameworks and Guidelines

Doing a quick dive into this... Look at the table and think about what stands out. 

What do you see?

Download it here: 

https://github.com/davischr2/Cloud-Documents

Here are my quick observations. This table reads less like a body of original work and more like a crowd of institutions trying not to be left out of the AI moment. The motivation mix is clear:

  • Fear of being blamed (if AI causes harm)
  • Fear of being left behind (if others set the norms)
  • Fear of losing control (if AI develops outside institutional guardrails)

And yet, in that swirl, you can see a few truly new constructs emerging. Consider adversarial threat taxonomies, LLM-specific risks, and the engineering of assurance. That’s where the real substance is.

1. Everyone wants a piece of the steering wheel

  • Multiplicity of bodies: Governments (EU, US, G7, UN), standards organizations (ISO, NIST), regulators (CISA, ENISA), industry (CSA, OWASP), and even loose communities are all publishing.
  • This signals that no one trusts any single authority to “own” AI governance. Everyone wants to shape it to their jurisdiction, sector, or constituency. People like control.
  • The table almost reads like a map of regulatory turf-staking.

2. Fear is driving much of the activity

  • You see the fingerprints of fear of harm everywhere: prohibited practices in the EU AI Act, adversarial threats in MITRE ATLAS, “best practices” for AI data from CISA.
  • Even the voluntary guidelines (e.g., OWASP LLM Top 10, CSA AI Safety) are mitigations against anticipated misuse.
  • These aren’t aspirational visions of AI’s potential. They’re largely defensive measures. These are a kind of collective bracing for impact. It's not that I think they are wrong... They just don't trust others to do it correctly.

3. Originality is thin and echo chambers dominate

  • Many of the documents are cross-referential: NIST AI RMF becomes the anchor, ISO drafts map to it, ENISA echoes both, and EU AI Act leans on “harmonized standards” that are largely ISO/NIST influenced.
  • The “new” work is often reinterpretation of old risk frameworks (ISO management systems, NIST RMF, 27k family) with “AI” pasted on.
  • Genuine innovation is scarcer, but you can see it in things like MITRE ATLAS (a fresh threat taxonomy) and OWASP LLM Top 10 (concrete new risks like prompt injection).

4. Regulation vs. implementation gap

  • Regulation-heavy side: EU AI Act, MITRE policy memos, UNESCO/OECD ethics are all conceptual or legal. This is fine except...
  • Implementation-light side: Only a few (e.g., CISA/NCSC secure AI dev, OWASP Testing Guide) actually tell engineers how to build or defend systems.
  • This leaves a vacuum: rules are being written faster than usable security engineering practices.

5. Globalization meets fragmentation

  • UNESCO, OECD, G7, INASI push for global harmonization.
  • But EU AI Act, CISA guidelines, UK Standards Hub all point to regional fragmentation.
  • Companies face a world where AI is “global by design, regulated local by law.” The burden is harmonizing across conflicting signals.

6. Cultural subtext: fear of “black box” systems

  • Most guidance (NIST RMF, ISO 42001, AI Act) centers around transparency, accountability, oversight.
  • That’s really a way of saying: “we don’t trust opaque algorithms.”
  • The core anxiety isn’t just data misuse... it’s losing human agency and visibility when decisions migrate into AI.

7. The rise of “assurance” as a currency

  • Assurance shows up repeatedly (MITRE AI Assurance, ISO/IEC 25059, CISA guidelines).
  • It suggests the world is shifting from just “secure design” to provable, auditable trustworthiness. This is what regulators, auditors, and customers want so that they can independently verify trust.

8. Early signs of standardization fatigue

  • There’s a lot of duplication. NIST, ENISA, ISO, CSA, OWASP are all publishing lists of “controls” and “practices.”
  • This could create compliance theater: organizations checking boxes against multiple overlapping frameworks without materially improving AI security.
  • The challenge will be convergence vs. chaos.

Included AI Security Frameworks and Guidelines:

  • CISA (Best Practices Guide for Securing AI Data, Guidelines for Secure AI System Development)
  • Cloud Security Alliance (CSA) (AI Safety Initiative)
  • ENISA (Multilayer Framework for Good Cybersecurity Practices for AI, Cybersecurity of AI and Standardisation)
  • European Union / European Commission (EU AI Act, Guidelines on Prohibited AI Practices)
  • G7 (Hiroshima Process) (International Code of Conduct for Organizations Developing Advanced AI Systems)
  • International Network of AI Safety Institutes (INASI) (International Network of AI Safety Institutes)
  • ISO/IEC (ISO/IEC 42001:2023, ISO/IEC DIS 27090, ISO/IEC 25059:2023)
  • MITRE (MITRE ATLAS, A Sensible Regulatory Framework for AI Security, Assuring AI Security & Safety through AI Regulation, AI Assurance Guide)
  • NIST (AI Risk Management Framework 1.0, Generative AI Profile, Trustworthy & Responsible AI [AIRC hub])
  • OECD (Recommendation of the Council on Artificial Intelligence)
  • OWASP (AI Security & Privacy Guide, Top 10 for LLM Applications, AI Exchange, AI Testing Guide, Securing Agentic Applications Guide 1.0)
  • UK AI Standards Hub (BSI/NPL/Turing) (AI Standards Hub)
  • UNESCO (Recommendation on the Ethics of Artificial Intelligence)
  • United Nations / ITU (Guide to Developing a National Cybersecurity Strategy)

Thursday, August 14, 2025

Tradeoffs to Consider: Serving Model Predictions

Credit to Santiago over at ML.School with his course Building AI/ML Systems That Don't Suck for thinking through and sharing this image during his course. 

This is similar to other tradeoffs in business where it's important to shape, temper, and communicate your expectations. What are your priorities? How do you solve the business problem while juggling the quality, speed, and cost of the output? 

Spend the time upfront defining the business problem and the ideal solution. 

The image is self-explanatory.