Friday, March 15, 2024

Integrated Threat Modeling: VAST, STRIDE, DREAD, LINDDUN, and PASTA

Recently, during an interview, I was asked about threat modeling. I've been in and around threat modeling for a few decades, identifying and prioritizing risks based on quantitative and qualitative data. It's germane to the principles of Information Security, Assurance, and Trust. Shifting focus areas over time may require updated approaches, but the objective remains the same: find and prioritize threats for risk mitigation, based on our risk threshold, before an exposure occurs.

Towards that end, I thought it'd be interesting to engage Claude's Opus model in a conversation about a few different approaches. Several outputs were promising with a little tweaking. Below is just one example, incorporating VAST, STRIDE, DREAD, LINDDUN, and PASTA.

A word of caution: the output below contains some duplication and can be refined considerably for efficient workflow execution. It does, however, demonstrate the overlap among these models and where each one is useful.

Example Integrated Threat Modeling Process:

Define Objectives and Scope:

  • Establish the goals and objectives of the threat modeling exercise.
  • Determine the scope of the assessment, including the systems, applications, and business units involved.

Identify Assets:

  • Identify the critical assets within the defined scope that require protection.
  • Prioritize the assets based on their value and importance to the organization.

Create Architecture Overview. This incorporates the core principle of the VAST model (Visual, Agile, Simple Threat modeling):

  • Develop a high-level architecture diagram of the system, focusing on key components, data flows, and trust boundaries.
  • Ensure the diagram is simple, visual, and easy to understand for all stakeholders.
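The VAST-style overview above can be captured in code as well as in a diagram. Here's a minimal sketch (the component names, trust zones, and the `crosses_trust_boundary` helper are all hypothetical) that flags data flows crossing trust boundaries for extra scrutiny:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    trust_zone: str  # e.g. "internet", "dmz", "internal"

@dataclass
class DataFlow:
    source: Component
    dest: Component
    data: str

    def crosses_trust_boundary(self) -> bool:
        # A flow between different trust zones crosses a boundary
        # and deserves extra attention during threat identification.
        return self.source.trust_zone != self.dest.trust_zone

browser = Component("Browser", "internet")
api = Component("API Gateway", "dmz")
db = Component("Orders DB", "internal")

flows = [
    DataFlow(browser, api, "login credentials"),
    DataFlow(api, db, "order records"),
]

boundary_flows = [f for f in flows if f.crosses_trust_boundary()]
for f in boundary_flows:
    print(f"{f.source.name} -> {f.dest.name}: {f.data} (crosses trust boundary)")
```

Even this toy model keeps the artifact simple and visual in spirit: every stakeholder can read the component list and see which flows cross boundaries.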

Identify Threat Actors:

  • Identify potential threat actors who may have a vested interest in attacking the system.
  • Consider both internal and external threat actors, such as malicious insiders, cybercriminals, nation-state actors, and competitors.
  • Assess the motivations, capabilities, and resources of each threat actor.

Decompose Application and Identify Threats:

  • Break down the application into smaller, manageable components and identify trust boundaries and interactions between the components.
  • Identify potential threats for each component and interaction using the STRIDE model (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege):
    • Spoofing: Identify threats related to authentication and impersonation.
    • Tampering: Identify threats related to unauthorized modification of data or systems.
    • Repudiation: Identify threats related to the ability to deny actions or transactions.
    • Information Disclosure: Identify threats related to the unauthorized exposure of sensitive data.
    • Denial of Service: Identify threats related to the disruption or degradation of system availability.
    • Elevation of Privilege: Identify threats related to gaining unauthorized access or permissions.
  • Utilize attack trees, threat intelligence, and vulnerability data to assist in threat identification.
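As a rough illustration of STRIDE-per-element thinking, the checklist below maps element types to the STRIDE categories that typically apply. The mapping and the `enumerate_threats` helper are simplified assumptions for the sketch, not an authoritative matrix:

```python
# The six STRIDE categories and the concern each one covers.
STRIDE = {
    "Spoofing": "authentication / impersonation",
    "Tampering": "unauthorized modification",
    "Repudiation": "denying actions or transactions",
    "Information Disclosure": "unauthorized exposure of data",
    "Denial of Service": "availability disruption",
    "Elevation of Privilege": "unauthorized access or permissions",
}

# Illustrative element-type mapping (a simplification of STRIDE-per-element).
APPLICABLE = {
    "process": list(STRIDE),  # processes are exposed to all six
    "data_store": ["Tampering", "Repudiation", "Information Disclosure", "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure", "Denial of Service"],
    "external_entity": ["Spoofing", "Repudiation"],
}

def enumerate_threats(component: str, element_type: str) -> list[str]:
    """List the candidate threats to evaluate for one component."""
    return [f"{component}: {cat} ({STRIDE[cat]})" for cat in APPLICABLE[element_type]]

for line in enumerate_threats("Payments API", "process"):
    print(line)
```

The output is only a starting checklist; attack trees, threat intelligence, and vulnerability data then confirm or prune each candidate.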

Analyze Threats and Vulnerabilities:

  • Assess the likelihood and potential impact of each identified threat using the DREAD model (Damage, Reproducibility, Exploitability, Affected Users, Discoverability):
    • Damage: Assess the potential damage caused by the threat if it were to occur.
    • Reproducibility: Determine how easily the threat can be reproduced or exploited.
    • Exploitability: Evaluate the level of skill and resources required to exploit the threat.
    • Affected Users: Assess the number of users or systems that could be impacted by the threat.
    • Discoverability: Determine how easily the vulnerability or weakness can be discovered by potential attackers.
  • Identify potential privacy threats using the LINDDUN model (Linkability, Identifiability, Non-repudiation, Detectability, Disclosure of Information, Unawareness, Non-compliance):
    • Linkability: Determine if data from different sources can be combined to identify an individual or link their activities.
    • Identifiability: Assess if an individual can be singled out or identified within a dataset.
    • Non-repudiation: Evaluate if an individual can deny having performed an action or transaction.
    • Detectability: Determine if it is possible to detect that an item of interest exists within a system.
    • Disclosure of Information: Assess the risk of unauthorized access to or disclosure of sensitive information.
    • Unawareness: Evaluate if individuals are unaware of the data collection, processing, or sharing practices.
    • Non-compliance: Determine if the system or practices are not compliant with privacy laws, regulations, or policies.
  • Conduct vulnerability and weakness analysis using scanning tools, penetration testing, and code review techniques.
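DREAD is often applied as a simple numeric rubric: rate each of the five factors and average them. A hedged sketch, assuming a 1-10 scale and entirely made-up threat ratings:

```python
def dread_score(damage, reproducibility, exploitability, affected_users, discoverability):
    """Average of the five DREAD factors, each rated 1 (low) to 10 (high)."""
    factors = [damage, reproducibility, exploitability, affected_users, discoverability]
    assert all(1 <= f <= 10 for f in factors), "each factor must be rated 1-10"
    return sum(factors) / len(factors)

# Invented example threats and ratings for illustration only.
threats = {
    "SQL injection in search": dread_score(9, 8, 7, 9, 6),
    "Verbose error pages": dread_score(3, 9, 8, 4, 9),
}

# Highest score first: this ordering feeds the prioritization steps below.
for name, score in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:4.1f}  {name}")
```

Note that DREAD scoring is subjective; the value is in forcing consistent, comparable ratings across threats, not in the precision of any single number.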

Perform Attack Modeling:

  • Create and review attack models using the PASTA (Process for Attack Simulation and Threat Analysis) methodology:
    • Define Objectives: Establish the objectives and scope of the attack modeling exercise.
    • Define Technical Scope: Identify the key components, data flows, and trust boundaries of the system.
    • Application Decomposition: Break down the application into smaller, manageable components.
    • Threat Analysis: Identify and analyze potential threats using attack trees, threat intelligence, and vulnerability data.
    • Vulnerability & Weakness Analysis: Assess the system for vulnerabilities and weaknesses that could be exploited.
    • Attack Modeling: Simulate potential attack scenarios to determine the likelihood and impact of each threat.
    • Risk & Impact Analysis: Evaluate the risk and potential impact of each identified threat.
    • Countermeasure Analysis: Develop and recommend countermeasures to mitigate the identified risks.
  • Analyze the feasibility of each attack scenario.

Evaluate Risk and Impact:

  • Assess the overall risk posture of the system based on the identified threats, vulnerabilities, and attack models.
  • Determine the potential impact of each risk on the organization's business objectives and operations.
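One common way to express this step is a likelihood-by-impact score. The sketch below assumes a hypothetical 5x5 scale and invented risks; real rankings would come from the threat and attack analysis above:

```python
# (name, likelihood 1-5, impact 1-5) -- illustrative values only.
risks = [
    ("Credential stuffing against login", 4, 3),
    ("Ransomware via phishing", 3, 5),
    ("Insider data exfiltration", 2, 5),
]

# Score each risk as likelihood x impact and sort highest first.
scored = sorted(
    ((name, likelihood * impact) for name, likelihood, impact in risks),
    key=lambda kv: kv[1],
    reverse=True,
)
for name, score in scored:
    print(f"{score:2d}  {name}")
```

Here ransomware ranks highest because its high impact outweighs its moderate likelihood under this simple multiplication; that ordering then drives countermeasure prioritization.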

Decide on Countermeasures:

  • Develop and recommend countermeasures to mitigate the identified risks.
  • Consider the effectiveness, feasibility, and cost of each countermeasure.
  • Prioritize the implementation of countermeasures based on the risk level and available resources.

Validate and Iterate:

  • Review the threat model with stakeholders and subject matter experts.
  • Validate the assumptions made during the modeling process and update the model as necessary.
  • Iterate the threat modeling process regularly to account for changes in the system, new threats, and emerging vulnerabilities.

Communicate and Educate:

  • Communicate the results of the threat modeling exercise to relevant stakeholders, including management, development teams, and security personnel.
  • Provide training and awareness sessions to ensure that all stakeholders understand their roles and responsibilities in mitigating the identified risks.

Implement and Monitor:

  • Implement the selected countermeasures and integrate them into the system development lifecycle.
  • Establish monitoring and logging mechanisms to detect and respond to potential security incidents.
  • Regularly review and update the threat model and countermeasures based on changes in the system and the evolving threat landscape.

AI Revolution: Smarter Development, Stronger Security

The cloud computing landscape is experiencing a seismic shift driven by the exponential integration of Artificial Intelligence (AI). Recent advancements like Cognition AI's Devin and Microsoft's Copilot for Security showcase AI's potential to revolutionize software development and cybersecurity. 

Pay attention to the oncoming freight train of changes quickly coming to the computing world. 

Examples include: 

  • AI agents working in teams: Effectively. Perfect? No. Getting better? Quickly. E.g., Cognition AI's Devin.
  • Topically focused LLMs, LAMs, etc.: E.g., Microsoft's Copilot for Security. Or any topic you can think of. I developed one to teach social skills to a teenage girl in less than an hour.
  • UI navigation: Think about how the future of natural language query looks when I don't have to be an expert on the system UI anymore.
  • Already happening over the last year: The knowledge barrier to entry for many tasks continues to drop. It used to be "There's an app for that..." Now: "There's an AI for that."

AI Agents: The Realistic Future of Development

Cognition AI's Devin is a groundbreaking AI agent that plans and executes software projects with minimal human input. Operating autonomously in a sandbox environment, Devin learns from experience, rectifies mistakes, and utilizes tools like code editors and web browsers. Devin isn't meant to replace engineers, but rather augment them, freeing human talent for more complex tasks and ambitious goals.

Imagine AI agents like Devin seamlessly integrated into cloud environments. This could significantly enhance development efficiency and scalability. AI can automate routine tasks, assist in code development, optimize resource allocation, and improve system performance, all while reducing costs and development times. Furthermore, these AI collaborators can provide real-time insights, identify potential issues, and suggest improvements, fostering a truly collaborative approach to cloud-based software development.

AI-Powered Security: Every. Single. Tool.

Microsoft's Copilot for Security highlights the growing role of AI in tackling cloud security challenges. This AI-powered chatbot, leveraging OpenAI's GPT-4 and Microsoft's security expertise, assists security professionals in identifying and defending against threats. Copilot for Security will utilize the 78 trillion signals collected by Microsoft’s threat intelligence. Copilot provides real-time security updates, facilitates collaboration among teams, and even answers questions in natural language.

Integrating AI chatbots like Copilot into the cloud security landscape can significantly enhance threat detection and response. By analyzing code and files, providing real-time updates on security incidents, and enabling natural language queries, AI helps organizations stay ahead of threats and respond more effectively to cyberattacks. Additionally, AI chatbots lower the barrier to knowledge sharing and break down silos, fostering a more coordinated approach to cloud cybersecurity.

Scalability, Flexibility, and the Cloud's Future

The growing demand for adaptable AI solutions is reflected in Microsoft's pay-as-you-go pricing model for Copilot for Security. As AI becomes more embedded in cloud computing, expect to see more consumption-based pricing models, making AI-powered services accessible to businesses of all sizes.

The convergence of AI and cloud computing promises to drive innovation across industries. AI-driven automation and collaboration will be cornerstones of future cloud computing, enhancing efficiency, security, and scalability. As AI agents and chatbots like Devin and Copilot evolve, we can expect a future where AI seamlessly collaborates with human professionals, unlocking new opportunities for success in the cloud era. 

Embracing the Future: Be prepared. Be proactive. 

The introduction of Devin and Copilot for Security exemplifies AI's transformative impact on cloud-based development and security. By embracing AI-driven automation and collaboration, cloud providers and organizations can position themselves at the forefront of this revolution, driving innovation, efficiency, and security. As AI continues to shape the future of cloud computing, businesses that adapt and harness these technologies will be best equipped. Be prepared. Be proactive. 

Tuesday, March 12, 2024

Staying Current - Relevant - Continuous Learning

Protect your organization. Cybersecurity is a dynamic field where new threats, vulnerabilities, and technologies change, evolve, and emerge. Commit to continuous learning and skill development. Stay informed about the latest security trends, best practices, and tools. 

Resources for Staying Current:

Quick note! There's a lot here. I'd love to read all of this every day; it doesn't happen, and I have my favorites depending on my current role. The resources necessary for your success vary widely. For example, the ones here focus on security. They only scratch the surface of what's available, and they completely gloss over privacy, regulatory compliance, information governance, legal issues, etc.

Vendor-Specific Security Advisories:

Stay informed about security updates and patches from major technology companies.

Government and Non-Profit Security Organizations:

Follow updates from organizations for authoritative guidance and best practices.

Cybersecurity News and Blogs:

Stay informed about the latest security incidents, trends, and analysis through popular blogs and news sites.

Security Mailing Lists & Vulnerability Databases:

Subscribe to mailing lists to receive timely information about new vulnerabilities and exploits, and regularly check vulnerability databases to stay informed about newly discovered vulnerabilities and their potential impact.

Security Conferences:

Attend conferences to learn from industry experts, network with peers, and stay updated on the latest research and trends. Also check out the YouTube channels for each of these to see what talks have been recently published.

Online Security Communities:

Engage with online communities to learn from others, ask questions, and contribute to discussions.

Again, this isn't a complete, all-inclusive list of resources. Not even close. The objective is to provide exposure to options and importance. Other media I find to be helpful includes YouTube, Claude and other AI chat, and audiobooks. 

Continuous learning is essential. Make the choice to stay current, relevant, and effective. Yes, it's hard sometimes. It takes intentionality - and a little goes a long way. You can do it...! There are many, many more than just these sources. The purpose is to develop a comprehensive approach to continuous learning that combines staying informed about the latest security news, following best practices, and engaging with the cybersecurity community. 

Thursday, February 15, 2024

Ransomware Maturity and Kill Chains

Creating a maturity model to measure the effectiveness of remediations in preparing for and addressing common ransomware attacks involves developing a framework that assesses an organization's cybersecurity capabilities and readiness across multiple dimensions. This model typically ranges from initial (least mature) to optimized (most mature) stages, providing a path for continuous improvement. Here's how to create such a maturity model:

1. Define the Maturity Levels to be Used for Measurements (e.g.):

  • Initial (Level 1): Basic processes are ad-hoc, and ransomware preparedness is minimal or non-existent.
  • Developing (Level 2): Awareness of ransomware threats exists, with some informal processes and basic defensive measures in place.
  • Defined (Level 3): Formal processes and policies are established, with proactive measures to prevent ransomware attacks.
  • Managed (Level 4): Advanced defensive measures and continuous monitoring are in place, with a focus on managing and mitigating ransomware threats.
  • Optimized (Level 5): The organization continuously improves its ransomware defense mechanisms, leveraging advanced analytics, machine learning, and threat intelligence for predictive defense.

2. Identify Key Domains for Assessment

Break down the organization's cybersecurity posture into key domains. Take a quick look at the table above and you'll see common expectations such as:

  • Threat Intelligence
  • Identity and Access Management
  • Endpoint Protection
  • Network Security
  • Incident Response and Recovery
  • User Training and Awareness

3. Develop Assessment Criteria for Each Domain

For each domain, define specific criteria that measure the organization's maturity. These criteria should cover the processes, technologies, and practices relevant to preventing and responding to ransomware attacks. Criteria can include the effectiveness of backup and recovery strategies, the extent of employee training programs, the implementation of EDR solutions, etc.

4. Establish Metrics and Indicators

Define quantitative and qualitative metrics for evaluating maturity in each domain. Metrics could include the frequency of security audits, the speed of patch deployment, the number of successful phishing simulations, or the recovery time after an incident.

5. Conduct Assessments

Perform regular assessments against the maturity model to determine the current level of preparedness and effectiveness of ransomware remediation efforts. This involves gathering data through audits, interviews, and technical testing.

6. Analyze Results and Identify Gaps

Analyze the assessment results to identify gaps in the organization's ransomware preparedness. Compare current practices against the defined maturity levels to determine areas for improvement.
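The gap analysis itself can be as simple as subtracting current from target levels per domain. A sketch with illustrative numbers (the assessed levels and the uniform target of 4 are assumptions for the example):

```python
# Illustrative current maturity levels (0-5) per assessment domain.
current = {
    "Threat Intelligence": 2,
    "Identity and Access Management": 3,
    "Endpoint Protection": 4,
    "Incident Response and Recovery": 2,
}
# Assume a uniform target level for the example; real targets vary by domain.
target = {domain: 4 for domain in current}

# Keep only domains that fall short of target, largest gap first.
gaps = {d: target[d] - current[d] for d in current if target[d] > current[d]}
for domain, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{domain}: close a {gap}-level gap")
```

The domains with the largest gaps become the focus of the improvement plans in the next step.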

7. Develop Improvement Plans

Based on the gaps identified, develop targeted improvement plans for each domain. Plans should include short-term and long-term initiatives to enhance the organization's resilience against ransomware.

8. Implement, Monitor, and Review

Implement the improvement plans, continuously monitor their effectiveness, and review the organization's progress towards higher maturity levels. Adjust strategies as necessary based on evolving threats and business objectives.

9. Stakeholder Communication

Regularly communicate progress, risks, and achievements to stakeholders, including executive management, to ensure continued support and alignment with the organization's overall risk management strategy.

10. Continuous Improvement

Incorporate lessons learned from assessments, incidents, and industry developments into the maturity model. Continuously refine the model to address new ransomware tactics and techniques.

Creating a maturity model for ransomware preparedness is an iterative process that helps organizations systematically improve their cybersecurity posture, reduce their vulnerability to attacks, and enhance their ability to respond to and recover from incidents. 

Sunday, November 5, 2023

IT Audit Process - Review

Be careful! This flowchart isn't meant to be perfect. Audit isn't a one-size-fits-all answer. It's meant to illustrate an example process we followed when I was in the role of auditor. Several things might be very, very different from what I experienced. 

One notable exception to the norm is that the audit team purposefully and proactively created excellent working relationships with operations teams. The extra meetings and check-ins were built into our culture from the top-down. The result? Collaboration and transparency. Business units and data centers came to us with problems. We became an avenue for them to get funding. They benefited from working with us and we made sure it was as positive an experience as possible. A dream? Yes. However, this practice also resulted in attracting top talent from the operations teams asking to rotate through audit for the exposure to the business. The cross-functional experience benefited everyone and created a respected organization with truly excellent outcomes.

Thursday, October 19, 2023

Navigating the Waters of Change

I've lived through a lot of change. I know everything will work out despite the challenge of the uncertainty. Learn to struggle and fight well. Stay positive.

This information was gathered to help people at VMware as many prepare for the pending Broadcom acquisition by Lorelei Ghanizadeh Voorsanger | LinkedIn. Thank you, Lorelei, for sharing with so many others who need this information. You are a blessing. And thank you to the hundreds of people who have reached out to me and my peers with resources and help. You are appreciated. VMware peers, you are seen. Reach out to me; I have a lot of resources that may be helpful.

Update Your LinkedIn Profile

Invest in your branding. Review and optimize your LinkedIn profile. 

Improve Your Resume 

Pro Tip: Make sure your resume is ATS compliant. Most companies use Applicant Tracking Systems (ATS) to process resumes, and these systems can cause qualified candidates to slip through the cracks.

Cover Letter Writing Resources

The AWS Cloud Resume Challenge

Manage the Job Search 

Application Tracker

Job Search Sites & Placement services – US-centric

Wednesday, September 20, 2023

PCI DSS Vulnerability Scanning and Penetration Testing Hygiene

Image: DALL·E 3
The Payment Card Industry Data Security Standard (PCI DSS) is an essential benchmark for businesses that store, process, or transmit cardholder data. The introduction of PCI DSS v4.0 brings several clarifications and new layers of complexity. Today, we’ll take a look at internal and external vulnerability scans and penetration testing. 

Vulnerability Scans (11.3.1.3 and 11.3.2.1)

One of the critical security controls that PCI DSS v4.0 emphasizes is the need for internal vulnerability scans. Companies must perform these scans after any 'significant change,' as defined by the standard. Significant changes include things like adding new hardware, software, or making considerable upgrades to existing infrastructure.

The scans aim to detect and resolve high-risk and critical vulnerabilities based on the entity’s vulnerability risk rankings. Following the scan, any detected vulnerabilities must be resolved, and rescans should be conducted as needed.

External vulnerability scans are equally important and follow the same triggering mechanism—significant changes in the environment. Here, the focus is on resolving vulnerabilities scored 4.0 or higher by the Common Vulnerability Scoring System (CVSS). As with internal scans, rescans are required as necessary to confirm that vulnerabilities have been adequately addressed.
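Mechanically, the external-scan triage boils down to filtering findings at the 4.0 CVSS threshold. A sketch with invented finding IDs and scores (real results would come from your scanning tool):

```python
# Hypothetical external scan results; per 11.3.2.1, findings scored
# 4.0 or higher by CVSS must be resolved, with rescans to confirm.
findings = [
    {"id": "FINDING-001", "cvss": 9.8},
    {"id": "FINDING-002", "cvss": 3.7},
    {"id": "FINDING-003", "cvss": 5.4},
]

CVSS_THRESHOLD = 4.0
must_resolve = [f for f in findings if f["cvss"] >= CVSS_THRESHOLD]
for f in must_resolve:
    print(f"{f['id']} (CVSS {f['cvss']}) must be resolved and verified by rescan")
```

Internal scans use the same pattern but filter on the entity's own risk rankings rather than the fixed CVSS threshold.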

Penetration Testing (11.4.2, 11.4.3)

Internal penetration testing is a more aggressive form of evaluation and should be conducted at least once every 12 months or after any significant change to the infrastructure or application. The testing can be carried out either by a qualified internal resource or a qualified external third-party, provided that there is organizational independence between the tester and the entity being tested. Notably, the tester doesn't need to be a Qualified Security Assessor (QSA) or an Approved Scanning Vendor (ASV).

Much like its internal counterpart, external penetration testing is required annually or after any significant alterations to the system. The testing must also be conducted by qualified resources and should follow the entity’s defined methodology for testing.

What Constitutes a 'Significant Change'?

PCI DSS v4.0 is pretty broad in what it considers to be 'significant changes,' effectively encompassing any new hardware, software, or networking equipment added to the Cardholder Data Environment (CDE), as well as any replacement or major upgrades to existing hardware and software in the CDE. The list is extensive and is aimed at ensuring that any change, no matter how seemingly minor, is given adequate attention from a security perspective.

Summary of Requirements

The PCI DSS v4.0 requirements for vulnerability scans and penetration testing provide a structured approach for entities to keep their data environments secure. While these requirements might seem stringent, they offer a well-defined framework for securing cardholder data against the backdrop of ever-advancing cyber threats. Adhering to these requirements is not just about ticking compliance boxes; it’s about taking the necessary steps to protect your organization and its stakeholders.

  • Internal vulnerability scans:
    • 11.3.1.3 Internal vulnerability scans are performed after any significant change as follows:
      • High-risk and critical vulnerabilities (per the entity’s vulnerability risk rankings defined at Requirement 6.3.1) are resolved.
      • Rescans are conducted as needed.
  • External vulnerability scans:
    • 11.3.2.1 External vulnerability scans are performed after any significant change as follows:
      • Vulnerabilities that are scored 4.0 or higher by the CVSS are resolved.
      • Rescans are conducted as needed.
  • Internal penetration testing:
    • 11.4.2 Internal penetration testing is performed:
      • Per the entity’s defined methodology, at least once every 12 months
      • After any significant infrastructure or application upgrade or change
      • By a qualified internal resource or qualified external third-party
      • Organizational independence of the tester exists (not required to be a QSA or ASV).
  • External penetration testing:
    • 11.4.3 External penetration testing is performed:
      • Per the entity’s defined methodology, at least once every 12 months
      • After any significant infrastructure or application upgrade or change
      • By a qualified internal resource or qualified external third party
      • Organizational independence of the tester exists (not required to be a QSA or ASV).
  • Significant changes are defined in PCI DSS to include (PCI-DSS-v4_0.pdf page 26):
    • New hardware, software, or networking equipment added to the CDE.
    • Any replacement or major upgrades of hardware and software in the CDE.
    • Any changes in the flow or storage of account data.
    • Any changes to the boundary of the CDE and/or to the scope of the PCI DSS assessment.
    • Any changes to the underlying supporting infrastructure of the CDE (including, but not limited to, changes to directory services, time servers, logging, and monitoring).
    • Any changes to third party vendors/service providers (or services provided) that support the CDE or meet PCI DSS requirements on behalf of the entity.


Thursday, September 14, 2023

Using Maturity Levels and Qualitative Measurement for Visualizing Technology Implementations

Example Maturity Model

Check out this maturity model. What does it mean to measure the maturity of a technology implementation qualitatively? And how can maturity levels help visualize the current and future states to meet control requirements? 

Let's unpack these concepts and show how qualitative measures can enrich the maturity model process, particularly with the use of visualization techniques like bar or radar graphs.

Maturity models serve as diagnostic tools, usually consisting of a sequence of maturity levels that provide a path for improvements. These models are vital for benchmarking and identifying the best practices that need to be implemented for organizational success. In technology implementation, they can gauge how effectively an organization is meeting its control requirements—be it in data security, governance, or software development lifecycle.

The Qualitative Dimension

While numbers and metrics provide a certain level of clarity, they often lack context. Qualitative measurements step in here to provide nuanced insights into otherwise cold data. Through expert interviews, case studies, and scenario analyses, qualitative assessments can address 'how' and 'why' questions that numbers cannot.

One of the powerful ways to present the qualitative aspect of maturity models is through visualization. A bar or radar graph can be used to overlay the current and future states of an organization's maturity levels.

Current State

Imagine a bar graph where the X-axis represents different control requirements like "Data Encryption," "User Access Management," and "Compliance Monitoring," and the Y-axis represents maturity levels from 0 (Non-existent) to 5 (Optimized). The current state can be represented by blue bars reaching up to the current maturity level for each control requirement.

This visualization allows stakeholders to immediately grasp which areas are well-managed and which need improvement. It's not just about the height of the bar but the story behind each bar—which can be enriched by qualitative inputs like expert opinions, employee feedback, and process reviews.

Future State

In the same graph, future state scenarios can be represented by a different color—say, green bars—overlaying or adjacent to the current state bars. These future state bars are not arbitrary but are informed by qualitative measures like scenario planning, risk assessments, and strategic discussions.

The juxtaposition of current and future states in one graph offers a compelling narrative. It shows where the organization aims to be, providing a clear vision for everyone involved.
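As a quick stand-in for a bar or radar graph, the current and future states can even be sketched as a text chart. In the hedged example below, the control names and levels are illustrative; '#' marks the current level and '.' the remaining gap to the future state:

```python
# (current level, future-state level) per control requirement, scale 0-5.
controls = {
    "Data Encryption": (3, 5),
    "User Access Management": (2, 4),
    "Compliance Monitoring": (4, 4),
}

width = max(len(name) for name in controls)
for name, (current, future) in controls.items():
    # Current state as solid marks, gap to future state as dots.
    bar = "#" * current + "." * (future - current)
    print(f"{name:<{width}}  {bar}  ({current} -> {future})")
```

In practice you'd render the same data with a charting library, but even this plain-text form juxtaposes current and future states the way the graph does.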

Utility of Qualitative Maturity Models

Maturity levels, when fleshed out with qualitative measurements, offer more than a snapshot of the present; they provide a roadmap for the future. Visual representations like bar or radar graphs give life to these qualitative insights, making them easy to understand and act upon.

So, the next time you consider assessing your organization’s technology maturity, think beyond numbers. Look at the stories those numbers can tell, and use qualitative measures to fill in the gaps. And don't just keep these insights in spreadsheets and reports—visualize them. 

Combine qualitative measures with visualization techniques and build a more meaningful, actionable, and comprehensive roadmap. Aim for a balanced, nuanced, and visually engaging approach to understand the current state and opportunity for improvement.

Example Output 

Here's a quick assessment of an organization's adherence to the NIST Privacy Framework. The beauty of this method - by the way - is that it's fast and easy to create this chart using qualitative measures. Search for the Privacy Framework spreadsheet under the downloads section if you want a copy of this.

Tuesday, September 5, 2023

NIST.SP.800-66r2.ipd Worksheet - HIPAA Indexed on NIST

HIPAA to NIST and NIST to HIPAA indexed worksheets in a single spreadsheet based on the Initial Public Draft (ipd) are posted on the downloads website. Look for the workbook 2022 HIPAA Crosswalk SP 800-66 ipd Table 12 on:

www.compliancequickstart.com.

Friday, September 1, 2023

Numbers and Narratives: The Power of Qualitative and Quantitative Feedback

While technological prowess is crucial for cybersecurity, human factors are often the linchpin that determines an organization's susceptibility to cyber threats. As we navigate this ever-evolving landscape, the role of learning programs in enhancing cybersecurity awareness cannot be overstated. But how do we measure the effectiveness of these initiatives? The answer lies in a meticulous blend of quantitative and qualitative feedback.

The Quantitative Dimension

In the realm of cybersecurity learning programs, quantitative data acts as the backbone that offers empirical evidence of program effectiveness. This data, collected through various channels—from real-world cybersecurity incidents and metrics on employee reporting to targeted simulations and longitudinal studies—provides a measurable barometer of your organization's cybersecurity posture. It can also help tailor training materials to specific departments, evaluate ROI, and keep content up to date. This section will detail the key types of quantitative data that you should focus on, offering a robust framework for continuously enhancing your cybersecurity initiatives through actionable metrics.

  1. Cybersecurity Incident Data - Utilize real-world data on past incidents to simulate realistic scenarios in your training programs. For example, if there has been a rise in phishing attacks, including similar scenarios in your learning modules can help prepare the workforce better.
  2. Metrics on Incident Reporting - Review how many employees report potential cybersecurity events pre- and post-training. An increase in reports post-training could indicate higher awareness.
  3. Simulated Attack Responses - Phishing simulations can provide invaluable data. If 90% of your employees ignore a phishing email post-training compared to 50% pre-training, you know you’re on the right track.
  4. Longitudinal Data - Track the program's impact over time to identify trends. Maybe the initial spike in awareness drops after six months, indicating a need for refresher courses.
  5. Employee Testing Data - Compare employee cybersecurity test scores before, immediately after, and three months post-training to assess knowledge retention.
  6. Performance by Department - Do tech departments outperform sales in cybersecurity awareness? This could guide department-specific training.
  7. Training Attendance and Completion Rates - Low attendance or completion could indicate that the training is too cumbersome or not engaging enough.
  8. Quantitative Surveys and Costs - Use closed-ended surveys for quick, quantifiable feedback. Also, calculate the per-participant cost of developing and delivering the program for ROI assessment.
  9. Privacy and Technical Metrics - Track the frequency and type of privacy or cybersecurity events to identify the need for role-based training. Changes following technical training—like a reduction in accounts with privileged access—can also be invaluable metrics.
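For instance, the pre- vs. post-training comparison in items 2 and 3 reduces to a simple rate calculation. The counts below are invented for illustration:

```python
def report_rate(reported: int, delivered: int) -> float:
    """Share of simulated phishing emails that employees reported."""
    return reported / delivered

pre = report_rate(50, 100)   # before training (hypothetical counts)
post = report_rate(90, 100)  # after training

# Relative improvement shows how much the training moved the needle.
improvement = (post - pre) / pre
print(f"Reporting rate rose from {pre:.0%} to {post:.0%} ({improvement:.0%} relative gain)")
```

The same arithmetic applies to test scores, completion rates, or incident counts; the longitudinal point in item 4 is simply this calculation repeated over time.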

The Qualitative Dimension

While quantitative metrics provide the hard facts, it's the qualitative data that enriches our understanding by adding context, nuance, and depth to these numbers. Qualitative feedback captures the human elements that are often overlooked in cybersecurity initiatives. From capturing employees' responses about the program's delivery and content to conducting focus groups for in-depth insights, qualitative data allows us to gauge the intangibles that make or break a learning program. In this section, we will delve into various types of qualitative feedback, including presenter evaluations, open-ended surveys, and even observations from the training sessions, to provide a more holistic assessment of your cybersecurity education efforts.

  1. Presenter and Program Feedback - Encourage employees to share feedback on trainers and program content to make real-time improvements.
  2. Open-Ended Surveys and Reports - Use these to gather nuanced opinions. Maybe the training material is excellent, but the pace is too fast?
  3. Focus Groups and Observations - Conduct these with a cross-section of employees to get richer insights into the learning experience, identifying areas for improvement.
  4. Suggestion Box - A suggestion box allows employees to provide candid feedback and innovative ideas for program improvement.

A Marriage of Metrics and Mindsets 

Combining quantitative data with qualitative insights will not only paint a comprehensive picture of your program's effectiveness but will also guide data-informed decisions for future improvements. For instance, if your quantitative data indicates high knowledge retention but qualitative feedback points to low engagement, you may need to inject more interactive elements into your program. Because when it comes to cybersecurity, an empowered workforce is your best line of defense. 

And if you haven't already, check out NIST Special Publication 800-50 and look for the upcoming Rev. 1. This is a comprehensive guideline that serves as an invaluable resource for information security education, training, and awareness. Thank you to NIST and the industry authors and contributors for your tireless work in advancing the field and providing a foundational resource for cybersecurity professionals everywhere.