Thursday, June 5, 2025

Learning Never Stops

In the technology industry, the moment you stop learning is the moment you start falling behind. This isn't just a motivational platitude—it's the reality of working in a field where new services, features, and best practices emerge constantly. The question isn't whether you need to keep learning; it's how you can make that learning both effective and sustainable.

Why Continuous Learning Isn't Optional Anymore

The cloud computing landscape changes at breakneck speed. Consider that AWS launches hundreds of new features and services each year, other providers are growing just as quickly, and the ecosystem of solution providers continues to evolve.

The security practices that were cutting-edge two years ago might be baseline expectations today. The architectural patterns you mastered last quarter could be superseded by more efficient approaches next month.

This rapid evolution creates both a challenge and an opportunity. Those who embrace continuous learning don't just keep up—they get ahead. They become the go-to experts, the problem solvers, the ones who can navigate new challenges with confidence because they've built a habit of staying current.

The Compound Effect of Consistent Learning

Here's what many professionals miss: learning isn't just about acquiring new skills—it's about building your capacity to learn faster. Each new concept you master makes the next one easier to grasp. Every hands-on experience strengthens your ability to troubleshoot, adapt, and innovate.

Think of it as compound interest for your career. Small, consistent investments in learning create exponential returns over time. The AWS engineer who regularly explores new services doesn't just know more tools—they develop an intuitive understanding of how AWS thinks, making them far more effective at solving complex problems.

Moving Beyond Documentation: The Power of Applied Learning

Reading documentation and watching videos has its place, but true expertise comes from hands-on experience. This is where initiatives like AWS Activation Days become invaluable in your learning strategy. These events represent exactly what effective continuous learning should look like:

Structured, Yet Practical: Rather than random skill acquisition, these workshops provide guided pathways through complex topics like container security, threat detection, and secrets management—areas that are critical today and will only become more important.

Risk-Free Experimentation: Working in sandbox environments means you can break things, make mistakes, and learn from them without consequences. This is the kind of safe-to-fail learning environment that accelerates skill development.

Expert Guidance: Learning alongside AWS specialists means you're not just following tutorials—you're absorbing best practices, common pitfalls, and real-world insights that only come from experience.

Building Learning Into Your Career Strategy

The most successful cloud professionals don't treat learning as something they do when they have time—they make it a non-negotiable part of their professional routine. Here's how events like Activation Days fit into that strategy:

Regular Skill Refreshers: Even if you think you know AWS security, attending a security-focused workshop often reveals new approaches or updates you've missed. Technology moves fast; our assumptions about what we know need regular validation.

Exposure to Adjacent Skills: That networking workshop might not seem directly relevant to your database-focused role, but understanding how networking impacts database performance makes you a more complete professional.

Future-Proofing: By regularly engaging with new services and approaches, you're building familiarity with where the industry is heading, not just where it's been.

The Real Return on Investment

Time spent learning isn't time away from "real work"—it's the most important work you can do for your long-term career success. The professional who dedicates six hours to a comprehensive security workshop isn't just learning about AWS WAF and GuardDuty; they're:

  • Building confidence to tackle complex security challenges
  • Developing a vocabulary to communicate effectively with security teams
  • Creating a foundation for even more advanced learning
  • Positioning themselves as someone who takes initiative in their professional development

Making It Sustainable

The key to successful continuous learning is making it manageable. Look for opportunities that deliver maximum impact with focused time investment. Activation Days are designed exactly for this—comprehensive learning experiences that fit into a busy professional schedule while delivering immediately applicable skills.

The goal isn't to learn everything about everything. It's to consistently build your expertise in areas that matter to your career while staying curious and adaptable enough to grow with the industry.

Start Now

The cloud computing field will continue to evolve rapidly. The professionals who thrive won't be those who knew the most at any given moment—they'll be those who developed the strongest learning habits. Every workshop attended, every new service explored, and every hands-on lab completed is an investment in a career that can adapt and grow with the technology landscape.

Monday, May 19, 2025

Interview with God

Posted on LinkedIn by a wonderful soul. May you all be blessed to have people like this in your lives. 

Michaela Iorga, PhD • NIST OSCAL Director, Senior Security Lead, Advisor • 2 days ago • Visible to anyone on or off LinkedIn

There is a Romanian poem by Octavian Paler that gave me strength when I lost my baby boy. Its wisdom might help others too. The part “Learn that it’s just a matter of seconds to cause severe wounds in the hearts of the loved ones…and that it takes several years for these to heal; learn that a rich man isn’t the one who has the most, but the one who needs the least” was a motto for me.

INTERVIEW WITH GOD
— by Octavian Paler

“So, you would like to interview me,”…said God.

“If you have time”…I replied.

God smiled.
“My time is eternity… What questions would you like to ask me?”

“What is the most surprising thing that you find in humans?”

God answered:
“The fact that they get bored of childhood, and are in a rush to grow up…, then they crave for being children again; they waste their health making money…and afterwards they spend money to regain health.
The fact that they ponder over the future with fear and forget the present, therefore they live neither the future nor the present; they lead their lives like they will never die and die like they have never lived.”

God took my hand and we remained silent for a while. Then I asked:

“As a parent, what life lesson would you like your children to value most?”

“Learn that it’s just a matter of seconds to cause severe wounds in the hearts of their loved ones…and that it takes several years for these to heal; learn that a rich man isn’t the one who has the most, but the one who needs the least; they should learn that there are people who love them, but just don’t know how to express their feelings; learn that two people can look at the same thing and see it differently; learn that it is not enough to forgive others, they also have to forgive themselves.”

“Thank you for your time…” I humbly replied. “Would there be something more that you want people to know?”

God looked at me smiling and said:

“Only the fact that I am here…always.”


Monday, April 7, 2025

Databricks AI Security Framework (DASF) | Third-party Tools

Amazing work by the team at Databricks. Nice job!

Databricks AI Security Framework (DASF) | Databricks

This link leads to a PDF that generously includes links to a LOT of information. Thank you for including them!

Here's one such list. I'm storing it here as a quick yellow sticky. Go check out their work for more. 

Model Scanners

  • HiddenLayer Model Scanner: A tool that scans AI models to detect embedded malicious code, vulnerabilities, and integrity issues, ensuring secure deployment.
  • Fickling: An open-source utility for analyzing and modifying Python pickle files, commonly used for serializing machine learning models.
  • Protect AI Guardian: An enterprise-level tool that scans third-party and proprietary models for security threats before deployment, enforcing model security policies.
  • AppSOC’s AI Security Testing solution: Helps proactively identify and assess risks from LLM models by automating model scanning, simulating adversarial attacks, and validating trust in connected systems, ensuring models and ecosystems are safe, compliant, and deployment-ready.

Model Validation Tools

  • Robust Intelligence Continuous Validation: A platform offering continuous validation of AI models to detect and mitigate vulnerabilities, ensuring robust and secure AI deployments.
  • Protect AI Recon: A product that automatically validates LLM model performance against common industry framework requirements (OWASP, MITRE ATLAS).
  • Vigil LLM security scanner: A tool designed to scan large language models (LLMs) for security vulnerabilities, ensuring safe deployment and usage.
  • Garak Automated Scanning: An automated system that scans AI models for potential security threats, focusing on detecting malicious code and vulnerabilities.
  • HiddenLayer AIDR: A solution that monitors AI models in real time to detect and respond to adversarial attacks, safeguarding AI assets.
  • Citadel Lens: A security tool that provides visibility into AI models, detecting vulnerabilities and ensuring compliance with security standards.
  • AppSOC’s AI Security Testing solution: Helps proactively identify and assess risks from LLM models by automating model scanning, simulating adversarial attacks, and validating trust in connected systems, ensuring models and ecosystems are safe, compliant, and deployment-ready.

AI Agents

  • Arhasi R.A.P.I.D: A platform offering rapid assessment and protection of AI deployments, focusing on identifying and mitigating security risks.

Guardrails for LLMs

  • NeMo Guardrails: A toolkit for adding programmable guardrails to AI models, ensuring they operate within defined safety and ethical boundaries.
  • Guardrails AI: A framework that integrates safety protocols into AI models, preventing them from generating harmful or biased outputs.
  • Lakera Guard: A security solution that monitors AI models for adversarial attacks and vulnerabilities, providing real-time protection.
  • Robust Intelligence AI Firewall: A protective layer that shields AI models from adversarial inputs and attacks.
  • Protect AI Layer: Provides LLM runtime security, including observability, monitoring, and blocking for AI applications; an enterprise-grade offering from the same team that built the industry-leading open-source solution LLM Guard.
  • Arthur Shield: A monitoring solution that tracks AI model performance and security, detecting anomalies and potential threats in real time.
  • Amazon Guardrails: A set of safety protocols integrated into Amazon's AI services to ensure models operate within ethical and secure boundaries.
  • Meta Llama Guard: Security measures Meta implemented to protect its Llama models from vulnerabilities and adversarial attacks.
  • Arhasi R.A.P.I.D: A platform offering rapid assessment and protection of AI deployments, focusing on identifying and mitigating security risks.

DASF Validation and Assessment Products and Services

  • Safe Security: SAFE One makes cybersecurity an accelerator to the business by delivering the industry's only data-driven, unified platform for managing all your first-party and third-party cyber risks.
  • Obsidian: Obsidian Security combines application posture with identity and data security, safeguarding SaaS.
  • EQTY Labs: EQTY Lab builds advanced governance solutions to evolve trust in AI.
  • AppSOC: Makes Databricks the most secure AI platform with real-time visibility, guardrails, and protection.

Public AI Red Teaming Tools

  • Garak: An automated scanning tool that analyzes AI models for potential security threats, focusing on detecting malicious code and vulnerabilities.
  • Protect AI Recon: A product with a full suite of red teaming options for AI applications, including a library of common attacks, human-augmented attacks, and LLM-generated scans, complete with mapping to common industry frameworks like OWASP and MITRE ATLAS.
  • PyRIT: A Python-based tool for testing the robustness of AI models against adversarial attacks, ensuring model resilience.
  • Adversarial Robustness Toolbox (ART): An open-source library that provides tools to assess and improve the robustness of machine learning models against adversarial threats.
  • Counterfit: A tool designed to test AI models for vulnerabilities by simulating adversarial attacks, helping developers enhance model security.
  • ToolBench: A suite of tools for evaluating and improving the security and robustness of AI models, focusing on detecting vulnerabilities.
  • Giskard-AI LLM scan: A tool that scans large language models for security vulnerabilities, ensuring safe deployment and usage.
  • HiddenLayer Automated Red Teaming for AI: A service that simulates adversarial attacks on AI models to identify vulnerabilities and strengthen defenses.
  • Fickle scanning tools: Utilities designed to analyze and modify serialized Python objects, commonly used in machine learning models, to detect and mitigate security risks.
  • CyberSecEval 3: A platform that evaluates the security posture of AI systems, identifying vulnerabilities and providing recommendations for mitigation.
  • Parley: A tool that facilitates secure and compliant interactions between AI models and users, ensuring adherence to safety protocols.
  • BITE: A framework for testing the security and robustness of AI models by simulating various adversarial attack scenarios.
  • Purple Llama: An umbrella project that over time will bring together tools and evals to help the community build responsibly with open generative AI models. The initial release includes tools and evals for cybersecurity and input/output safeguards, with more contributions planned.
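Pickle scanners like Fickling exist because Python's pickle format can execute arbitrary code the moment a file is loaded. Here's a minimal standard-library sketch (my own illustration, not Fickling's actual API) of why static scanning is both necessary and possible: the opcode stream of a pickle names the callables it will invoke, so a scanner can spot them without ever unpickling.

```python
import pickle
import pickletools

# Unpickling can run arbitrary code via the __reduce__ protocol. This builds
# a harmless "malicious" pickle, then inspects its opcodes WITHOUT loading it.

class Payload:
    def __reduce__(self):
        # On pickle.load, this would call print("side effect!").
        # A real attack would reference os.system or similar instead.
        return (print, ("side effect!",))

data = pickle.dumps(Payload(), protocol=0)  # protocol 0 keeps args human-readable

# Statically scan the opcode stream for GLOBAL/STACK_GLOBAL references --
# the opcodes that let a pickle name an arbitrary importable callable.
suspicious = [
    (op.name, arg)
    for op, arg, pos in pickletools.genops(data)
    if op.name in ("GLOBAL", "STACK_GLOBAL")
]

print(suspicious)  # the referenced callable is visible without executing it
```

A real scanner would flag any reference outside an allowlist (anything in os, subprocess, builtins, etc.); safer serialization formats such as safetensors sidestep the problem entirely by storing only tensor data.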

 


Wednesday, April 2, 2025

Yes. There's a Lot. The DoD Cybersecurity Policy Chart - CSIAC


The DoD Cybersecurity Policy Chart - CSIAC

Quoting directly from their website. They said it well enough.

"The goal of the DoD Cybersecurity Policy Chart is to capture the tremendous scope of applicable policies, some of which many cybersecurity professionals may not even be aware of, in a helpful organizational scheme. The use of colors, fonts, and hyperlinks is designed to provide additional assistance to cybersecurity professionals navigating their way through policy issues in order to defend their networks, systems, and data.

At the bottom center of the chart is a legend that identifies the originator of each policy by a color-coding scheme. On the right-hand side are boxes identifying key legal authorities, federal/national level cybersecurity policies, and operational and subordinate level documents that provide details on defending the DoD Information Network (DoDIN) and its assets. Links to these documents can also be found in the chart."

Thursday, January 16, 2025

Training Links

Helpful post! This is from LinkedIn.

🚨 SHARE SOMEONE NEEDS IT 🚨
💥 𝐅𝐑𝐄𝐄 𝐈𝐓 𝐨𝐫 𝐂𝐲𝐛𝐞𝐫𝐬𝐞𝐜𝐮𝐫𝐢𝐭𝐲 𝐭𝐫𝐚𝐢𝐧𝐢𝐧𝐠!💥
Huge list of computer science resources (This one is great! Some links might not work, but I'm sure you can find them by doing a quick search) - https://lnkd.in/gQvxbypj

🔗 CompTIA Security+ - https://lnkd.in/gyFy_CG9
🔗 CISSP - https://lnkd.in/gUFjihpJ
🔗 Databases - https://lnkd.in/gWQmYwib
🔗 Penetration testing - https://lnkd.in/gAdgyY6h

🔗 Web application testing - https://lnkd.in/g5FkXWej

🔗 Weekly HackTheBox series and other hacking videos - https://lnkd.in/gztivT-D

🔗 Resources for practicing what you learned:

🔗 Network simulation software https://lnkd.in/gRMak7_x

🔗 Virtualization software https://lnkd.in/gFkyFVvF

🔗 Linux operating systems
https://lnkd.in/g2M__A5n
https://lnkd.in/gyc4R_F7
https://lnkd.in/gSiHYRNg
https://lnkd.in/g5GsUT7H

🔗 Microsoft Operating Systems
https://lnkd.in/gP3nxKpZ

🔗 Networking - https://lnkd.in/gNm8RhtS

🔗 More Networking - https://lnkd.in/ghqw2sHZ

🔗 Even More Networking - https://lnkd.in/g4fp8WFa

🐾 Linux - https://lnkd.in/g7KJBUYd

🐾 More Linux - https://lnkd.in/gUK8PU4p

🔗 Windows Server - https://lnkd.in/gWUTmN-5

🔗 More Windows Server - https://lnkd.in/gsWZQnwj

🔗 Python - https://lnkd.in/g_NpsqEM

🔗 Golang - https://lnkd.in/gmwz4ed5
🔗 Capture the flag
https://lnkd.in/gpnYs5Qj
https://www.vulnhub.com/
https://lnkd.in/gn2AEYhw
https://lnkd.in/g5FkXWej
Full credit: G M Faruk Ahmed, CISSP, CISA