Wednesday, March 4, 2020

Cloud Security & Compliance Success = Repeating a Thousand Good Decisions.

There is no way around it. You must maintain the security of your systems – and the compliance of your systems – over time.

What does this mean? How do we get there? Here's the short answer.

Assurance is the confidence in your ability to provide for the confidentiality, integrity, availability, and accountability of your systems (NIST). Getting your systems set up "in the cloud" is one thing. Doing it with the appropriate controls to enforce security and meet compliance requirements… That's another. Step up the maturity of your processes and maintain your security and compliance posture over time. Step it up further to provide actionable intelligence for proactive (automatic or manual) responses. Document everything. Details matter.

This has been discussed before, and this is an illustration that I created a few years ago. It came up again this week while discussing governance with a peer. Maintaining and governing systems requires work. It. Just. Does.

Automation is a great tool, but consider the details as well. What is your scope? Is it accurate? How much of it does automation cover? Who covers the rest? What about the business processes? Which ones are supportive of your endeavor? Which ones are antagonistic? What is an acceptable substantive change? According to what metrics? How do you measure your success?

It's difficult to do. It requires revisiting previously executed routines multiple times and measuring your results. Success is derived when you stop looking at this as a series of athletic sprints (not to be confused with agile processes). Set yourself up with the endurance to excel in ongoing operations.
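
To make this concrete, here is a minimal sketch in Python of treating compliance as a repeated, measured routine rather than a sprint. The checks are hypothetical placeholders (in a real environment they would query your cloud provider, CMDB, or scanners), and the history file is just a stand-in for a proper data store; the point is that the same checks run again and again, and the results are stamped, stored, and trended.

```python
import json
import datetime
from pathlib import Path

# Hypothetical checks; each returns True (pass) or False (fail).
# In practice these would query your cloud provider, CMDB, or scanners.
def encryption_at_rest_enabled() -> bool:
    return True   # placeholder

def audit_logging_enabled() -> bool:
    return False  # placeholder

CHECKS = {
    "encryption_at_rest": encryption_at_rest_enabled,
    "audit_logging": audit_logging_enabled,
}

def run_checks(history_file: Path = Path("compliance_history.json")) -> dict:
    """Run every check, stamp the results, and append them to a history file
    so posture can be trended over time instead of judged once."""
    results = {name: check() for name, check in CHECKS.items()}
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "results": results,
        "checks_automated": len(CHECKS),   # how much automation covers
        "pass_rate": sum(results.values()) / len(results),
    }
    history = json.loads(history_file.read_text()) if history_file.exists() else []
    history.append(record)
    history_file.write_text(json.dumps(history, indent=2))
    return record

if __name__ == "__main__":
    print(run_checks())
```

Swap the placeholder functions for real controls, and the coverage and pass-rate numbers become exactly the metrics the questions above are asking for.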

Wednesday, February 12, 2020

15 Top-Paying IT Certifications for 2020

Here's an interesting article from Global Knowledge. Notice that the top certifications are focused on cloud, security, and risk. Importantly, both here and among the pundits discussing this list, you'll see that in many cases multiple certifications matter, demonstrating a broad base of knowledge, and that the average level of experience may be higher than you think.

Take a look at the details here:
https://www.globalknowledge.com/us-en/resources/resource-library/articles/top-paying-certifications/.

Top-paying certifications:

  1. Google Certified Professional Cloud Architect — $175,761
  2. AWS Certified Solutions Architect – Associate — $149,446
  3. CISM – Certified Information Security Manager — $148,622
  4. CRISC – Certified in Risk and Information Systems Control — $146,480
  5. PMP® – Project Management Professional — $143,493
  6. CISSP – Certified Information Systems Security Professional — $141,452
  7. CISA – Certified Information Systems Auditor — $132,278
  8. AWS Certified Cloud Practitioner — $131,465
  9. VCP6-DCV: VMware Certified Professional 6 – Data Center Virtualization — $130,226
  10. ITIL® Foundation — $129,402
  11. Microsoft Certified: Azure Fundamentals — $126,653
  12. Microsoft Certified: Azure Administrator Associate — $125,993
  13. CCA-N: Citrix Certified Associate – Networking — $125,264
  14. CCNP Routing and Switching — $119,178
  15. CCP-V: Citrix Certified Professional – Virtualization — $117,069

Monday, February 10, 2020

Scaled Agile Framework®: SAFe® for Lean Enterprises

Image Credit: https://www.scaledagileframework.com

I'm a supportive fan of what Dean Leffingwell, Creator of SAFe® and Chief Methodologist at Scaled Agile, Inc., has done with Agile development. He's created an excellent body of work. Please take a minute to view his website for more information and learn how to leverage his work and lower your project risk with your own teams.

Why is this important? Let's say your strategy and technology decisions are correct. Tactical execution is still difficult. Presuming your strategic direction is on target, how do you get there? Effective execution can be learned, and here's one place to start. The unfortunate reality is that poorly executed projects often result in insecure systems. Security controls are poorly implemented or left out entirely. I've often said that based on the paperwork alone, I get a good feel for how well the security controls are implemented. Stop taking shortcuts. Do it right the first time.

The following is from his website:

The latest version, SAFe 5.0, is built around the Seven Core Competencies of the Lean Enterprise that are critical to achieving and sustaining a competitive advantage in an increasingly digital age: 

  1. Lean-Agile Leadership – Advancing and applying Lean-Agile leadership skills that drive and sustain organizational change by empowering individuals and teams to reach their highest potential 
  2. Team and Technical Agility – Driving team Agile behaviors as well as sound technical practices including Built-in Quality, Behavior-Driven Development (BDD), Agile testing, Test-Driven Development (TDD), and more 
  3. Agile Product Delivery – Building high-performing teams-of-teams that use design thinking and customer-centricity to provide a continuous flow of valuable products using DevOps, the Continuous Delivery Pipeline, and Release on Demand 
  4. Enterprise Solution Delivery – Building and sustaining the world’s largest software applications, networks, and cyber-physical solutions 
  5. Lean Portfolio Management – Executing portfolio vision and strategy formulation, chartering portfolios, creating the Vision, Lean budgets and Guardrails, as well as portfolio prioritization, and roadmapping 
  6. Organizational Agility – Aligning strategy and execution by applying Lean and systems thinking approaches to strategy and investment funding, Agile portfolio operations, and governance 
  7. Continuous Learning Culture – Continually increasing knowledge, competence, and performance by becoming a learning organization committed to relentless improvement and innovation

Tuesday, January 28, 2020

Resources Serving Veterans

More than 900 hours of free cybersecurity training for any government employee or veteran. Awesome.

The barrier to learning has been lowered to a decision of whether you want to do it or not. Go do it. Make it happen. Make a decision to improve the rest of your life today.

"The Federal Virtual Training Environment (FedVTE) provides free online cybersecurity training to federal, state, local, tribal, and territorial government employees, federal contractors, and US military veterans."
Also, fellow veterans... check out:

Friday, January 24, 2020

Homage to the Legends. Reference Monitor - October 1972 - James P. Anderson

Recently, I was asked to speak to a group of middle school students. One of the questions I asked them was, "Why is it possible to build ever more complex software and systems?" The answer is simple. Many have gone before us to build the foundations and think through complex problems. We get to benefit from their knowledge and accelerate our learning.

Occasionally, there are special luminaries who, by sheer luck or sheer genius, create breakthroughs in thinking that everyone else gets to benefit from.

James P. Anderson is responsible for pushing the understanding and usefulness of several powerful and simple constructs in computer security.

I continue to hear much of his material regurgitated as original thought. It's fine, insofar as somebody really does have an original thought. It's interesting to me just how relevant his research and ideas are today, nearly half a century after his original content was published.
NIST: http://csrc.nist.gov/publications/history/ande72a.pdf 

Understanding the History
(text below is from the Orange Book.)
Trusted Computer System Evaluation Criteria
https://fas.org/irp/nsa/rainbow/std001.htm
DoD 5200.28-STD
Library No. S225,711
DEPARTMENT OF DEFENSE STANDARD

Historical Perspective
In October 1967, a task force was assembled under the auspices of the Defense Science Board to address computer security safeguards that would protect classified information in remote-access, resource-sharing computer systems. The Task Force report, "Security Controls for Computer Systems," published in February 1970, made a number of policy and technical recommendations on actions to be taken to reduce the threat of compromise of classified information processed on remote-access computer systems.[34]  Department of Defense Directive 5200.28 and its accompanying manual DoD 5200.28-M, published in 1972 and 1973 respectively, responded to one of these recommendations by establishing uniform DoD policy, security requirements, administrative controls, and technical measures to protect classified information processed by DoD computer systems.[8;9]  Research and development work undertaken by the Air Force, Advanced Research Projects Agency, and other defense agencies in the early and mid 70's developed and demonstrated solution approaches for the technical problems associated with controlling the flow of information in resource and information sharing computer systems.[1]  The DoD Computer Security Initiative was started in 1977 under the auspices of the Under Secretary of Defense for Research and Engineering to focus DoD efforts addressing computer security issues.[33]

Concurrent with DoD efforts to address computer security issues, work was begun under the leadership of the National Bureau of Standards (NBS) to define problems and solutions for building, evaluating, and auditing secure computer systems.[17]  As part of this work NBS held two invitational workshops on the subject of audit and evaluation of computer security.[20;28]  The first was held in March 1977, and the second in November of 1978.  One of the products of the second workshop was a definitive paper on the problems related to providing criteria for the evaluation of technical computer security effectiveness.[20]  As an outgrowth of recommendations from this report, and in support of the DoD Computer Security Initiative, the MITRE Corporation began work on a set of computer security evaluation criteria that could be used to assess the degree of trust one could place in a computer system to protect classified data.[24;25;31]  The preliminary concepts for computer security evaluation were defined and expanded upon at invitational workshops and symposia whose participants represented computer security expertise drawn from industry and academia in addition to the government.  Their work has since been subjected to much peer review and constructive technical criticism from the DoD, industrial research and development organizations, universities, and computer manufacturers.

The DoD Computer Security Center (the Center) was formed in January 1981 to staff and expand on the work started by the DoD Computer Security Initiative.[15]  A major goal of the Center as given in its DoD Charter is to encourage the widespread availability of trusted computer systems for use by those who process classified or other sensitive information.[10]  The criteria presented in this document have evolved from the earlier NBS and MITRE evaluation material.

Here is one of the more powerful security constructs that I use on a regular basis as a simple model that can be explained to others. I first learned this in 1999 as part of my CISSP studies.

6.1  THE REFERENCE MONITOR CONCEPT
In October of 1972, the Computer Security Technology Planning Study, conducted by James P.  Anderson & Co., produced a report for the Electronic Systems Division (ESD) of the United States Air Force.[1]  In that report, the concept of "a reference monitor which enforces the authorized access relationships between subjects and objects of a system" was introduced.  The reference monitor concept was found to be an essential element of any system that would provide multilevel secure computing facilities and controls.

The Anderson report went on to define the reference validation mechanism as "an implementation of the reference monitor concept .  .  .  that validates each reference to data or programs by any user (program) against a list of authorized types of reference for that user." It then listed the three design requirements that must be met by a reference validation mechanism:
     a. The reference validation mechanism must be tamper proof.
     b. The reference validation mechanism must always be invoked.
     c. The reference validation mechanism must be small enough to be subject to analysis and tests, the completeness of which can be assured."[1]

Extensive peer review and continuing research and development activities have sustained the validity of the Anderson Committee's findings.  Early examples of the reference validation mechanism were known as security kernels.  The Anderson Report described the security kernel as "that combination of hardware and software which implements the reference monitor concept."[1]  In this vein, it will be noted that the security kernel must support the three reference monitor requirements listed above.
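
To show how simple the model is to explain, here is a toy sketch of the reference monitor concept in Python. The subjects, objects, and access rights are invented for illustration; a real reference validation mechanism lives in hardware, an operating system kernel, or a hypervisor, not in application code.

```python
# A toy reference monitor: every reference by a subject to an object is
# validated against a list of authorized access types.
# Names and rights below are invented for illustration.

AUTHORIZED_REFERENCES = {
    ("alice", "payroll.db"): {"read"},
    ("bob",   "payroll.db"): {"read", "write"},
}

class ReferenceMonitorError(PermissionError):
    """Raised when the reference validation mechanism denies an access."""

def mediate(subject: str, obj: str, access: str) -> None:
    # The reference validation mechanism: always invoked, small enough to
    # analyze, and (in a real system) tamper proof.
    allowed = AUTHORIZED_REFERENCES.get((subject, obj), set())
    if access not in allowed:
        raise ReferenceMonitorError(f"{subject} may not {access} {obj}")

def read_object(subject: str, obj: str) -> str:
    mediate(subject, obj, "read")   # every reference passes through the monitor
    return f"<contents of {obj}>"

if __name__ == "__main__":
    print(read_object("alice", "payroll.db"))   # permitted
    try:
        read_object("mallory", "payroll.db")    # denied
    except ReferenceMonitorError as exc:
        print("denied:", exc)
```

Even this toy honors the three requirements in spirit: every reference is mediated, the mechanism is small enough to analyze, and in a real system it would run where callers cannot tamper with it.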

Sunday, October 6, 2019

Dynamically Manage Technology Risk

How do you effectively deliver on the promise to your stakeholders to create a secure and compliant environment for the data in your possession? Highly regulated industries have always struggled with this question. The purpose of compliance is to secure data assets according to the regulations and standards applicable to a given data set. This suggests – and is written into every compliance regulation, standard, and security best practice – the implementation of a risk management process.

Herein lies the secret: implementing a risk management framework alone is a failure. Governance is the mastery and maturity necessary to deliver trusted levels of assurance. Delivered with confidence. Backed by cold, hard facts. Where is the quantifiable data? Where are the artifacts? How are they trending?

The number of intersecting information vectors makes it nearly impossible to deliver security and compliance with any real confidence – trusted levels of assurance. This is true for any industry, and particularly true for highly regulated industries that must face a growing diversity of regulations, standards, and security best practices across internal, external, and geopolitical pressures. This is complicated further by the duplication of compliance requirements without a single, centralized governing source of information defining the intent or expected implementation across all technologies. This is the elephant in the room. This is the situation. The complexity lies in the ever-growing legal jargon of requirements and the exponential growth in new market-disrupting capabilities driven by new technologies and new platforms. The underlying question is whether what you're doing now merely meets the bare necessity and maintains the status quo, or whether you have an opportunity to change the game.

Providers and Consumers
Managing sensitive data is a significant challenge as the number of factors providing inputs into operational processes and the number of recipients consuming the outputs of those processes continue to increase. This is true of all information that is stored, processed, and transmitted across your infrastructure. The additional challenge comes from vendors who are granted sensitive access to your systems and from the growing rights of consumers over their own information. This affects business and consumer relationships alike and puts a square demand on the organization to demonstrate to both that it can protect their data.

Cost of Compliance
The cost of compliance continues to escalate for obvious – and not-so-obvious – reasons. The obvious reason is that we must deal with an increasing number of regulations and standards that affect our business, particularly in highly regulated industries and geopolitically affected organizations. However, there is more.

There is a disconnect between the steps necessary to build a secure (compliant) system and architecture and the steps necessary to sustain compliant systems. The first involves building a system securely at a point in time. The second involves understanding that the state changes over time, and that factors such as patch management, configuration management, architecture reviews, policy reviews, and several others affect the ongoing security and compliance of that same system.

In simple terms, this is a governance problem. This leads to dangerous situations in which you have a perceived risk (what you think you know) that is less than your real risk (what you don’t know) because you don’t have the complete story or it’s not properly understood and calculated in context. Much could be written about both, but these are self-explanatory. You either view and calculate risk in context – addressing out-of-bounds conditions – or you suffer the consequences.

Tools are bought to address this problem. Unfortunately, tools are often poorly utilized and poorly understood. The financial services sector has been noted by several studies as having the most tools of any industry vertical. This means that financial services firms have more panes of glass, and possibly more information and input to identify issues and problems. The reality is that unless you have a system for prioritizing impact in both a general and a very specific sense, you will never be able to address the issues that eventually lead to an infrastructure breach.
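
As a small illustration of that prioritization point, here is a hedged sketch of scoring tool findings in context. The findings, weights, and scoring formula are invented for the example; the takeaway is that impact is calculated against the asset and its context, not just the raw tool output.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    likelihood: float         # 0.0-1.0: chance of exploitation
    impact: float             # 0.0-1.0: business impact if exploited
    asset_criticality: float  # 0.0-1.0: importance of the affected system

    @property
    def risk_score(self) -> float:
        # Simple contextual score: probability x impact, weighted by the asset.
        return self.likelihood * self.impact * self.asset_criticality

# Invented findings, for illustration only.
findings = [
    Finding("Public storage bucket containing PII", 0.9, 0.9, 1.0),
    Finding("Missing OS patch on an isolated dev host", 0.6, 0.4, 0.2),
    Finding("Weak TLS cipher on an internal API", 0.3, 0.5, 0.6),
]

# Work the highest contextual risk first, regardless of which tool reported it.
for f in sorted(findings, key=lambda f: f.risk_score, reverse=True):
    print(f"{f.risk_score:.2f}  {f.title}")
```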

Related to this is the reactive phenomenon of external audits. It's quite interesting: like tax season, an external audit seems to catch everyone off guard. Why is an external audit a surprise? Why is it so often a forcing function? The situation would change completely with a system designed to manage, store, and retrieve the essential artifacts requested during an audit. This takes the burden off system operators from running around checking their systems and shifts their focus to the day-to-day activities they are hired and trained to perform.

Cost of Security
As the tired story of growing threats continues to get the headlines, the root cause of many successful penetrations is the increasingly agile business. To create a market edge, organizations are turning out new features and technologies faster than they can effectively review them. This is challenging because you need to innovate and respond to the market with new features and capabilities to reach parity with, or dominate, your competition. However, as an attacker, I just need one mistake.

The situation for security is that we have a complex and dynamic IT infrastructure extended across locations, acquisitions, technology platforms, and decades of technical debt.

The complexity here is compounded tremendously when you consider the challenges of creating “cloud-native” applications alongside existing mainframe technologies. Cloud-native is a moniker that can include containers or any other technology that allows elasticity and the other advantages of cloud technologies. Many of these systems are still tightly interconnected. The challenge to change is equally met by the challenge to secure. The question then becomes, “How can I dynamically manage technology risk?”


Monday, August 19, 2019

The battle to encrypt data or not...


Putting things into perspective :) Encryption...

Back it up a bit! Purpose of encryption – (a) access controls and (b) confidentiality

Let's understand the battle surrounding whether to encrypt data or not. Let's understand why we need encryption in the first place. Encryption was originally used to make absolutely sure data would remain confidential – hidden from people who shouldn't access the information. If you can't read the data because you don't have the key, then we presume you don't need access.

Encryption, therefore, protects data whenever we lose control over (a) who can read the raw data or (b) where the raw data is located.

Control landscape includes encryption (and now we understand why)

Understanding this, we can see why the most common regulations and standards require encrypting sensitive data. This can be healthcare information, privacy information, financial data, Defense data, etc. This requirement gets written into law and legally binding contracts. Organizations don't have a choice about whether to encrypt certain types of information.

Understand why people don't use encryption (system cost, software cost, user cost/experience)

There are several reasons why people don't want to do it. The software costs money. There may be a perceived performance impact. There may be a perceived impact on user experience. Maybe the thought of encryption sounds intimidating or complex.

Bruce Schneier wrote: attack the system as well as the algorithm (and he's right)

Many years ago, Bruce Schneier wrote a book titled Applied Cryptography. In this book, he explains the purpose of cryptography and how to correctly apply it to solve business problems. Afterward, he wrote another book titled Secrets and Lies, in which he explains that the effective use of encryption is much more than the math that goes into protecting data. Think about that. It's more than the algorithm. It's all of the supporting pieces of the process. It's the system in which the encryption is used that presents the available angles of attack. Consider a door with a world-class lock. It's in a house with a giant window in the front. Do I attack the lock, or smash the window? This is what happened in the Capital One breach.

Encryption is hard. Why? Encryption is hard because it's more than just choosing a strong algorithm to protect the data. I must ensure that the use of the algorithm doesn't inadvertently open doors around my encryption. I must constantly review and validate endpoint configuration, application configuration, access controls, auditing processes, and the many other items that could prevent attacks similar to what happened with Capital One.
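
As a small, hedged illustration of why the algorithm is the easy part: the sketch below applies AES-256-GCM correctly (it assumes the third-party Python cryptography package is available), yet everything that actually decides the outcome – where the key lives, who can call this code, how the endpoints are configured – is exactly the surrounding system described above.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package

def encrypt_record(key: bytes, plaintext: bytes, context: bytes) -> tuple[bytes, bytes]:
    """AES-256-GCM with a fresh nonce per message and bound context data.
    A strong algorithm, correctly used, still protects nothing if the key,
    the endpoints, or the surrounding system are exposed."""
    nonce = os.urandom(12)  # never reuse a nonce with the same key
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, context)
    return nonce, ciphertext

def decrypt_record(key: bytes, nonce: bytes, ciphertext: bytes, context: bytes) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, context)

if __name__ == "__main__":
    # In practice the key would come from a KMS or HSM, not the application host.
    key = AESGCM.generate_key(bit_length=256)
    nonce, ct = encrypt_record(key, b"4111 1111 1111 1111", b"customer-42")
    print(decrypt_record(key, nonce, ct, b"customer-42"))
```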

Caveonix integration with HyTrust… (validate the system and use of the algorithm)

This is why our company Caveonix jumped at the opportunity to integrate with HyTrust as part of IBM's approach to securing Financial Services. You must validate the use of encryption because it's required, and you must validate the system itself to ensure there is no way around the encryption. Together, Caveonix and HyTrust can do both.


CapitalOne breach with Bloomberg Technology’s Emily Chang

This was an incredible opportunity to appear on Bloomberg TV and share my thoughts on the CapitalOne breach. Doing what they are doing is harder than it may sound. This is why I love working at Caveonix and researching ways to address these challenges.

From our PR desk: "Last night our Vice President of Product Management, Chris Davis talked about the #CapitalOne breach, #cloudsecurity, and systems #misconfiguration with Bloomberg Technology’s Emily Chang. Check out the interview here at the 21-minute mark:  https://www.bloomberg.com/news/videos/2019-08-16/-bloomberg-technology-full-show-8-15-2019-video"

Friday, May 17, 2019

You simply cannot manage what you cannot see.

Another round of questions for an article contribution... 

Sometimes, risk and compliance can be at loggerheads. You need to mitigate a risk, but can you do it and still be compliant?

Can you? Absolutely. Risk and compliance are very rarely in conflict. (No *good* examples come to mind...) The entire GRC model created by Michael Rasmussen when he was an analyst at Forrester presumes that compliance is addressed first, and any additional safeguards necessary to protect data assets are then reviewed and applied according to the probability and impact of adverse events affecting the data assets. This is the definition of risk management. Therefore, compliance is addressed first, and any additional measures necessary to protect data – risk management – are applied.

Where are the biggest stresses and strains in the governance, risk and compliance balancing act?

By far – Governance. It's one thing to get your systems compliant, appropriately risk managed, and ready for operations on day one. However, managing day-two operations and beyond is extremely difficult. Mature organizations and excellent leadership implement compliance and security hygiene processes, often called entity-level controls, and communicate their importance throughout the organization. When this is done well, projects default to security-first and compliance-first postures. The result? Security and compliance are implemented throughout the development and deployment of new systems, becoming part of the solution instead of part of the problem when it's time to go into production.

Do you have any advice on “getting it right”?

You simply cannot manage what you cannot see. Visibility into the inner workings of the workloads and infrastructure that provides insight into your compliance configuration and potential vulnerability exposures is a necessary input for governance processes to manage security and compliance.
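
As a closing sketch of that point, here is a minimal, hypothetical visibility report in Python: an inventory of workloads with a handful of configuration facts, flagged wherever governance has nothing to see or something exposed. In practice the inventory would come from your cloud provider APIs, CMDB, or platform telemetry rather than a hard-coded list.

```python
# A hypothetical, hard-coded inventory; in practice this comes from your
# cloud provider APIs, CMDB, or platform telemetry.
workloads = [
    {"name": "payments-api", "encrypted_storage": True,  "public_endpoint": False, "last_scanned": "2019-05-10"},
    {"name": "legacy-batch", "encrypted_storage": False, "public_endpoint": True,  "last_scanned": None},
]

def visibility_report(inventory: list) -> None:
    """Flag what cannot be governed because it cannot be seen or is exposed."""
    for w in inventory:
        issues = []
        if w["last_scanned"] is None:
            issues.append("never scanned: no visibility")
        if not w["encrypted_storage"]:
            issues.append("storage not encrypted")
        if w["public_endpoint"]:
            issues.append("publicly exposed endpoint")
        print(f"{w['name']}: {'; '.join(issues) if issues else 'no known gaps'}")

visibility_report(workloads)
```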