Sunday, October 6, 2019

Dynamically Manage Technology Risk

How do you effectively deliver on the promise to your stakeholders to create a secure and compliant environment for the data you have in your possession? Highly regulated industries have always struggled with this question. The purpose of compliance is to secure data assets related to applicable regulations and standards for a data set. This suggests – and is written into every compliance regulation, standard, and security best practice – the implementation of a risk management process.

Herein lies the secret. Implementation of a risk management framework alone is a failure. Governance is mastery of the maturity necessary to deliver trusted levels of assurance. Delivered with confidence. Cold hard facts. Where is the quantifiable data? Where are the artifacts? How are they trending?

The number of intersecting information vectors makes it nearly impossible to deliver security and compliance with any real confidence – trusted levels of assurance. This is true for any industry, and particularly true for highly regulated industries that must face the growing diversity of regulations, standards, and security best practices across internal, external, and geopolitical pressures. This is complicated further by the duplication of compliance requirements without a single governing and centralized source of information defining the intent or expected implementation across all technologies. This is the elephant in the room. This is the situation. The complexity lies in the ever-growing legal jargon of requirements and the exponential growth in new market-disrupting capabilities driven by new technologies and new platforms. The underlying question is whether what you’re doing now merely meets the bare necessity, the status quo, or whether you have an opportunity to change the game.

Providers and Consumers
Managing sensitive data is a significant challenge as the number of factors providing inputs into the operational processes and the number of recipients consuming the outputs of the operational processes continue to increase. This is true of all information that is stored, processed, and transmitted across your infrastructure. The additional challenge in this regard comes from vendors who are granted sensitive access to your systems and the growing rights of consumers over their own information. This affects business relationships and consumer relationships, and it puts a square demand on the organization to demonstrate to both that it can protect their data.

Cost of Compliance
The cost of compliance continues to escalate for obvious – and not-so-obvious – reasons. The obvious reason is that we must deal with an increasing number of regulations and standards that affect our business, particularly in highly regulated industries and geopolitically affected organizations. However, there is more.

There is a disconnect between the steps necessary to build a secure (compliant) system (and architecture) and the steps necessary to sustain compliant systems. The first involves building a system securely at a single point in time. The second involves understanding that the state changes over time and that governing factors such as patch management, configuration management, architecture reviews, policy reviews, and several others affect the ongoing security and compliance of that same system.

In simple terms, this is a governance problem. This leads to dangerous situations in which you have a perceived risk (what you think you know) that is less than your real risk (what you don’t know) because you don’t have the complete story or it’s not properly understood and calculated in context. Much could be written about both, but these are self-explanatory. You either view and calculate risk in context – addressing out-of-bounds conditions – or you suffer the consequences.

Tools are bought to address this problem. However, tools are unfortunately often poorly utilized and poorly understood. The financial services sector has been noted by several studies as having the most tools of any industry vertical. This means that financial services firms have more panes of glass, and possibly more information and input to identify issues and problems. The reality is that unless you have a system for prioritizing the impact in both a general and a very specific sense, you will never be able to address the issues that eventually lead to an infrastructure breach.
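The prioritization problem above can be sketched in a few lines. This is a toy model: the field names, weights, and asset criticality scores are illustrative assumptions, not features of any particular tool.

```python
# Toy prioritization sketch: rank findings from many tools by business
# impact so the queue reflects real risk, not raw alert volume.

def priority(finding, asset_criticality):
    """Score a finding: tool severity scaled by how critical the asset is."""
    return finding["severity"] * asset_criticality.get(finding["asset"], 1)

def triage(findings, asset_criticality):
    """Return findings ordered most urgent first."""
    return sorted(findings, key=lambda f: priority(f, asset_criticality),
                  reverse=True)

criticality = {"payments-db": 5, "dev-wiki": 1}   # business context
findings = [
    {"asset": "dev-wiki", "severity": 9},         # loud, but low impact
    {"asset": "payments-db", "severity": 4},      # quiet, but high impact
]
ranked = triage(findings, criticality)
print([f["asset"] for f in ranked])  # ['payments-db', 'dev-wiki']
```

The point of the sort: a severity-9 finding on a low-value wiki ranks below a severity-4 finding on the payments database, because context, not alert volume, drives the queue.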

Related to this is the reactive phenomenon of external audits. It’s quite interesting, like tax season, everyone seems caught off guard during an external audit. Why is an external audit a surprise? Why is it so often a forcing function? The situation would change completely given a system that is designed to manage and store for future retrieval the essential artifacts requested during an audit. This takes the burden off system operators from running around checking their systems and shifts their focus to driving the day-to-day activities they are hired and trained to perform.

Cost of Security
As the tired story of growing threats continues to get the headlines, the root cause of many of the successful penetrations is the increasingly agile business. Organizations are turning out new features and technologies faster than they can effectively review them, to create a market edge. This is challenging because you need to innovate and respond to the market with new features and capabilities to create parity with, or dominate, your competition. However, as an attacker, I just need one mistake.

The situation for security is that we have a complex and dynamic IT infrastructure extended across locations, acquisitions, technology platforms, and decades of technical debt.

The complexity here is compounded tremendously when you consider the challenges of creating “cloud-native” applications alongside existing mainframe technologies. Cloud-native is a moniker that can include containers or any other technology that allows elasticity and other advantages of cloud technologies. Many of these are still tightly interconnected. The challenge to change is equally met by the challenge to secure. The question then becomes, “How can I dynamically manage technology risk?”

Monday, August 19, 2019

The battle to encrypt data or not...

Putting things into perspective :) Encryption...

Back it up a bit! Purpose of encryption – (a) access controls and (b) confidentiality

Let's understand the battle surrounding whether to encrypt data or not. Let's understand why we need encryption in the first place. Encryption was originally used to make absolutely sure data would remain confidential, unreadable by people who shouldn't access the information. If you can't read the data because you don't have the key to the data, then we presume you don't need access.

Encryption, therefore, protects data whenever we (a) lose control over who can read the raw data or (b) where the raw data is located.
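As a toy illustration of that idea, consider a one-time pad in Python (illustrative only, not a production scheme): the ciphertext is unreadable noise to anyone without the key, so holding the key is the access control.

```python
# Toy one-time pad: demonstrates encryption as an access control.
# Without the key, the raw data reveals nothing; with it, recovery is exact.
import secrets

def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "one-time pad key must match length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return encrypt(ciphertext, key)  # XOR is its own inverse

data = b"sensitive record"
key = secrets.token_bytes(len(data))  # key held separately from the data
ciphertext = encrypt(data, key)

assert decrypt(ciphertext, key) == data  # the key is the access control
```

Losing control over where the ciphertext sits matters far less than losing control over the key, which is exactly the point of (a) and (b) above.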

Control landscape includes encryption (and now we understand why)

Understanding this, we can see why the most common regulations and standards require encrypting sensitive data. This can be healthcare information, privacy information, financial data, Defense data, etc. This requirement gets written into law and legally binding contracts. Organizations don't have a choice whether they are supposed to encrypt certain types of information or not. 

Understand why people don't use encryption (system cost, software cost, user cost/experience)

There are several reasons why people don't want to do it. The software costs money. There may be a perceived performance impact. There may be a perceived impact on user experience. Maybe the thought of encryption sounds intimidating or complex.

Bruce Schneier wrote attack the system as well as the algorithm (and he's right)

Many years ago, Bruce Schneier wrote a book titled Applied Cryptography. In this book, he explains the purpose of cryptography and how to correctly apply it to solve business problems. Afterward, he wrote another book titled Secrets and Lies, where he explains that the effective use of encryption is much more than the math that goes into protecting data. Think about that. It's more than the algorithm. It's all of the supporting pieces of the process. It's the system itself in which the encryption is used that represents the available angles of attack. Consider a door with a world-class lock. It's in a house with a giant window in the front. Do I attack the lock, or smash the window? This is what happened in the Capital One breach.

Encryption is hard. Why? Encryption is hard because it's more than just choosing a strong algorithm to protect the data. I must ensure that the use of the algorithm doesn't inadvertently open doors around my encryption. I must constantly review and validate the configuration of the endpoints, application configuration, access controls, auditing processes, and many other items in order to prevent attacks similar to what happened with Capital One.
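A minimal sketch of that validation loop, in Python. The baseline settings here are assumptions chosen to echo the Capital One lessons (endpoint configuration, access, key management, auditing), not an actual vendor or compliance checklist.

```python
# Sketch: validate the system around the encryption, not just the algorithm.
# Strong crypto still fails if configuration opens a path around it.

BASELINE = {
    "tls_min_version": 1.2,         # endpoint configuration
    "public_access": False,         # no anonymous path around the encryption
    "key_stored_with_data": False,  # key management separated from the data
    "audit_logging": True,          # accountability for every access
}

def validate(system: dict) -> list:
    """Return the list of baseline settings the system fails to meet."""
    failures = []
    for setting, required in BASELINE.items():
        actual = system.get(setting)
        if setting == "tls_min_version":
            if actual is None or actual < required:
                failures.append(setting)
        elif actual != required:
            failures.append(setting)
    return failures

weak = {"tls_min_version": 1.0, "public_access": True,
        "key_stored_with_data": False, "audit_logging": True}
print(validate(weak))  # ['tls_min_version', 'public_access']
```

The check is trivial; the discipline of running it constantly, against every endpoint, is the hard part the paragraph above describes.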

Caveonix integration with HyTrust… (validate the system and use of the algorithm)

This is why our company Caveonix jumped at the opportunity to integrate with HyTrust. You must validate the use of encryption because it's required, and you must validate the system itself to ensure there is no way around the encryption. This is why Caveonix chose to integrate with HyTrust as part of IBM's approach to securing Financial Services. Together, Caveonix and HyTrust can do both.

CapitalOne breach with Bloomberg Technology’s Emily Chang

This was an incredible opportunity to appear on Bloomberg TV and share my thoughts on the CapitalOne breach. Doing what they are doing is harder than it may sound. This is why I love working at Caveonix and researching ways to address these challenges.

From our PR desk: "Last night our Vice President of Product Management, Chris Davis talked about the #CapitalOne breach, #cloudsecurity, and systems #misconfiguration with Bloomberg Technology’s Emily Chang. Check out the interview here at the 21-minute mark:"

Friday, May 17, 2019

You simply cannot manage what you cannot see.

Another round of questions for an article contribution... 

Sometimes, risk and compliance can be at loggerheads. You need to mitigate a risk, but can you do it and still be compliant?

Can you? Absolutely. Risk and compliance are very rarely in conflict. (No *good* examples come to mind...) The entire GRC model created by Michael Rasmussen when he was an analyst at Forrester presumes that compliance is addressed first, and any additional safeguards necessary to protect data assets are then reviewed and applied according to the probability and impact of adverse events affecting the data assets. This is the definition of risk management. Therefore, compliance is addressed first, and any additional measures necessary to protect data – risk management – are applied.
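That ordering can be sketched as: apply the mandatory compliance controls first, then compute residual risk as probability times impact and add safeguards only where it still exceeds the organization's threshold. All numbers and names below are illustrative assumptions.

```python
# Sketch of compliance-first risk management: mandatory controls reduce
# exposure, then residual risk decides where extra safeguards go.

def residual_risk(probability: float, impact: float, mitigation: float) -> float:
    """Risk remaining after required (compliance) controls reduce exposure."""
    return probability * impact * (1.0 - mitigation)

THRESHOLD = 2.0  # illustrative risk appetite

assets = {
    # name: (probability of adverse event, impact 1-10,
    #        mitigation from mandatory compliance controls, as a fraction)
    "cardholder-db": (0.4, 10, 0.8),  # 0.4 * 10 * 0.2 = 0.8  -> acceptable
    "legacy-ftp":    (0.7, 8, 0.1),   # 0.7 * 8 * 0.9 = 5.04 -> needs more
}

needs_more = [name for name, (p, i, m) in assets.items()
              if residual_risk(p, i, m) > THRESHOLD]
print(needs_more)  # ['legacy-ftp']
```

Nothing here conflicts with compliance: the compliant controls are the floor, and risk management decides where to build above it.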

Where are the biggest stresses and strains in the governance, risk and compliance balancing act?

By far – Governance. It's one thing to get your systems compliant, appropriately risk managed, and ready for operations on day one. However, managing day-two operations and beyond is extremely difficult. Mature organizations and excellent leadership implement compliance and security hygiene processes, often called entity-level controls, and communicate their importance throughout the organization. When this is done well, projects default to security-first and compliance-first postures. The result? Security and compliance are implemented throughout the development and deployment of new systems, becoming part of the solution instead of part of the problem when it's time to go into production.

Do you have any advice on “getting it right”?

You simply cannot manage what you cannot see. Visibility into the inner workings of the workloads and infrastructure that provides insight into your compliance configuration and potential vulnerability exposures is a necessary input for governance processes to manage security and compliance.

Monday, May 6, 2019

How important is risk assessment?

Out of more than 300 controls in PCI DSS 3.2.1, here is the list of the top 10.

Dangerous (animals, munitions, substances… or data)
  • Know what you have
  • Know where it goes
  • Keep as little as possible
  • Destroy it when you can
  • Make sure you got it right (Risk Management)
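The list above can be sketched as a simple retention sweep (field names and the retention period are illustrative assumptions, not a PCI DSS mandate):

```python
# Toy retention sweep: know what you have, keep as little as possible,
# destroy it when you can.
from datetime import date, timedelta

RETENTION_DAYS = 365  # illustrative policy, set by your own requirements

records = [
    {"id": 1, "kind": "cardholder", "stored": date(2018, 1, 15)},
    {"id": 2, "kind": "cardholder", "stored": date(2019, 4, 1)},
]

def sweep(records, today):
    """Split the data inventory into records to keep and records to destroy."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    keep = [r for r in records if r["stored"] >= cutoff]
    destroy = [r for r in records if r["stored"] < cutoff]
    return keep, destroy

keep, destroy = sweep(records, today=date(2019, 5, 6))
print([r["id"] for r in destroy])  # [1]
```

The "make sure you got it right" bullet is the risk management wrapper: the sweep only works if the inventory is complete and the policy matches your actual obligations.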

Saturday, April 6, 2019

Notes on Data Governance -- How do you manage data?

1) What are the major EXTERNAL laws/regulations shaping data governance requirements this year?

Consumer privacy and data sovereignty are at the forefront of the news and the focus of regulatory compliance this year. GDPR, the California Consumer Privacy Act, HIPAA, and others are driving internal business decisions related to technology use. Critically, the center point of many of these includes understanding the risk impact across the organization and including risk impact in your business decisions.

2) What are the major INTERNAL factors or requirements that require more vigilant or comprehensive data governance?

Internal factors driving – demanding – more vigilant and comprehensive data governance include respecting consumer and government data rights. Consumers are increasingly savvy and protective of their privacy in the backwash of Facebook and Google’s data analytics practices, and governments continue to require data sovereignty. Organizations with multinational locations must deal with these complexity factors when managing data across use cases and geographic locations. In each case, however, data should be aggregated to the extent possible, and protected and monitored per applicable and changing requirements. This is a real challenge that requires proactive management.

3) Are organizations keeping up with these requirements?  

Organizations struggle with the requirements. The diverse topology of new cloud-inclusive hybrid architectures and the velocity of new requirements have created a blind spot around what needs to be done. There is a lot of activity, but it is not always the right activity. Keeping up is a challenge.

4) What tools or technologies are assisting with efforts to achieve more effective data governance? (Note to vendors: okay to discuss product categories, such as AI/machine learning, but we cannot mention specific product names, sorry.)

Three specific elements – visibility, alignment, control – work together to provide effective data governance. There are tools and technologies that provide visibility across portions of the enterprise, perform analytics, and offer some control. The ideal solution digs deep to detect every asset across the enterprise, performs predictive analytics, and provides necessary and often-repeated actions such as reporting, mitigation, and forensic detail. The differences are in scope, scale, manageability, and producing truly meaningful information. If you can’t produce meaningful and actionable information from a tool, then it’s time to rethink the value of that tool.

5) What types of changes are necessary to organizations, and the way people are managed, to achieve more effective data governance?

Organizations looking to understand the people part of the data governance problem over time must have a firm grip on the content of their data and the body of relevant requirements from regulations, standards, best practices, and organizational policies that apply. Content and Requirements. Both can change often. The most effective way to manage people is to communicate the importance of data content and regulatory requirements, including how to provide effective data governance over compliance and data risk.

Saturday, March 16, 2019

Sales Engineering -- Quick Question Hit List

Need a direct list of short questions to identify where you fit in a potential solution? Here you go:

  1. How painful is this? 
  2. Who's involved? 
  3. When do you want to solve this? 
  4. What are you doing now? 
  5. How can I solve this?

A short list of considerations: 

  • Your customers buy from people they like. There's a complex vibe going on mixing personality, likability, respect, and trust. 
  • Keep it simple. The more convoluted the answers, the more I start looking for BS.
  • Avoid word games and cliches. Use industry terms in grammatically correct context. But let it go if someone speaks with artistic license as long as the point is made. 
  • Match tone and pace. If I'm tired - don't come at me like a rabid pit bull. Just slow down and chill. If I'm upbeat, then don't bring me down. Match me. Then take me for a ride. Watch experienced Grandmothers. They'll do this with small children, and it works like a charm for adults too. 

Sunday, March 3, 2019

Technology changes. Politics have not.

Excerpt from something I wrote many years ago. Thinking about this now because it's in a chapter I'm updating from nine years ago.

An Introduction to Legislation Related to Internal Controls
The global nature of business and technology has [...created standards bodies]. Participation in these standards bodies has been voluntary, with a common goal of promoting global trading for all countries at all levels. Individual countries have gone further to establish governmental controls on the business activities of corporations operating within their boundaries.

The reality is much more complex than this, as national interests, industry concerns, and corporate jockeying create political drivers that motivate the creation and adoption of legislation at all levels. Politics can have a negative connotation, but in this sense, we simply mean the clear understanding that regulations generally benefit or protect a representative group of people. Nations, industries, and companies have concerns about the confidentiality, integrity, and availability of information. Agreed-upon standards and legislation are one method of ensuring those concerns are met.

The unfortunate reality is that not all sets of requirements are written equally. Sometimes it's the interests involved. Sometimes it's the people. Self-interests, self-promotion, arrogance, inexperience, lack of help, poor communications. These get in the way of results. 

Tuesday, February 19, 2019

Cloud Myth: It's all the same... or It's all different...

Another round of questions....

1. What is the biggest myth business and IT departments have about the cloud?
The biggest myth organizations have about IT cloud infrastructure is that you can lift and shift your existing workloads into the cloud using the same architecture, methodology, and tools that you have used in the past. This may be true in some cases, but the reality is that the architecture radically changes from one cloud to another. They all have different capabilities, and some have features that may drive business decisions, whether through cost savings or an architectural requirement. Moving to the cloud presumes application distribution and potentially shifting trust boundaries. Risk must be managed. Visibility is required in hybrid and multi-cloud deployments, along with the ability to uniformly report on and effect change across the different environments.

Planning is critical to safely and effectively migrate workloads to the cloud. The good news is that at this point there is a large body of knowledge from those that have gone before you. There are astounding successes and equally colossal failures. Work with your cloud provider closely.

2. Why is this myth so pervasive?
It's not uncommon to make assumptions based on what you have known before. However, the shift to software-defined architecture creates capabilities, and in some cases limitations, that differ from traditional physical infrastructure.

3. In what way/ways does this myth hamper IT and/or business operations?
Assumptions slow the end game because you start with the wrong ideas and end up with misguided plans. A bad result is that your applications don't work as intended. Worse? The applications work, but risk is mismanaged, resulting in ineffective security or compliance controls.

Work with your cloud provider and correct your assumptions. Then create realistic strategies and a working plan. Ensure you have the proper technologies to support your applications – required to run your business – and the appropriate security and compliance controls in place – required to protect your business. Borrowing from the military, "Prior planning prevents poor performance."

4. What can be done to dispel this myth?
Learn. Attend migration workshops of the cloud providers that interest you. Preferably more than one. Build healthy budgets into the migration plans. Plan. Manage your risk.

And specifically, build budget into the plans to ensure you fully understand geographically distributed deployments and can identify risk across the distributed data center. The tools and methodology may change, but the end objectives of visibility and control do not change. Those are fundamental to assurance. Again. Manage your risk.

5. Is there anything else you would like to add?
Good business leaders understand and respect risk. Great business leaders learn how to manage and respond effectively to risk. The presumption is visibility. You cannot act on what you cannot see.

Thursday, February 7, 2019

Don't be Naive - Good to Great

Most people like money and opportunity. A lot. Ideally, the mission objectives and sense of purpose drive motivation, but given a similar sense of purpose and higher pay.... It happens. People move on. They get faced with shifting, slightly grey lines... Operatives working for government entities get some of the best training and are forced to learn to be resourceful. Great assets. You have to think: where do these people go in their careers? One of my closest friends is a combat mercenary. Why? Because he learned a specific skill set from the government that enables him to be effective in the physical combat theater - and he gets paid a lot of money to use this skill set. The pay? Good to Great.

Awesome reporting. Great work guys.

Wednesday, February 6, 2019

Well-Architected Cloud = Manage Your Risk. Seriously.

Manage your risk. You got a cloud? What is well-architected?

Well-Architected = Risk Managed.

This series of questions came across my desk this morning. Interestingly, it took me back nearly 10 years when I first started in cloud security. I’ve been in information security for nearly 20 years, and I believe much of the strategy and definition – the construct – of the secure environment is still the same.

How would you define a well-architected cloud?

A well-architected cloud provides assurance, or the grounds for confidence, that integrity, availability, confidentiality, and accountability have been adequately met. “Adequately met” includes (1) functionality that performs correctly, (2) sufficient protection against unintentional errors by users or software, and (3) sufficient resistance to intentional penetration or bypass. This is a close paraphrase of the definition of assurance from NIST SP 800-27.

What are the elements that go into a well-architected cloud?

There are three elements covering the technical implementation of a well-architected cloud. The technical controls of best practices, regulations, and standards based on CIS, NIST, PCI DSS, and others can be summed up as configuration, solutions, and design. Every application and endpoint must be configured to reduce the probability and impact of intentional and unintentional action. Hardening guides address many of these issues for the network, compute, storage, and virtualization components. System solutions provide additional insight, accountability, or control over the security and compliance of the environment. Examples include firewalls, identity and access management, and systems monitoring. Finally, given that each of the individual components is secured as much as possible and additional solutions provide you the insight, accountability, and control over your environment, your last consideration is the environment design. This can include separating trusted and untrusted zones, implementing a DMZ, providing secure multi-tenancy, etc.

What's the best way to achieve a well-architected cloud?

Identify the business objectives, dataflow, and preferred user interactions before building the system. Decide how subjects will interact with data objects and what controls will be in place across the reference monitor that controls authentication and authorization. The security model of the most trusted systems in the world depends upon strong access controls. This only works if you understand who has access to critical data and how to protect it using secure configurations, solutions, and design. Bottom line. Plan for it. The military has a saying, “Prior Planning Prevents Poor Performance.”
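The reference monitor described above can be sketched as a single default-deny authorization check. The subjects, objects, and policy table here are illustrative assumptions, not any particular product's API.

```python
# Minimal reference-monitor sketch: every access by a subject to a data
# object passes through one authorization check, and the default is deny.

POLICY = {
    ("alice", "cardholder-data"): {"read"},
    ("batch-job", "cardholder-data"): {"read", "write"},
}

def authorize(subject: str, obj: str, action: str) -> bool:
    """The reference monitor: allow only explicitly granted actions."""
    return action in POLICY.get((subject, obj), set())

assert authorize("alice", "cardholder-data", "read")
assert not authorize("alice", "cardholder-data", "write")   # not granted
assert not authorize("mallory", "cardholder-data", "read")  # default deny
```

Deciding the contents of that policy table before the system is built is exactly the planning the paragraph above calls for.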

What should IT never do when constructing a cloud architecture?

Assume that you can do a direct lift and shift into the cloud. The intent of the controls has not changed, and neither have your objectives. However, the implementation, visibility, and tools available change significantly. Take the time to understand the platform you are moving your data into and how that platform functions versus how you have handled your data in the past on premises.

What are the cloud architecture pitfalls that IT might fall into?

Again. Never assume you can do a direct lift and shift into the cloud. Never embark on a journey without a known destination and plans for how to get there. Start with a comprehensive set of security requirements inclusive of configurations, solutions, and design principles that must be met. 

Is there anything else you would like to add?

Cloud environments bring tremendous opportunity, as long as you can maintain visibility and manage risk across the entire environment. That's been a consistent message on this blog since its inception in 2011. Manage Your Risk.