Monday, November 23, 2015

State Security Model Flaws – or… Assumptions

This is meant to be a short, simple post, just capturing an interesting discussion from our class the other night. The state security model is really simple, and it represents an easy way to describe the importance of governance. Provision, configure, and validate to a known good state. Monitor for state deviations. Known deviations are acceptable and mean that you are still in a known good state. Unknown deviations must be investigated (response) to determine whether each is a new known deviation or an incident.
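The monitoring loop above can be sketched in a few lines. This is a hypothetical illustration, not a real tool; the baseline entries and deviation names are made up for the example:

```python
# Minimal sketch of the state security model's classification step.
# A deviation that matches an approved, documented change keeps the
# system in a known good state; anything else triggers response.

KNOWN_DEVIATIONS = {
    "port_8080_open",   # hypothetical: an approved staging service
    "new_admin_bsmith", # hypothetical: a documented account addition
}

def classify_deviation(deviation_id: str) -> str:
    """Return 'known-good' for an approved deviation,
    'investigate' for an unknown one."""
    if deviation_id in KNOWN_DEVIATIONS:
        return "known-good"
    return "investigate"
```

The interesting governance work is entirely in how `KNOWN_DEVIATIONS` gets populated and kept current; the classification itself is trivial.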

However, some really good points were brought up during class. The way several people have presented and discussed the model carries a lot of assumptions about external factors being what introduces the errors. We learned this model in the military, where the source of a deviation could be assumed to be internal or external. It didn't matter.

The reality is that your definition of a known good state may or may not be absolute (100%), and your visibility into the system is almost certainly not absolute. Examples include running firmware, existing compromise, individual configurations, account access, authenticator systems, cipher code/implementations/systems, source code, and system interoperability/API configurations/capabilities/hidden capabilities...

If we presume that assurance of a known good state is based upon a selection of points that you can validate, then how many points are good enough to provide assurance of a known good state? What about the periodicity of validation? Is there anything else we need to consider?

Systems themselves could *possibly* introduce errors within the bounds of allowed operations. People most certainly can. Component visibility can inhibit your view/understanding of actual state. Your view/understanding of actual state can change between timed points of inspection. External and/or internal actors may identify vulnerabilities in code and exploit them within the bounds of your controls.
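The point about state changing between timed inspections is worth making concrete. A sketch, with a made-up timeline: point-in-time validation only observes the sampled instants, so a change made and reverted between two inspections is invisible to the monitor.

```python
def sample(timeline, times):
    """Point-in-time inspection: only the sampled instants are observed.
    `timeline[t]` is the true system state at time t."""
    return [timeline[t] for t in times]

# Hypothetical true state at t=0, 1, 2: tampered with at t=1, then restored.
actual_timeline = ["good", "tampered", "good"]

# Inspections at t=0 and t=2 see only "good" -- the deviation at t=1 is missed.
observed = sample(actual_timeline, [0, 2])
```

This is one argument for event-driven telemetry (audit logs, file-integrity events) alongside periodic validation, rather than polling alone.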

The point of the discussion is that models can be very powerful, and certainly helpful for understanding systems through a particular lens. Think outside the box. Don't be afraid to ask questions. The person who initiated the discussion is perhaps one of the least technical people in the class, just asking innocent questions.

How do you address these concerns? Short answer: build the system with enough introspection and visibility into critical processes/configurations to satisfy your risk tolerance. Assume that the system is already compromised, and build it so that you can identify an existing compromise to the extent possible. Place strong emphasis on access controls, non-repudiable auditing, and visibility into communications, including who/what/where/why/how/volume.