Thursday, March 29, 2012

The Organized Auditor: Document Tagging

Tabbles = Tagging Data
Here's an interesting concept. Tagging has been around for quite some time: there are tagged file systems for Linux, and tags for everything in social media. I started looking at ways to tag my data sets to solve, or help solve, the efficiency problem. I want to spend my time thinking and working with data. I'm a knowledge worker, and my effectiveness is limited by how quickly I can find information. I depend on my formal and informal relationships every day because they give me multiple avenues for getting answers. Tagging data provides another avenue for finding information I've already taken the time to collect. I clearly see the problem. So did a cool little company calling itself Yellow blue soft, the maker of Tabbles. Their about page says it all: "We are a small, dynamic and international team who is wondering why file-management is lagging 30 years behind and no one seems to care or even notice. We do."

Messy Data.
How do you organize your documents? Organizational Behavior was the best class I've ever taken. I had no idea how often I would refer to its simple precepts to understand how data and people naturally organize themselves. One interesting aspect of this organizational problem (i.e. information organically organizing itself) is that the relational complexity of a data set grows exponentially with the number of interconnections among its elements.

Enterprise Content Management. 
You've created and enforced the best folder hierarchy on your hard drive... and transferred it to a file share for others to use. It makes perfect sense to you, but for some reason others feel the need to change it. Over time, it becomes unwieldy and difficult to navigate. Enter, stage left: Enterprise Content Management (ECM). That's great for the enterprise. What about your personal data? Is there a way to help identify and tag relationships between data sets?

Unstructured vs. Semi-Structured vs. Structured Data.
Call it what you want. There's a wealth of information about this online. Here's a simple example of the business problem. Consider your repository of [1] compliance information organized by authority, [2] vendor information organized by company, [3] customer information organized by internal or external customer, [4] project information organized by internal or external project, and [5] technology information organized by subject. This is the unstructured data problem. Tagging documents lets me review content I've tagged as containing specific information regardless of where it sits in my folder structure. It's a question of efficiency. It's not that you can't manage what you have; you've hacked through it for years. The question to consider is whether you can do it more efficiently.
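Tabbles is a finished product, but the core idea, one document visible under many facets at once, is small enough to sketch. Here is a minimal, hypothetical tag index; the file paths and tag names are made up for illustration:

```python
from collections import defaultdict
from pathlib import Path

class TagIndex:
    """Minimal sketch of a document tag index (illustrative only, not Tabbles itself)."""

    def __init__(self):
        self._tags = defaultdict(set)  # tag -> set of file paths carrying that tag

    def tag(self, path, *tags):
        """Attach any number of tags to one file."""
        for t in tags:
            self._tags[t.lower()].add(Path(path))

    def find(self, *tags):
        """Files carrying ALL of the given tags, regardless of folder location."""
        sets = [self._tags[t.lower()] for t in tags]
        return set.intersection(*sets) if sets else set()

# One document can live under several facets at once:
idx = TagIndex()
idx.tag("vendors/acme/contract.pdf", "vendor:acme", "compliance:pci")
idx.tag("projects/rollout/plan.docx", "project:rollout", "vendor:acme")

print(idx.find("vendor:acme"))                    # both files, across two folder trees
print(idx.find("vendor:acme", "compliance:pci"))  # narrows to the contract
```

The point of the sketch is the query: the folder hierarchy gives you one axis, while tags give you as many as you care to maintain.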

Monday, March 26, 2012

CAESARS Framework Extension: Continuous Monitoring

You may be familiar with the Continuous Asset Evaluation, Situational Awareness, and Risk Scoring Reference Architecture Report (CAESARS). Trawling through older draft posts I created a few months ago, I ran across this little gem. On the face of it, you might think, "cool!"... until you realize how technically difficult it *really* is to make all of this work. I personally think it's a matter of time. The market needs _something_ delivering real-time feedback.

Lately I've been speaking with people about continuous monitoring using the analogy of SAP's answer to ERP. Walmart's real-time view into its supply and distribution systems is legendary. Hiccup? They're on it.

Remember the 90s? Remember the large-scale SAP implementations that failed? Remember _why_ they failed and how much money that cost the companies that tried? What about the ones that succeeded, and how much of a competitive advantage SAP gave them?

I believe there are lessons to be learned from those times. Remember the buzz acronym BPR? Business Process Re-engineering. Some of the challenges are technical. Some are business related. Alignment, execution, focus, scope, roles, expectations. You may ask, "Are we discussing SAP or CAESARS?" ... Yes.

Now... take a peek into NIST IR-7756, the CAESARS Framework Extension. This is very interesting work that is moving in the direction of continuously assessing and providing assurance and remediation for your critical infrastructure. The authors of this version (Peter Mell, David Waltermire, Larry Feldman, Harold Booth, Alfred Ouyang, Zach Ragland, and Timothy McBride) have done a fantastic job of visually communicating the process and integration points.

Friday, March 23, 2012

Faster than a speeding bullet, more powerful than a locomotive, able to leap tall buildings in a single bound!

Superman's 1941 release opened famously with, "Up in the sky, look: It's a bird. It's a plane. It's Superman!" before going into the narration explaining the origins of the world's favorite superhero. I recently read through the opening narrative thinking about a baby Vblock growing up to be the superhero... Overactive imagination? Time for a vacation? :)

Several people have asked me questions about VCE's Vblock platform. How is it any different from just buying each of the components and building it yourself? This video explains what happens behind the scenes to build a Vblock. Why does this matter to auditors? Repeatable process = repeatable results. That's Super.

Tuesday, March 20, 2012

EMC Acquires Pivotal Labs

Why Auditors Care: This is another step in the right direction toward tackling the complex data-analysis challenge posed by our IT infrastructures. Developers need tools to engage large, disparate data sets and create solutions. My focus is the data produced by the infrastructure components, not the data housed by the infrastructure, which tends to be the focus of 'Big Data' discussions.

Excerpt from Full Story:
Pivotal Labs enhances EMC’s powerful portfolio of products and services, which are designed to enable organizations to store, analyze and take action on ‘Big Data’ – datasets so large they break traditional IT infrastructures.  Earlier this year EMC introduced the Greenplum Unified Analytics Platform (UAP) that delivered, for the first time, a scale-out infrastructure for analyzing both structured and unstructured data. Today EMC announced general availability of Greenplum Chorus – another industry first, delivering a Facebook-like social collaboration tool for Data Science teams to iterate on the development of datasets and ensure that useful insights are delivered to the business quickly.  EMC brought Data Science and its chief practitioner – the Data Scientist – to the fore a year ago at the world’s first Data Scientist Summit.  With the addition of Pivotal Labs, EMC can now take datasets perfected in Greenplum Chorus and enable customers to rapidly build out Big Data applications using modern programming environments such as Ruby on Rails.

News Summary:
  • EMC has acquired San Francisco-based Pivotal Labs, a privately-held provider of agile software development services and tools. 
  • EMC will invest to expand Pivotal’s reach on a global scale, bringing Pivotal’s agile consulting services expertise to an even greater number of both emerging start-ups and the world's largest businesses looking to embrace Cloud, Big Data, Social and Mobile in developing next-generation applications. 
  • Pivotal’s agile project management tool (Pivotal Tracker) is currently used by over 240,000 developers around the world. EMC plans to continue to invest in Pivotal Tracker to accelerate innovation in the platform and increase adoption. 
  • With the addition of Pivotal, EMC adds to its portfolio the gold-standard in agile software development for customers building ‘Big Data’ analytic applications. 
  • The all-cash transaction is not expected to have a material impact to EMC GAAP or non-GAAP EPS for the full 2012 fiscal year.
  • An online event titled “Social Meets Big Data: Live Webcast” with executives from EMC and Pivotal Labs will be broadcast today, Tuesday, March 20, at 9:45 A.M. Pacific, 12:45 P.M. Eastern, and 4:45 P.M. GMT.

Friday, March 16, 2012

PCI Practices for Protecting Management Infrastructure

[Update: Added content in a downloadable spreadsheet]

A peer asked me a question about best practices for hardening a management environment - not necessarily related to PCI. It just so happens I like the results of the PCI-DSS as a starting point to answering this question. This post covers PCI - of course - but it can also be applied to other management environments.

Controls Commensurate with Your Risk.
I've written in detail about the need to implement controls commensurate with your risk. There are several approaches, some quite lengthy and difficult to navigate. I'm not naive to the volumes of material and wide range of preferences... The objective here is to protect data using a set of controls that is easy to navigate, easy to understand, and effective. I have several friends at prominent security companies who fail to see the benefit of a structured approach to security, and instead prefer to wing it based on the latest technology and a security magazine article.

PCI just happens to care about credit card data. The management systems that connect into this sensitive environment must be protected. The PCI-SSC (the council) recognized this, and mandated controls to protect the management systems/functions connecting into a sensitive environment. Here is a compiled list of controls derived from the PCI-DSS (the PCI standard). I took some liberty to consolidate, clarify, and add brief commentary as appropriate.

Secure Management Environment Assumptions:
  • Not Internet accessible 
  • Is considered essential to protected environment
  • Does not store sensitive data (e.g. CHD, IP, ePHI, PII, etc.)
  • Does not process sensitive data (e.g. CHD, IP, ePHI, PII, etc.)
  • Facility controls apply but are not discussed below.  

Original PCI Assumptions:
  • Not Internet accessible (PCI-DSSv2 Req. 1.2, 1.3, and appropriate subsections) 
  • Is part of the CDE (PCI-DSSv2 Req. 1.3 all) 
  • Does not store CHD. (PCI-DSSv2 Req. 3 and others) 
  • Does not process CHD. (PCI-DSSv2 Req. 6 and others) 
  • Facility controls apply but are not discussed below. (PCI-DSSv2 Req. 9) 

Systems fall within the scope for PCI compliance if they are used to manage the CDE. PCI scope encompasses all of the following for a merchant:
  • Primary systems: Any system, component, device, application, that processes, stores, or transmits cardholder data (CHD) 
  • Secondary systems: Systems that can connect to the primary systems without going through a Stateful Packet Inspection (SPI) firewall. 
  • Administrative systems: Includes all management tools and systems which have direct access to the primary (and peripheral) scope systems. 
Requirements Summary:
The current standard and navigation document provide additional guidance. These documents and many other supporting materials are found on the PCI Security Standards Council website.

Key Administrative and General Configuration Requirements (Applies to All Components)
Each requirement below gives the control, its summary details, and, where available, the relevant PCI-DSS v2 references.
Supply a network diagram.
Verify that a current network diagram (for example, one that shows cardholder data flows over the network) exists and that it documents all connections to cardholder data.
Enable only necessary and secure services, protocols, daemons, etc., as required for the function of the system.
Document ALL services, protocols, and ports used. Use secured technologies such as SSH, S-FTP, SSL, or IPSec VPN to protect insecure services such as NetBIOS, file-sharing, Telnet, FTP, etc. If these insecure services are not necessary for business, they should be disabled or removed.
1.1.5 (a-b),
2.2.2 (a-b)
Default passwords are not allowed.
All default passwords must be changed; every component must have a unique password.
Develop and use hardened configuration standards.
Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.
Specific requirements include (among others):
• Configure system security parameters to prevent misuse (2.2.3)
• Remove all unnecessary functionality, such as scripts, drivers, features, subsystems, file systems, and unnecessary web servers (2.2.4.a)
2.2 (a-c),
2.2.3 (a-c),
2.2.4 (a-c)
Encrypt all non-console administrative access.
Use technologies such as SSH, VPN, or SSL/TLS for web-based management and other non-console administrative access.
• Verify strong encryption for authentication (2.3.a)
• Disable Telnet, other remote login commands (2.3.b)
• Require encrypted administrative access (2.3.c)
2.3 (a-c)
Install all vendor-supplied security patches, including all critical patches within one month of release.
Ensure that all system components and software have the latest vendor-supplied security patches installed. Install critical security patches within one month of release.
6.1 (all)
Restrict account creation and access rights to least privileges and job function using an automated access control system.
Limit access to system components and cardholder data to only those individuals whose job requires such access, including:
• Least privileges (7.1.1)
• Documented roles and authorizations (7.1.2-3)
• Using an access control system (7.1.4, 7.2 all)
7.1 (all)
7.2 (all)
Provide all users with a unique user ID. Group IDs are not allowed.
All users must have a unique user ID before allowing them to access system components or cardholder data.
8.5.8 (a-c)
Authenticate all internal user IDs.
In addition to assigning a unique ID, employ at least one of the following methods to authenticate all users: something you know, have, or are.
Deploy two-factor authentication for all remote CDE access.
Incorporate two-factor authentication for remote access (network-level access originating from outside the network) to the network by employees, administrators, and third parties. (For example, remote authentication and dial-in service (RADIUS) with tokens; terminal access controller access control system (TACACS) with tokens; or other technologies that facilitate two-factor authentication.)
Note: Two-factor authentication requires that two of the three authentication methods (see Requirement 8.2 for descriptions of authentication methods) be used for authentication. Using one factor twice (for example, using two separate passwords) is not considered two-factor authentication.
Encrypt all passwords during transmission and storage.
Render all passwords unreadable during transmission and storage on all system components using strong cryptography.
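"Strong cryptography" for stored passwords means a salted one-way function, not a reversible encoding. As a hedged sketch (the iteration count and salt size are illustrative assumptions, not values mandated by the standard), password storage and verification might look like:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000):
    """Render a password unreadable for storage using PBKDF2-HMAC-SHA256.
    Illustrative sketch; parameters are assumptions, not PCI-mandated values."""
    salt = os.urandom(16)  # unique per password, stored alongside the digest
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 100_000) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The design point: only the salt and digest are ever stored or transmitted, so the original password cannot be recovered from what an attacker might capture.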
Enforce strong password controls.
Ensure proper user authentication and password management for non-consumer users and administrators on all system components including:
• First time passwords unique and require immediate change (8.5.3)
• Vendor account remote access only enabled when needed and monitored during use (8.5.6.a-b)
• Do not use group, shared, or generic accounts and passwords, or other authentication methods (8.5.8.a-c)
• Passwords change every 90 days (8.5.9)
• Additional strong password controls for construction (7 char; alpha-numeric), attempts (6), lockout (30 min), idle timeout (15 min), password history (4) (8.5.10-15)
8.5 (all)
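The construction rules above (at least 7 characters, alphanumeric, history of 4) are mechanical enough to sketch. This is an illustrative check only; real enforcement belongs in your directory service or IAM policy engine, and the lockout/timeout controls are not modeled here:

```python
def meets_pci_construction(password: str, history: list) -> bool:
    """Sketch of PCI-DSS v2 8.5.10-12 style construction rules:
    at least 7 characters, both letters and digits, not among the
    last 4 passwords. Illustrative only."""
    long_enough = len(password) >= 7
    alphanumeric = (any(c.isalpha() for c in password)
                    and any(c.isdigit() for c in password))
    not_reused = password not in history[-4:]  # history of 4 (8.5.12)
    return long_enough and alphanumeric and not_reused

# A compliant candidate, a too-simple one, and a reused one:
assert meets_pci_construction("abc1234", [])
assert not meets_pci_construction("abcdefg", [])          # no digits
assert not meets_pci_construction("abc1234", ["abc1234"]) # reused
```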
SIEM – Log management: Track and monitor all access to network resources and cardholder data to unique users.
Establish a process for linking all access to system components (especially access done with administrative privileges such as root) to each individual user, including:
• Detailed automated audit trails for all events on system components (10.2 all)
• Detailed automated audit trail entries containing all information (10.3 all)
• Audit trail protection (10.5 all)
• Daily log reviews (10.6)
• Audit trail retained for one year (10.7)
10.2 (all)
10.3 (all)
10.5 (all)
10.6 (all)
10.7 (all)
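The intent behind the linking requirement is that every privileged action traces back to an individual user ID. As a rough illustration (the log format shown is hypothetical, not from any particular SIEM), a daily-review script might group root-level actions by user:

```python
import re
from collections import defaultdict

# Hypothetical log line format, invented for this sketch:
# "2012-03-16T09:12:01 user=jdoe priv=root action=login host=db01"
LINE = re.compile(r"user=(?P<user>\S+) priv=(?P<priv>\S+) "
                  r"action=(?P<action>\S+) host=(?P<host>\S+)")

def review(lines):
    """Group privileged (root) actions by individual user ID, so every
    administrative access traces back to a person. Sketch of the daily
    review intent only; a real SIEM does this with correlation rules."""
    by_user = defaultdict(list)
    for line in lines:
        m = LINE.search(line)
        if m and m.group("priv") == "root":
            by_user[m.group("user")].append((m.group("host"), m.group("action")))
    return dict(by_user)

sample = [
    "2012-03-16T09:12:01 user=jdoe priv=root action=login host=db01",
    "2012-03-16T09:13:05 user=asmith priv=user action=login host=web01",
    "2012-03-16T09:15:44 user=jdoe priv=root action=edit host=db01",
]
print(review(sample))  # only jdoe's root actions are flagged for review
```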
NTP – Implement time-synchronization technology.
Using time-synchronization technology, synchronize all critical system clocks and times (e.g. using NTP) and ensure that time is correct, protected, and received from industry accepted sources.
10.4 (all)
Perform quarterly vulnerability scans.
Run internal and external network vulnerability scans at least quarterly and after any significant change in the network (such as new system component installations, changes in network topology, firewall rule modifications, product upgrades).
• Review the scan reports and verify that the scan process includes rescans until passing results are obtained, or all "High" vulnerabilities as defined in PCI DSS Requirement 6.2 are resolved. (11.2.1.b)
• Internal (private) IP scans must be performed by a qualified resource (e.g. formal training, experience). (11.2.1.c)
• External (public) IP scans must be performed by an Approved Scanning Vendor (ASV). (11.2.2 all)
• Rescan after any significant changes. (11.2.3 all)
11.2.1 (all)
11.2.2 (all)
11.2.3 (all)
Perform annual penetration tests.
Perform external and internal penetration testing at least once a year and after any significant infrastructure or application upgrade or modification (such as an operating system upgrade, a sub-network added to the environment, or a web server added to the environment). These penetration tests must include the following:
• Network layer penetration test (11.3.1)
• Application layer penetration test (11.3.2)
11.3 (all)

Additional Key Network Requirements
Each requirement below gives the control, its summary details, and the relevant PCI-DSS v2 references.
Prohibit direct public access between the Internet and any system component in the cardholder data environment (CDE).
If the management environment is determined to be part of the CDE, then a DMZ bounded by SPI firewalls must be implemented to separate it from non-CDE networks, whether internal or external. There are several requirements regarding the specific configuration of the firewall to provide specific access controls, limiting the types of acceptable connections and addresses. See the standard for additional details.
1.3 (all)
IDS/IPS – Monitor perimeter access to the CDE and critical points inside the CDE.  
IDS/IPS devices should be implemented such that they monitor inbound and outbound traffic at the perimeter of the CDE as well as at the critical points within the CDE. Critical points inside the CDE may include database servers storing cardholder data (CHD), cryptographic keys, processing networks, or other sensitive components as determined by an entity's environment and documented in their risk assessment.
11.4 (all)

Additional Key Server Requirements
Each requirement below gives the control, its summary details, and the relevant PCI-DSS v2 references.
Implement only one primary function per server.
Implement only one primary function per server to prevent functions that require different security levels from co-existing on the same server. (For example, web servers, database servers, and DNS should be implemented on separate servers.)
Note: Where virtualization technologies are in use, implement only one primary function per virtual system component.
2.2.1 (a-b)
Deploy anti-virus software on all systems commonly affected by malicious software.
Primarily applies to Windows-based operating systems.
Specific requirements include (among others):
• Software must be enabled for automatic updates and periodic scans (5.2.a-c)
• Log generation must be enabled and logs retained in accordance with PCI DSS Requirement 10.7 (5.2.d)
5.1 (all)
5.2 (all)
Deploy File Integrity Monitoring (FIM) software.
Deploy file-integrity monitoring software to alert personnel to unauthorized modification of critical system files, configuration files, or content files, and configure the software to perform critical file comparisons at least weekly. (11.5)
Use file integrity monitoring and change-detection software on logs to ensure that existing log data cannot be changed without generating alerts. (10.5.5)
Note: For file-integrity monitoring purposes, critical files are usually those that do not regularly change, but the modification of which could indicate a system compromise or risk of compromise. File-integrity monitoring products usually come pre-configured with critical files for the related operating system. Other critical files, such as those for custom applications, must be evaluated and defined by the entity.
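The weekly critical-file comparison is conceptually a hash baseline plus a periodic re-check. A minimal sketch, assuming SHA-256 is an acceptable comparison function (commercial FIM products add the tamper-proofing, scheduling, and alerting this leaves out):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files don't load into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def baseline(paths):
    """Record known-good hashes for the critical files you choose to monitor."""
    return {str(p): sha256_of(Path(p)) for p in paths}

def compare(baseline_hashes):
    """Return files that are missing or whose content no longer matches
    the baseline; each hit is a candidate alert for personnel to review."""
    changed = []
    for path, expected in baseline_hashes.items():
        p = Path(path)
        if not p.exists() or sha256_of(p) != expected:
            changed.append(path)
    return changed
```

Usage follows the standard's rhythm: take a baseline after a known-good change window, then run compare() on the weekly schedule and alert on any non-empty result.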