Wednesday, February 12, 2020

15 Top-Paying IT Certifications for 2020

Here's an interesting article from Global Knowledge. Notice that the top certifications focus on cloud, security, and risk. Importantly, both in the article and in commentary from pundits discussing it, you'll see that holding multiple certifications often matters because it demonstrates breadth of knowledge, and that the average level of experience behind these salaries may be higher than you think.

Take a look at the details here:
https://www.globalknowledge.com/us-en/resources/resource-library/articles/top-paying-certifications/.

Top-paying certifications:

  1. Google Certified Professional Cloud Architect — $175,761
  2. AWS Certified Solutions Architect – Associate — $149,446
  3. CISM – Certified Information Security Manager — $148,622
  4. CRISC – Certified in Risk and Information Systems Control — $146,480
  5. PMP® – Project Management Professional — $143,493
  6. CISSP – Certified Information Systems Security Professional — $141,452
  7. CISA – Certified Information Systems Auditor — $132,278
  8. AWS Certified Cloud Practitioner — $131,465
  9. VCP6-DCV: VMware Certified Professional 6 – Data Center Virtualization — $130,226
  10. ITIL® Foundation — $129,402
  11. Microsoft Certified: Azure Fundamentals — $126,653
  12. Microsoft Certified: Azure Administrator Associate — $125,993
  13. CCA-N: Citrix Certified Associate – Networking — $125,264
  14. CCNP Routing and Switching — $119,178
  15. CCP-V: Citrix Certified Professional – Virtualization — $117,069

Monday, February 10, 2020

Scaled Agile Framework®: SAFe® for Lean Enterprises

Image Credit: https://www.scaledagileframework.com

I'm a fan of what Dean Leffingwell, Creator of SAFe® and Chief Methodologist at Scaled Agile, Inc., has done with Agile development. He's created an excellent body of work. Please take a minute to visit his website to learn more about how to leverage his work and lower project risk for your own teams.

Why is this important? Let's say your strategy and technology decisions are correct. Tactical execution is still difficult: presuming your strategic direction is on target, how do you actually get there? Effective execution can be learned, and here's one place to start. The unfortunate reality is that poorly executed projects often result in insecure systems, with security controls poorly implemented or left out entirely. I've often said that based on the paperwork alone, I get a good feel for how well the security controls are implemented. Stop taking shortcuts. Do it right the first time.

The following is from his website (I've added a small Test-Driven Development sketch of my own after the list):

The latest version, SAFe 5.0, is built around the Seven Core Competencies of the Lean Enterprise that are critical to achieving and sustaining a competitive advantage in an increasingly digital age: 

  1. Lean-Agile Leadership – Advancing and applying Lean-Agile leadership skills that drive and sustain organizational change by empowering individuals and teams to reach their highest potential 
  2. Team and Technical Agility – Driving team Agile behaviors as well as sound technical practices including Built-in Quality, Behavior-Driven Development (BDD), Agile testing, Test-Driven Development (TDD), and more
  3. Agile Product Delivery – Building high-performing teams-of-teams that use design thinking and customer-centricity to provide a continuous flow of valuable products using DevOps, the Continuous Delivery Pipeline, and Release on Demand 
  4. Enterprise Solution Delivery – Building and sustaining the world’s largest software applications, networks, and cyber-physical solutions 
  5. Lean Portfolio Management – Executing portfolio vision and strategy formulation, chartering portfolios, creating the Vision, Lean budgets and Guardrails, as well as portfolio prioritization, and roadmapping 
  6. Organizational Agility – Aligning strategy and execution by applying Lean and systems thinking approaches to strategy and investment funding, Agile portfolio operations, and governance 
  7. Continuous Learning Culture – Continually increasing knowledge, competence, and performance by becoming a learning organization committed to relentless improvement and innovation
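Competency 2 calls out Test-Driven Development among the sound technical practices. As a minimal sketch of the red-green loop (my own illustration, not from the SAFe materials; the parse_duration function and its behavior are made up for the example), the tests are written first and the implementation exists only to make them pass:

    import unittest

    def parse_duration(text):
        """Convert a string like '2h30m' into total minutes.
        Written only after the tests below were failing (red)."""
        hours, _, minutes = text.partition("h")
        total = int(hours) * 60
        if minutes:
            total += int(minutes.rstrip("m"))
        return total

    class ParseDurationTest(unittest.TestCase):
        # The tests come first and define the behavior; the code above
        # exists only to turn them green.
        def test_hours_and_minutes(self):
            self.assertEqual(parse_duration("2h30m"), 150)

        def test_hours_only(self):
            self.assertEqual(parse_duration("1h"), 60)

    if __name__ == "__main__":
        unittest.main()

The discipline matters more than the example: the failing test forces you to define the expected behavior before any code exists, which is exactly the kind of rigor that keeps shortcuts out of the build.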

Tuesday, January 28, 2020

Resources Serving Veterans

More than 900 hours of free cybersecurity training for any government employee or veteran. Awesome.

The barrier to learning has been lowered to a decision of whether you want to do it or not. Go do it. Make it happen. Make a decision to improve the rest of your life today.

"The Federal Virtual Training Environment (FedVTE) provides free online cybersecurity training to federal, state, local, tribal, and territorial government employees, federal contractors, and US military veterans."
Also, fellow veterans... check out:

Friday, January 24, 2020

Homage to the Legends. Reference Monitor - October 1972 - James P. Anderson

Recently, I was asked to speak to a group of middle school students. One of the questions that I asked them was, "Why is it possible to build ever-increasingly complex software and systems?" The answer is simple. Many have gone before us to build the foundations and think through complex problems. We get to benefit from their knowledge and accelerate our learning.

Occasionally, there are special luminaries that either by sheer luck or sheer genius create breakthroughs in thinking that everyone else gets to benefit from.

James P. Anderson is responsible for pushing the understanding and usefulness of several powerful and simple constructs in computer security.

I continue to hear much of his material regurgitated as original thought. That's fine, insofar as somebody really does have an original thought now and then. It's interesting to me just how relevant his research and ideas remain today, nearly half a century after the original content was published.
NIST: http://csrc.nist.gov/publications/history/ande72a.pdf 

Understanding the History
(The text below is from the Orange Book.)
Trusted Computer System Evaluation Criteria
https://fas.org/irp/nsa/rainbow/std001.htm
DoD 5200.28-STD
Library No. S225,711
DEPARTMENT OF DEFENSE STANDARD

Historical Perspective
In October 1967, a task force was assembled under the auspices of the Defense Science Board to address computer security safeguards that would protect classified information in remote-access, resource-sharing computer systems. The Task Force report, "Security Controls for Computer Systems," published in February 1970, made a number of policy and technical recommendations on actions to be taken to reduce the threat of compromise of classified information processed on remote-access computer systems.[34]  Department of Defense Directive 5200.28 and its accompanying manual DoD 5200.28-M, published in 1972 and 1973 respectively, responded to one of these recommendations by establishing uniform DoD policy, security requirements, administrative controls, and technical measures to protect classified information processed by DoD computer systems.[8;9]  Research and development work undertaken by the Air Force, Advanced Research Projects Agency, and other defense agencies in the early and mid 70's developed and demonstrated solution approaches for the technical problems associated with controlling the flow of information in resource and information sharing computer systems.[1]  The DoD Computer Security Initiative was started in 1977 under the auspices of the Under Secretary of Defense for Research and Engineering to focus DoD efforts addressing computer security issues.[33]

Concurrent with DoD efforts to address computer security issues, work was begun under the leadership of the National Bureau of Standards (NBS) to define problems and solutions for building, evaluating, and auditing secure computer systems.[17]  As part of this work NBS held two invitational workshops on the subject of audit and evaluation of computer security.[20;28]  The first was held in March 1977, and the second in November of 1978.  One of the products of the second workshop was a definitive paper on the problems related to providing criteria for the evaluation of technical computer security effectiveness.[20]  As an outgrowth of recommendations from this report, and in support of the DoD Computer Security Initiative, the MITRE Corporation began work on a set of computer security evaluation criteria that could be used to assess the degree of trust one could place in a computer system to protect classified data.[24;25;31]  The preliminary concepts for computer security evaluation were defined and expanded upon at invitational workshops and symposia whose participants represented computer security expertise drawn from industry and academia in addition to the government.  Their work has since been subjected to much peer review and constructive technical criticism from the DoD, industrial research and development organizations, universities, and computer manufacturers.

The DoD Computer Security Center (the Center) was formed in January 1981 to staff and expand on the work started by the DoD Computer Security Initiative.[15]  A major goal of the Center as given in its DoD Charter is to encourage the widespread availability of trusted computer systems for use by those who process classified or other sensitive information.[10]  The criteria presented in this document have evolved from the earlier NBS and MITRE evaluation material.

Here is one of the more powerful security constructs that I use on a regular basis as a simple model that can be explained to others. I first learned this in 1999 as part of my CISSP studies.

6.1  THE REFERENCE MONITOR CONCEPT
In October of 1972, the Computer Security Technology Planning Study, conducted by James P.  Anderson & Co., produced a report for the Electronic Systems Division (ESD) of the United States Air Force.[1]  In that report, the concept of "a reference monitor which enforces the authorized access relationships between subjects and objects of a system" was introduced.  The reference monitor concept was found to be an essential element of any system that would provide multilevel secure computing facilities and controls.

The Anderson report went on to define the reference validation mechanism as "an implementation of the reference monitor concept .  .  .  that validates each reference to data or programs by any user (program) against a list of authorized types of reference for that user." It then listed the three design requirements that must be met by a reference validation mechanism:
     a. The reference validation mechanism must be tamper proof.
     b. The reference validation mechanism must always be invoked.
     c. The reference validation mechanism must be small enough to be subject to analysis and tests, the completeness of which can be assured."[1]

Extensive peer review and continuing research and development activities have sustained the validity of the Anderson Committee's findings.  Early examples of the reference validation mechanism were known as security kernels.  The Anderson Report described the security kernel as "that combination of hardware and software which implements the reference monitor concept."[1]  In this vein, it will be noted that the security kernel must support the three reference monitor requirements listed above.
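To make the reference monitor concept easier to explain, here is a minimal sketch of a reference validation mechanism in the spirit of Anderson's definition: every access by a subject to an object is funneled through one small check against a list of authorized reference types. This is my own illustration, not code from the Anderson report or the Orange Book, and the subjects, objects, and access types are hypothetical.

    from enum import Enum, auto

    class Access(Enum):
        READ = auto()
        WRITE = auto()

    # Hypothetical authorization list: (subject, object) -> permitted reference types.
    AUTHORIZED = {
        ("alice", "payroll.db"): {Access.READ, Access.WRITE},
        ("bob",   "payroll.db"): {Access.READ},
    }

    def reference_monitor(subject, obj, access):
        """Validate a single reference; must be invoked on every access attempt."""
        return access in AUTHORIZED.get((subject, obj), set())

    def read_object(subject, obj):
        # All reads are mediated by the monitor -- there is no unchecked path.
        if not reference_monitor(subject, obj, Access.READ):
            raise PermissionError(subject + " may not read " + obj)
        return "contents of " + obj

    if __name__ == "__main__":
        print(read_object("bob", "payroll.db"))     # authorized reference
        try:
            read_object("carol", "payroll.db")      # no authorized reference types
        except PermissionError as denied:
            print("denied:", denied)

The three design requirements are what separate a real reference validation mechanism from a sketch like this: in a production system the check lives in the security kernel and supporting hardware so it can be made tamper proof, impossible to bypass, and small enough to analyze completely.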