How to start an assessment.

An Application Security Assessment starts by examining the security practices of a solution across its SDLC (Software Development Lifecycle). The key areas examined are broken down into four domains: Governance, Intelligence, SSDL Touchpoints, and Deployment. Within those domains, Strategy & Metrics, Compliance & Policy, Training, Attack Models, Security Features & Design, Standards & Requirements, Architecture Analysis, Code Review, Security Testing, Penetration Testing, Software Environment, and Configuration Management & Vulnerability Management are examined.

Specifically, a program is examined to ensure that at least the security minimums are in place in key areas (these are my view of the minimums): Training, Architectural Analysis, Code Review, Penetration Testing, and Privacy. These are the standard starting point for any Application Security assessment.

Each of the areas listed above is examined to understand its maturity. It is important to look at how a security activity is offered, how it is performed, and any external effect it has on the lifecycle. No security practice is an island unto itself; they all work holistically within the SDLC ecosystem.

An overview into the core security practices and maturity:

Training maturity – Software security training and awareness promote a culture of software security throughout the organization.

  • Level 1 – Does the organization make customized, role-based training available to their employees?
  • Level 2 – Does the organization create satellite groups within development teams that promote security practices?
  • Level 3 – Is the security culture promoted externally with vendors and outsourced contractors? Is recognition given and advancement provided in the training curriculum?

 

Architectural Analysis maturity – Perform a security feature review and get started with Architectural Analysis.

  • Level 1 – Are risk-driven architectural reviews done? Does the organization provide a lightweight risk classification?
  • Level 2 – Is there an architectural analysis process based on common architectural descriptions and attack models?
  • Level 3 – Do software architects lead analysis efforts across the organization, and do they have standard secure architecture patterns that they use and provide?

 

Code Review maturity – Use manual code review alongside automation. Use automated tools to drive efficiency and consistency.

  • Level 1 – Is manual or automated code review being done with centralized reporting? Is code review mandatory for all software projects? Are findings folded back into strategy and training?
  • Level 2 – Do automated tools and tool mentors enforce coding standard behaviors in development teams?
  • Level 3 – Has an automated code review factory been built to find bugs in the entire code-base?
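As a sketch of what Level 1 centralized reporting can look like, the toy scanner below applies a couple of pattern-based rules across a portfolio of projects and aggregates the findings into one report. The rules, project names, and source snippets are all invented for illustration; a real program would use a dedicated SAST tool rather than hand-rolled regexes.

```python
import re

# Hypothetical rule set for a lightweight automated code review pass.
RULES = {
    "hardcoded-password": re.compile(r"password\s*=\s*['\"]\w+['\"]", re.IGNORECASE),
    "eval-usage": re.compile(r"\beval\s*\("),
}

def scan(project, source):
    """Return findings for one project so they can be reported centrally."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append({"project": project, "rule": rule, "line": lineno})
    return findings

# Centralized report across the whole portfolio (Level 1: mandatory
# review with centralized reporting; findings feed strategy and training).
portfolio = {
    "billing": 'password = "hunter2"\nprint("ok")',
    "portal": 'result = eval(user_input)',
}
report = [f for name, src in portfolio.items() for f in scan(name, src)]
print(report)
```

The point of the aggregation step is the Level 1 question above: findings from every project land in one place, where they can be folded back into strategy and training.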

 

Penetration Testing maturity – Use Penetration Testers to find problems.

  • Level 1 – Are internal or external penetration testers being used? Are the deficiencies being discovered and addressed? Is everyone being made aware of progress?
  • Level 2 – Are periodic penetration tests being performed for all applications?
  • Level 3 – Is penetration testing knowledge keeping pace with attackers' advances?

 

Privacy – Identify PII obligations and promote privacy.

  • Level 1 – Are statutory, regulatory, and contractual compliance drivers understood and available to all lifecycle stakeholders?
  • Level 2 – Do SLAs address the software security properties of vendor software deliverables? Is this backed by executive support? Do risk managers take responsibility for software risk?
  • Level 3 – Does data gathered from attacks, threats, defects, and operational issues drive policy? Are policies evolving? Do the demands upon the vendors change because of this?

 

Beyond the core security practices, the other practices examined are ones that are common across the industry.

 

Strategy & Metrics – What SDLC is being used and what gates are enforced?

  • Level 1 – Does everyone who is involved with the software lifecycle understand the written organization security objectives? Is there demonstrated support from executive level on these efforts?
  • Level 2 – Are there individuals that are responsible for the successful performance of secure lifecycle activities? Are activities that lead to unacceptable risk removed and replaced?
  • Level 3 – Is a risk-based application portfolio being managed?

 

Attack Models – Create a data classification scheme and inventory. Prioritize applications by data consumed and data manipulated.

  • Level 1 – Is there a knowledge-base built up around attacks and attack data? This includes attacks that have already occurred and attacks that are of concern. Is there a data classification scheme that is used to inventory and prioritize applications?
  • Level 2 – Does a security team offer assistance on attackers and relevant attacks?
  • Level 3 – Is attack research being done? Is this knowledge being provided to auditors?
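The Level 1 idea of a data classification scheme driving an application inventory can be sketched in a few lines: label the data each application touches, then rank the inventory by the most sensitive class. The classification labels, weights, and applications below are hypothetical assumptions, not a standard scheme.

```python
# Hypothetical data classification scheme used to inventory applications
# and prioritize them by the data they consume and manipulate.
CLASSIFICATION_WEIGHT = {"public": 0, "internal": 1, "confidential": 2, "pii": 3}

inventory = [
    {"app": "marketing-site", "data": ["public"]},
    {"app": "hr-portal", "data": ["pii", "confidential"]},
    {"app": "wiki", "data": ["internal"]},
]

def priority(app):
    # Rank by the most sensitive class of data the application touches.
    return max(CLASSIFICATION_WEIGHT[d] for d in app["data"])

ranked = sorted(inventory, key=priority, reverse=True)
print([a["app"] for a in ranked])  # most sensitive applications first
```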

 

Security Features & Design – Build and track a common library of security features for re-use.

  • Level 1 – Are architects and developers being provided guidance around security features? Are security features and secure architecture published?
  • Level 2 – Are secure-by-design frameworks being provided to lifecycle teams?
  • Level 3 – Are defined security features being used across the organization? Do teams understand design choices?

 

Standards & Requirements – Create security standards.

  • Level 1 – Are security standards kept up to date and made available to everyone in the organization? Are they easily accessible? Artifacts included at a minimum are: security standards, coding standards, and compliance requirements.
  • Level 2 – Are formally approved standards communicated internally and to vendors? Are SLAs being enforced? Is usage of open source software understood?
  • Level 3 – Is open source software being held to the same standards as the organization's own code?

 

Security Testing – Drive tests with security requirements and security features.

  • Level 1 – Does QA perform functional security testing?
  • Level 2 – Has QA included black-box testing tools in their processes?
  • Level 3 – Does QA include security testing in an automated regression suite? Does security testing follow an attacker's perspective?
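A minimal sketch of what security testing in a regression suite might look like: security requirements expressed as assertions that run on every build, with negative cases written from an attacker's perspective. The validate_username function and its rule are hypothetical examples, not a recommended validation policy.

```python
import re

def validate_username(name):
    """Security requirement (assumed): usernames are 3-32 word characters only."""
    return bool(re.fullmatch(r"\w{3,32}", name))

# Positive case: the feature works as specified.
assert validate_username("alice_01")

# Negative cases written from an attacker's perspective:
# injection payloads and oversized input must be rejected.
assert not validate_username("alice'; DROP TABLE users;--")
assert not validate_username("<script>alert(1)</script>")
assert not validate_username("a" * 1000)
print("security regression checks passed")
```

In practice these assertions would live in the QA team's regression suite (pytest, JUnit, etc.) so every build re-verifies the security requirement, not just the happy path.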

 

Software Environment – Host and network security basics are in place.

  • Level 1 – Does the operations group ensure that required security controls are in place and that the integrity of those controls is kept intact? Is monitoring used that includes application input?
  • Level 2 – Are application installation and maintenance guides created for operations teams? Is code signing being used?
  • Level 3 – Is client-side code protected when leaving the organization? Is software behavior being monitored?

 

Configuration Management & Vulnerability Management – Use Operations data to change development behavior.

  • Level 1 – Do results from CM and VM drive development behavior? Is there an Incident Response program in place?
  • Level 2 – Is there emergency response available during application attacks?
  • Level 3 – Is there a tight feedback loop between operations and development, so that deficiencies found in operations lead to application enhancements that eliminate the root cause?
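The Level 3 feedback loop can be sketched as a simple root-cause aggregation: operational findings are tagged with a cause, and recurring causes become development backlog items rather than one-off operational patches. The finding records and cause labels below are invented for illustration.

```python
from collections import Counter

# Hypothetical operational findings, each tagged with a root cause
# during incident review.
ops_findings = [
    {"id": 101, "root_cause": "missing-input-validation"},
    {"id": 102, "root_cause": "missing-input-validation"},
    {"id": 103, "root_cause": "weak-tls-config"},
]

# Aggregate by root cause; the top recurring cause becomes a development
# backlog item so the application fix eliminates the root cause.
by_cause = Counter(f["root_cause"] for f in ops_findings)
top_cause, count = by_cause.most_common(1)[0]
print(f"file dev backlog item for root cause: {top_cause} ({count} incidents)")
```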


 

Adventures in Code

As cyber-professionals, we are tasked with being protectors of software and infrastructure on the Internet. We protect ourselves from unknown adversaries. We place devices and services to monitor and filter our experience and to protect us.

Our continued effort has strengthened the armor of our networks, but we also need to observe our software. We need insight into the level of security practices observed in our software development lifecycles – right where the software is made. A root-cause security issue often starts with the code.

Code tells our machines what to do. It runs our social, public, business, and private lives. We display our social lives through Facebook, Twitter, and Instagram. Life in general produces so many public documents that they are just a click away, and our private and business lives are absorbed into activities on the Web, Mobile, and Cloud.

Our software runs in hostile environments. The architectural changes introduced by Mobile and Cloud over the last few years have reinforced the need for software to be protected at its source. Countless studies have shown that it is a strong strategic move to address the security gaps found at this point of the software lifecycle.

So, we always Move Left.

So the rule is: “We need to protect our software at its source and move left”.

Move Left is a movement to protect software holistically along the lifecycle by addressing security practices and managing risk. All areas are reviewed, and domains and security practices are matured – even down to where the code is composed. The long-term strategy is to mature practices by tactically refreshing SDLC security practices along the lifecycle.

A non-standard SDLC is not necessarily bad, but it will need to be reviewed on its merits for its security practices.

Education, architecture, code, data in flow and at rest, and testing all have structure and can be understood.

The practical point to observe here is that a well-rounded software security program helps protect the software.

Metrics make it possible to compare and contrast: to measure your efforts against yourself, against other organizations, and against your industry and sector (ISV, Retail, ...). This provides insight into the gaps in your software security initiative.

Security minimums

Security minimums (training, architectural analysis, code review, penetration testing, and privacy) are the standard starting point for any effort: [T1.1] [AA1.1] [CR1.4] [PT1.1] [CP1.2]

Review: http://bsimm.com/download/BSIMM-V.pdf

There is an unsettling reality that many attacks are carried out as a matter of convenience. Absent security practices leave holes in software. Convenience attacks are simple attacks that allow entry into the privacy, data, and systems of others.

There is a set of common key areas practiced throughout the industry. Not all SDLCs and programs will look exactly alike, but a good security program is baked throughout the SDLC with a Move-Left mindset by maturing the security practices iteratively.

Simply put, good security is baked into the environment. Solid software security practices are observed and incorporated into the level and skill of the team. Adopted security practices will mature the team and their lifecycle by reducing a root-cause issue: the code.

Industry Common Areas:

[SM1.4] – What SDLC is being used and what gates are enforced?
[AM1.2] – Create a data classification scheme and inventory. Prioritize applications by data consumed and data manipulated.
[AA1.1] – Perform a security feature review and get started with Architectural Analysis.
[PT1.1] – Use Penetration Testers to find problems.
[CP1.2] – Identify PII obligations and promote privacy.
[SFD1.1] – Build and track a common library of security features for re-use.
[CR1.4] – Use manual code analysis review alongside automation. Use automated tools to drive efficiency and consistency.
[SE1.2] – Host and network security basics are in place.
[T1.1] – Provide security awareness training. Promote a culture of security throughout the organization.
[SR1.1] – Create security standards.
[ST1.3] – Drive tests with security requirements and security features.
[CMVM1.2] – Use Operations data to change development behavior.

Review: http://bsimm.com/download/BSIMM-V.pdf

About Software. How Software moves.

Software travels: from the product you buy to the software you create, from the cloud it lives on to the designers who "think it up" and the developers who create it. Software lives and moves through a birth-and-death lifecycle, and it is defined by that lifecycle. The journey it traverses is known as the Software Development Lifecycle (SDLC).

As a security practitioner or consultant, you assess security practices and controls and apply them with a security-first mindset. Practices are reviewed and matured. Then the cycle repeats.

When considering software security, the approach should also include a security-maturity mindset: knowing that all applications carry some level of risk, knowing that practices can be measured and improved, and knowing that some teams simply do not understand the strategic aggregate of their security practices or have a roadmap for improving them.

With uneducated teams creating bad software and skilled teams looking for continuous improvement, the BSIMM (Build Security In Maturity Model, by Cigital) can help.

The BSIMM is a collection of the identified security practices found in software security initiatives across industries and sectors (ISV, Retail, Financial, ...). These practices reduce application risk and measure the level of maturity of any software security initiative. The power of this metadata is that it allows you to compare and contrast your security practices against yourself and against industry leaders across those industries and sectors.

When applied through the lens of an assessor, programs can be evaluated on the set of security practices used and how properly they are applied, providing an understanding of the level of risk taken whenever a product or service is purchased or software is involved.

A well-run SDLC with software security practices applied demonstrates how well security is built into the practice of building and delivering software. Not all SDLCs will look the same, but it is the rigor and skill set of the program that provides a view of its strengths and the evolution of its maturity.

So what do teams and organizations do with the code they create for their applications and services? The following posts will provide the detail on how to discover that.

 

Shattered Skies: The start of a well rounded security initiative

A well-rounded security initiative has many security practices at varying maturities. The BSIMM describes these practices, all part of a multi-year effort by the Cigital company to capture the essence of a good security lifecycle from different sectors of the industry (ISV, Financial, Retail, etc.). The official list of participating companies is managed by Cigital. The metadata they provide is extremely useful to everyone participating in the area of lifecycle security.

Any organization that develops or purchases software, or has a standing infrastructure for software design, build, and delivery, will find these practices inherent in its program, because they are the security practices found in any such program. They show the maturity of "what you are currently doing" with those practices and "where you are heading" with them.

The BSIMM is broken down into four domains: Governance, Intelligence, SSDL Touchpoints, and Deployment. Each security practice falls under one of those four domains as a sub-category; there are 12 sub-categories holding 112 distinct practices at present. The security practice areas are shown below in a spider chart.


An example of a security practice spider chart, from an article by Gary McGraw at Cigital: http://www.cigital.com/presentations/mco2014030081.pdf

A spider chart provides a quick, dashboard-level view of the maturity of those 12 security practice areas, allowing the current efforts in a software security initiative to be quickly compared and contrasted with the general industry and with industry sectors. A "high water mark" line allows contrast measurements to be taken against lower-resolution security programs.
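The comparison a spider chart visualizes can be sketched as a simple gap analysis between an organization's per-practice maturity scores and an industry high water mark. All scores below are invented for illustration, and only four of the 12 practice areas are shown.

```python
# Hypothetical maturity scores (0-3) for a few practice areas.
org = {"Strategy and Metrics": 1.0, "Training": 2.0, "Code Review": 1.5,
       "Penetration Testing": 3.0}
high_water = {"Strategy and Metrics": 2.5, "Training": 3.0,
              "Code Review": 2.0, "Penetration Testing": 3.0}

# Gap analysis: practices furthest below the high water mark are the
# best candidates for the next round of maturing.
gaps = {p: high_water[p] - org[p] for p in org}
for practice, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"{practice}: gap {gap:.1f}")
```

Plotting the same two score sets on radial axes (one axis per practice area) yields the spider chart itself; the gap ranking is the actionable part of the picture.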

The 12 security practice areas are set across the four domains as below:

Governance
  • Strategy and Metrics
  • Compliance and Policy
  • Training

Intelligence
  • Attack Models
  • Security Features and Design
  • Standards and Requirements

SSDL Touchpoints
  • Architectural Analysis
  • Code Review
  • Security Testing

Deployment
  • Penetration Testing
  • Software Environment
  • Configuration Management and Vulnerability Management

Now, a secure Software Development Lifecycle (SDLC) is needed to put those security practices into action.