Security Development Lifecycle

Improving Security through Sound Development Processes

by Marc Ruef
time to read: 7 minutes

The acronym SDLC customarily stands for Systems Development Lifecycle, a process established to develop, extend, and operate systems in a professional manner. Efficiency and reliability need to be optimized and maintained throughout the various phases of the lifecycle. At the same time, the lifecycle makes it possible to address security consistently. This article discusses the concept of the security development lifecycle, which focuses on the security aspects of a specific solution.

SDLC

In its most basic form, SDLC is a cycle that consists of five constantly repeating phases:

  1. Planning
  2. Analysis
  3. Design
  4. Implementation
  5. Maintenance

The cycle ensures that planning is always undertaken for the next iteration after entering the maintenance phase. SDLC therefore takes into account the traditional notion of information security so enduringly preached by the American cryptographer Bruce Schneier:

Security is a process, not a product.

And it is indeed true that security cannot merely be created once. Instead, it is an ongoing endeavor: internal and external factors may make it necessary to constantly re-establish and maintain a secure state.

Security Development Lifecycle

The Security Development Lifecycle is designed to help in the planning, implementation, and management of systems in a sound and reasonable way. The following seven phases are established to this end (based on the approach promoted by Microsoft):

  1. Training
  2. Requirements
  3. Design
  4. Implementation
  5. Verification
  6. Release
  7. Response

These are essentially the original phases of the “classic” SDLC, expanded to emphasize the security aspect.

Designing and implementing requirements

The security development lifecycle does, in fact, revolve completely around security requirements. These become clear in Phase 2: Requirements, which addresses the following questions, for example:

  1. What user groups should have access to specific components? Example: internal users with administrative privileges.
  2. How should the authentication process be designed? Example: strict authentication (username, password, token).
  3. What are the availability requirements? Example: high availability (99.7%, see the calculation below).
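
As a side note on question 3: an availability figure like 99.7% translates directly into an annual downtime budget. The following short Python calculation is merely a sketch using the value from the example above:

    # Downtime budget implied by the 99.7% availability example (question 3)
    availability = 0.997
    hours_per_year = 365 * 24                              # 8760 hours
    allowed_downtime = (1 - availability) * hours_per_year
    print(f"Allowed downtime: {allowed_downtime:.1f} hours per year")  # ~26.3 hours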

In Phase 3: Design, these general requirements are then incorporated into the actual design of the solution. Question 1, for example, requires a role and authorization concept; question 2 requires an access and authentication concept; and question 3 must address the issues of availability, backup/restore, and business continuity management (BCM).
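
To make such a design element concrete, the following minimal Python sketch models a role and authorization concept as it might follow from question 1. The role names and permissions are illustrative assumptions, not part of the requirements above:

    # Minimal sketch of a role and authorization concept (question 1).
    # Role names and permissions are illustrative assumptions.
    ROLE_PERMISSIONS = {
        "internal_admin": {"config:read", "config:write", "user:manage"},
        "internal_user":  {"config:read"},
        "external_user":  set(),   # no access to internal components
    }

    def is_authorized(role: str, permission: str) -> bool:
        """Check whether the given role holds the requested permission."""
        return permission in ROLE_PERMISSIONS.get(role, set())

    # Only internal users with administrative privileges may change the configuration.
    assert is_authorized("internal_admin", "config:write")
    assert not is_authorized("internal_user", "config:write")

Keeping the mapping between roles and permissions in one explicit place makes it easy to review during the design phase and to verify later on.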

Once all the requirements have been incorporated into the design, they must be implemented at the organizational, conceptual, and technical level in Phase 4: Implementation.

Verification of the implementation

The security development lifecycle introduces an important tool in Phase 5: Verification to ensure that security requirements are not just discussed but actually put into practice. In this step, the previously defined security mechanisms are checked to confirm that they have in fact been implemented: the postulations from the preceding phases are compared with the current state, and deviations are documented. Traditional security testing methods, such as source code analysis, vulnerability scanning, and penetration testing, can be used for this purpose.

There is no sweeping definition of which testing approach should be used and to what extent. Every analysis aims to balance reliability and efficiency. For example, it is preferable to work with the source code of a solution if it is available. Otherwise, more time-consuming reverse engineering or dynamic tests (e.g. vulnerability scanning and penetration testing) can be used as an alternative.
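
As an illustration of such a dynamic test, the following Python sketch checks a single postulation from the requirements phase, namely that the administrative area is not reachable without authentication. The target URL is a hypothetical placeholder; a real verification would combine many such checks with vulnerability scanners and manual testing:

    # Sketch of a dynamic verification check: compare a postulated requirement
    # ("the admin area requires authentication") with the current state.
    # The target URL is a hypothetical placeholder.
    import urllib.error
    import urllib.request

    TARGET = "https://example.internal/admin/"

    def admin_requires_authentication(url: str) -> bool:
        """Expect an unauthenticated request to the admin area to be rejected."""
        try:
            with urllib.request.urlopen(url, timeout=10):
                return False                       # 2xx response: deviation
        except urllib.error.HTTPError as err:
            return err.code in (401, 403)          # rejected as expected
        except urllib.error.URLError as err:
            raise RuntimeError(f"target not reachable, result inconclusive: {err}")

    if __name__ == "__main__":
        if admin_requires_authentication(TARGET):
            print("Requirement met: admin area rejects unauthenticated access")
        else:
            print("DEVIATION: admin area reachable without authentication")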

Here it is important to understand that every test mechanism may have different requirements and address different aspects. Restricting verification to source code analysis is just as risky as relying exclusively on a network-based vulnerability scan. Technical understanding, experience, and good instincts are required here.

Patches, releases, and vulnerabilities

Changes to security-related aspects may be required both in the verification phase and in managing the established solution. There are two reasons for this:

  1. The threat situation changes: new attack techniques, tools, and exploits appear on the market.
  2. New vulnerabilities are discovered in the products used.

To ensure that these aspects do not go unnoticed and are addressed promptly, the current threat situation must be reassessed at regular intervals within the context of risk management. The market for attack tools (techniques, tools and exploits) must be kept in mind and the appropriate changes made.

Vulnerability management is in place to identify, document, and address weaknesses in the products used. On the one hand, various sources have to be managed here (e.g. vulnerability databases, mailing lists, social media) in order to identify newly published vulnerabilities as quickly as possible. On the other hand, active programs such as bug bounties can be set up with some additional effort. The reporting of newly discovered vulnerabilities by third parties is rewarded, which promotes dialogue and shrinks the time window for malicious activities.
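
A simple form of such source monitoring can be automated. The following Python sketch queries the public NVD CVE API (version 2.0) for entries matching a product keyword; the keyword and the further handling of the results are assumptions for illustration only:

    # Sketch of automated vulnerability monitoring via the public NVD CVE API 2.0.
    # The product keyword and result handling are illustrative assumptions.
    import json
    import urllib.parse
    import urllib.request

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def matching_cves(keyword: str, limit: int = 20) -> list[str]:
        """Return the IDs of CVE entries matching the given keyword."""
        query = urllib.parse.urlencode({"keywordSearch": keyword,
                                        "resultsPerPage": limit})
        with urllib.request.urlopen(f"{NVD_API}?{query}", timeout=30) as response:
            data = json.load(response)
        return [item["cve"]["id"] for item in data.get("vulnerabilities", [])]

    if __name__ == "__main__":
        for cve_id in matching_cves("openssl"):
            print(cve_id)

In practice, the results would additionally be filtered by publication date and matched against the deployed product versions before being fed into the risk assessment.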

Additionally, patch and release management eliminates the vulnerabilities that have been addressed by means of bug fixes and new versions. The preparation, testing, rollout, and verification of these optimizations must also be handled professionally.
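
The verification step of patch and release management can likewise be supported with a small check that compares installed component versions against the minimum versions defined for a rollout. Component names and version numbers in this sketch are purely illustrative:

    # Sketch of a patch-level check: compare installed component versions against
    # the minimum versions defined by patch/release management.
    # Component names and versions are illustrative assumptions.
    REQUIRED_MINIMUM = {
        "openssl": (3, 0, 14),
        "nginx":   (1, 26, 0),
    }

    INSTALLED = {                  # in practice collected from the systems themselves
        "openssl": (3, 0, 13),
        "nginx":   (1, 26, 1),
    }

    def outdated_components() -> list[str]:
        """Return components whose installed version is below the required minimum."""
        return [name for name, minimum in REQUIRED_MINIMUM.items()
                if INSTALLED.get(name, (0,)) < minimum]

    if __name__ == "__main__":
        for name in outdated_components():
            print(f"DEVIATION: {name} is below the required patch level")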

Conclusion

The security development lifecycle is a powerful tool for establishing security and putting security measures into practice. By ensuring that security requirements are addressed throughout the entire lifecycle of a product, it is possible to attain and maintain a good standard and to verify it as well.

This level of professionalization is unquestionably necessary in this day and age. Without SDLC, the security of a solution relies mainly on good luck. And the clock is always ticking until it finally runs out.

About the Author

Marc Ruef

Marc Ruef has been working in information security since the late 1990s. He is well-known for his many publications and books. His latest book, The Art of Penetration Testing, discusses security testing in detail. He is a lecturer at several universities, including ETH, HWZ, HSLU, and IKF. (ORCID 0000-0002-1328-6357)
