
Secure Software Development Lifecycle

The Secure Software Development Lifecycle (SSDL) integrates security activities into every phase of software development, from requirements through design, implementation, testing, release, and maintenance. Its purpose is to identify and remediate security weaknesses as early as possible, where the cost of fixing them is lowest and the risk of shipping them to production is sharply reduced.

The economics of secure development are compelling but widely misunderstood. Fixing a vulnerability found in design costs a fraction of fixing the same vulnerability found in production. Yet most organisations still rely on penetration testing -- the last and most expensive phase -- as their primary security assurance mechanism. A mature SSDL shifts the majority of security activity left, into the phases where developers are already working, making security a natural part of development rather than an adversarial gate at the end.

The SSDL has six phases:

  • Requirements and Design establishes security requirements derived from data classification, regulatory obligations, and threat context, then uses threat modelling to identify architectural weaknesses before any code is written.
  • Implementation applies secure coding standards and uses IDE-integrated tools to catch vulnerabilities as developers type.
  • Verification employs automated analysis -- static analysis (SAST), dynamic analysis (DAST), software composition analysis (SCA), and secrets scanning -- integrated into CI/CD pipelines so that insecure code cannot merge.
  • Testing includes security-specific testing: abuse cases, authentication bypass attempts, injection testing, and authorization boundary validation.
  • Release applies final security gates: compliance checks, vulnerability thresholds, licence compliance for dependencies, and sign-off from the security function.
  • Maintenance covers vulnerability management for deployed software: patching, dependency updates, and responding to newly discovered vulnerabilities in components.
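
The six phases can be sketched as an ordered model; a minimal Python illustration (the `Phase` type and activity labels are illustrative, not part of the pattern itself):

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    """One SSDL phase and the security activities it owns (labels illustrative)."""
    name: str
    activities: list[str] = field(default_factory=list)

# The six phases described above, in order.
SSDL = [
    Phase("Requirements & Design", ["security requirements", "threat modelling"]),
    Phase("Implementation", ["secure coding standards", "IDE-integrated checks"]),
    Phase("Verification", ["SAST", "DAST", "SCA", "secrets scanning"]),
    Phase("Testing", ["abuse cases", "authz boundary tests", "injection testing"]),
    Phase("Release", ["vulnerability thresholds", "licence compliance", "sign-off"]),
    Phase("Maintenance", ["patching", "dependency updates", "CVE response"]),
]

def activities_before(phase_name: str) -> list[str]:
    """All security activities that should already have run on entry to a phase."""
    done: list[str] = []
    for phase in SSDL:
        if phase.name == phase_name:
            break
        done.extend(phase.activities)
    return done
```

A structure like this is useful for gate checks: on entry to Testing, for example, every Verification activity should already have evidence attached.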
Two organisational models make SSDL work. The centralised model has a dedicated AppSec team that performs threat models, code reviews, and penetration tests -- this scales to about one AppSec engineer per 10-15 developers before becoming a bottleneck. The distributed model uses security champions embedded in development teams, supported by a smaller central AppSec team that provides tooling, training, and escalation -- this scales much further and builds security culture. Most mature organisations use a hybrid: security champions for day-to-day work, central AppSec for high-risk reviews and tooling.

The relationship between SP-012 (Secure SDLC) and SP-028 (Secure DevOps Pipeline) is complementary: SP-028 focuses on the toolchain and automation, while SP-012 focuses on the human process, governance, and security activities that the toolchain supports. Together they provide comprehensive coverage of modern software delivery security.
Release: 26.02 | Authors: Aurelius, Vitruvius | Updated: 2026-02-07
This pattern addresses 439 techniques across 13 tactics of the MITRE ATT&CK matrix.
[Diagram: Secure Software Development Lifecycle. Six phases with their primary activities and controls: (1) Requirements & Design -- threat modelling, security requirements (RA-03, SA-08); (2) Implementation -- secure coding, peer review (SA-08, SR-03); (3) Verification -- SAST/SCA code analysis (SA-11, SA-15); (4) Testing -- DAST and penetration testing (CA-08, SA-11); (5) Release -- security gate, change control (CM-03, SA-15); (6) Maintenance -- vulnerability monitoring and patching (RA-05, SI-02); connected by continuous feedback. Governance, standards and training (SA-15, SA-08, AT-03) sit above the lifecycle, and a security champions programme (AT-03) spans Implementation through Release. Key metrics: mean time to remediate (MTTR), defect escape rate, security gate pass rate, dependency risk score. Key control families: SA (system and services acquisition), RA (risk assessment), CM (configuration management), SI (system and information integrity), CA (assessment and authorization), SR (supply chain risk management). After OWASP SAMM (owaspsamm.org); opensecurityarchitecture.org.]


Key Control Areas

  • Threat Modelling and Secure Design (SA-08, SA-11, SA-15, PL-08, RA-03): Security by design starts before code. SA-08 establishes security and privacy engineering principles: defence in depth, least privilege, fail-secure defaults, separation of duties, and minimising attack surface. These principles guide architectural decisions and provide the criteria against which designs are evaluated. SA-11 mandates developer testing and evaluation including security testing: threat modelling during design reviews using STRIDE, attack trees, or equivalent methodologies to identify threats at the architectural level. SA-15 defines the development process standards: when threat models must be produced (new services, major changes, new data flows), who participates (developer, architect, security champion), and what the outputs are (threat register, mitigations mapped to controls). PL-08 establishes the security architecture: reference architectures for common patterns (web application, API service, event-driven, microservices) that embed security controls by default. RA-03 performs risk assessment at the design phase: data classification determines security requirements, regulatory context determines compliance controls, and threat context determines the depth of security review required.
  • Secure Coding Standards and Developer Training (SA-08, SA-11, AT-03, SA-16, CM-04): Developers are the first line of defence. SA-08 defines secure coding standards: language-specific guidelines (OWASP Secure Coding Practices, SEI CERT standards) covering input validation, output encoding, authentication, session management, access control, cryptography, error handling, and logging. SA-11 requires that developers test their own code for security: unit tests that cover security-relevant behaviour (authentication bypass, authorization boundary, input validation edge cases), not just functional requirements. AT-03 provides role-based security training: annual secure coding training for all developers, advanced training for security champions, and just-in-time training triggered by common vulnerability findings in code reviews. SA-16 provides developer security resources: internal secure coding guidelines, approved cryptographic libraries, authentication/authorization frameworks, and reusable security components that make the secure way the easy way. CM-04 analyses security impact of changes: security-relevant code changes (authentication, authorization, cryptography, input handling) require review by a security champion or AppSec engineer before merge.
  • Static Analysis and Code Review (SA-11, CM-04, SA-15, RA-05, SI-10): Automated analysis catches what humans miss. SA-11 mandates static application security testing (SAST): tools integrated into CI/CD that scan source code for vulnerability patterns -- injection flaws, XSS, insecure deserialization, hardcoded credentials, and cryptographic misuse. SAST runs on every pull request, blocking merge when high-severity findings are confirmed. CM-04 requires security review of changes: pull request reviews include security considerations, and changes to security-sensitive code paths trigger mandatory review by a security champion. SA-15 defines SAST tool configuration: rule sets tuned to reduce false positives (the fastest way to make developers ignore security tooling), custom rules for organisation-specific patterns, and severity thresholds that distinguish blocking findings from advisory findings. RA-05 scans for vulnerabilities: SAST is one input to vulnerability management, with findings tracked, triaged, and remediated within defined SLAs. SI-10 validates information input: static analysis rules specifically targeting input validation weaknesses, ensuring that all external input is validated before use.
  • Software Composition Analysis and Supply Chain (SA-11, SR-03, SR-04, SA-04, RA-05): Most code in modern applications is third-party. SA-11 includes software composition analysis (SCA): tools that scan dependency manifests (package.json, requirements.txt, pom.xml, go.mod) to identify known vulnerabilities in third-party libraries. SR-03 establishes supply chain controls: approved package registries, dependency pinning, integrity verification (checksums, signatures), and policies for acceptable licence types. SR-04 provides supply chain provenance: tracking where dependencies come from, verifying publisher identity, and monitoring for dependency confusion or typosquatting attacks. SA-04 governs the acquisition process: security requirements for third-party components including vulnerability disclosure policies, patch SLAs, and end-of-life commitments. RA-05 monitors for new vulnerabilities: SCA runs continuously (not just at build time) to detect newly disclosed CVEs in deployed dependencies, triggering patching workflows for critical findings. Secrets scanning is a related capability: detecting API keys, passwords, and tokens accidentally committed to source repositories, with pre-commit hooks as the first line and CI scanning as the backstop.
  • Dynamic Analysis and Security Testing (SA-11, CA-08, RA-05, SI-06, SC-07): Test the running application. SA-11 mandates dynamic application security testing (DAST): automated scanning of deployed applications to identify runtime vulnerabilities -- injection points, authentication weaknesses, insecure headers, TLS configuration issues, and information disclosure. DAST runs against staging environments as part of the release pipeline. CA-08 conducts penetration testing: manual security testing for high-risk applications, performed by internal AppSec engineers or external specialists, targeting business logic flaws and complex attack chains that automated tools miss. RA-05 integrates dynamic findings into vulnerability management: DAST and penetration test findings are tracked alongside SAST and SCA findings with consistent severity classification and SLAs. SI-06 verifies functional security: testing that security controls work as designed -- authentication cannot be bypassed, authorization boundaries hold, rate limiting functions, and audit logging captures all security-relevant events. SC-07 tests boundary protection: verifying that network-level security controls (WAF rules, API gateway policies, CORS configuration) function correctly in the deployed environment.
  • Security Gates and Release Management (SA-11, CM-02, CM-03, SA-15, CA-02): Nothing ships without passing security checks. SA-11 defines security gate criteria for release: maximum acceptable vulnerability counts by severity (zero critical, limited high, tracked medium), SCA licence compliance, secrets scanning clean, and DAST baseline pass. CM-02 establishes secure baseline configurations for deployment: hardened container images, secure default configurations, and infrastructure-as-code templates that embed security controls. CM-03 governs configuration change control: release processes that include security sign-off for changes affecting security posture, with change classification determining the level of review required. SA-15 defines the development process security requirements: what evidence must be produced at each phase (threat model, SAST results, SCA report, DAST results, penetration test report) to demonstrate due diligence. CA-02 assesses control effectiveness: regular evaluation of whether security gates are functioning (are findings being detected, are they being remediated, are SLAs being met) with metrics reported to security governance.
  • Vulnerability Management and Maintenance (RA-05, SI-02, SA-11, CM-04, IR-06): Security doesn't stop at release. RA-05 mandates ongoing vulnerability monitoring: continuous scanning of deployed applications and their dependencies for newly discovered vulnerabilities, with automated alerting and triage workflows. SI-02 governs flaw remediation: defined SLAs for patching based on severity (critical: 24-48 hours, high: 7 days, medium: 30 days, low: 90 days), with exception processes for cases where immediate patching isn't feasible. SA-11 extends to maintenance: regression testing after patches to confirm fixes and ensure no new vulnerabilities are introduced. CM-04 analyses security impact of patches: ensuring that security patches don't break functionality and that dependency updates don't introduce new vulnerabilities. IR-06 covers vulnerability disclosure: processes for receiving vulnerability reports from external researchers, triaging them, coordinating fixes, and issuing advisories. Bug bounty programmes complement automated scanning by incentivising external researchers to find vulnerabilities that tools miss.
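
As a small illustration of the secrets-scanning backstop described above, a pre-commit-style check can be sketched with a handful of regular-expression rules. The patterns here are illustrative; production scanners such as gitleaks or TruffleHog ship far larger, entropy-aware rule sets:

```python
import re

# Illustrative detection rules only -- real scanners use hundreds of rules
# plus entropy analysis to catch high-randomness strings.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_token": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for each suspected secret in the text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Wired into a pre-commit hook this blocks the commit locally; the same rules run again in CI as the backstop for developers who bypass hooks.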
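
The release-gate criteria described above (zero critical, limited high, tracked medium) lend themselves to a simple automated check. A hedged sketch, assuming findings arrive as a flat list of severity labels aggregated from SAST, SCA, and DAST; the numeric limits are illustrative:

```python
from collections import Counter

# Blocking thresholds mirroring the gate criteria above (limits illustrative).
# Medium and low findings are tracked in vulnerability management, not blocking.
GATE_LIMITS = {"critical": 0, "high": 5}

def gate_passes(findings: list[str]) -> tuple[bool, list[str]]:
    """Evaluate the release security gate; returns (passed, failure reasons)."""
    counts = Counter(findings)
    reasons = [
        f"{sev}: {counts[sev]} found, limit {limit}"
        for sev, limit in GATE_LIMITS.items()
        if counts[sev] > limit
    ]
    return (not reasons, reasons)
```

Returning the reasons alongside the verdict matters in practice: a gate that fails silently is the fastest route to developers treating it as an obstacle rather than a signal.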
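
The remediation SLAs in the maintenance bullet can be expressed directly as data; a minimal sketch (the 48-hour outer bound is used for critical findings):

```python
from datetime import datetime, timedelta

# SLA windows from the maintenance bullet above.
REMEDIATION_SLA = {
    "critical": timedelta(hours=48),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),
}

def remediation_due(severity: str, discovered: datetime) -> datetime:
    """Deadline by which a finding of this severity must be remediated."""
    return discovered + REMEDIATION_SLA[severity]

def is_breached(severity: str, discovered: datetime, now: datetime) -> bool:
    """True if the remediation SLA for this finding has expired."""
    return now > remediation_due(severity, discovered)
```

Encoding the SLAs as data rather than prose makes breach reporting (and the MTTR metric from the diagram) a query instead of a manual audit.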

When to Use

This pattern applies to every organisation that develops or commissions software. It is particularly critical for:

  • organisations developing internet-facing applications
  • SaaS providers processing customer data
  • financial services firms subject to DORA's ICT risk management requirements
  • organisations subject to PCI DSS requirement 6 (develop and maintain secure systems)
  • healthcare organisations developing systems that process patient data
  • organisations with regulatory obligations around software security (FCA, SEC, PRA)
  • any organisation that has experienced a security incident caused by a software vulnerability
  • organisations adopting DevOps or CI/CD who want to embed security from the start rather than retrofit it

When NOT to Use

Organisations that exclusively use SaaS products and do no custom development do not need a full SSDL -- but they should still assess the SSDL practices of their SaaS vendors (see SA-04). Very small development teams (1-3 developers) should prioritise automated tooling (SAST, SCA, secrets scanning) over process-heavy approaches like formal threat modelling -- a lightweight security checklist at design time may suffice. Organisations should not attempt to implement all SSDL practices simultaneously: start with the highest-value automated checks (SCA for known CVEs, secrets scanning, basic SAST) and mature incrementally.

Typical Challenges

The greatest challenge is developer adoption: security tools that generate excessive false positives are quickly ignored, and security gates that block delivery without clear justification are circumvented. Tool tuning is essential -- invest time in configuring SAST rules, suppressing known false positives, and establishing severity thresholds that developers trust. Security champions are hard to recruit and retain: the role requires security interest plus development credibility, and champions often burn out if they become the sole security contact for their team. Threat modelling adoption is inconsistent: teams understand the value but struggle to find time during sprint planning, leading to threat models that are either skipped or produced as compliance artefacts rather than genuine design exercises. Legacy applications present the biggest gap: applying modern SSDL to new development is relatively straightforward, but retrofitting security into legacy codebases generates overwhelming finding volumes that paralyse remediation. Dependency management is a constant battle: modern applications have hundreds of transitive dependencies, vulnerability alerts arrive daily, and patching one dependency often breaks another. Measuring SSDL effectiveness is difficult: fewer production vulnerabilities could mean the process is working or that nobody is looking hard enough.

Threat Resistance

Secure SDLC prevents vulnerabilities from reaching production. Injection attacks (SQL, command, LDAP, XSS) are caught by SAST rules that detect unsanitised input flowing to dangerous sinks, and by DAST scans that probe for injection points in running applications (SA-11, SI-10). Known vulnerable dependencies are detected by SCA before deployment and monitored continuously post-deployment, preventing exploitation of published CVEs (SR-03, RA-05). Insecure authentication and session management flaws are identified through threat modelling (design phase), secure coding standards (implementation phase), and security testing (verification phase), providing multiple layers of defence (SA-08, SA-11). Supply chain attacks through compromised dependencies are mitigated by dependency pinning, integrity verification, and SCA monitoring that detects unexpected changes in package behaviour (SR-03, SR-04). Hardcoded credentials and secrets in source code are caught by pre-commit hooks and CI scanning before they reach the repository, preventing credential exposure through repository leaks (SA-11, IA-05). Business logic vulnerabilities -- the category that automated tools miss -- are addressed through threat modelling that identifies abuse cases and penetration testing that targets complex attack chains (SA-11, CA-08).
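
To make the injection case concrete, the sketch below shows the source-to-sink pattern that SAST rules flag, next to the parameterised fix, using Python's built-in sqlite3 module (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Tainted input flows straight into the SQL string -- the classic
    # source-to-sink pattern a SAST rule reports as SQL injection.
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
# The unsafe version returns every row for this payload; the safe one returns none.
```

The same principle -- keep untrusted input out of the code channel -- generalises to command, LDAP, and XPath injection, which is why SI-10 rules target the sink rather than any one query language.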

Assumptions

The organisation develops software (in-house or through contracted development). A CI/CD pipeline exists or is being established (see SP-028). Development teams use version control (Git). Some security awareness exists within the development organisation. Management supports the investment in security tooling and the acceptance of security gates that may slow delivery. Budget exists for SAST/DAST/SCA tooling (commercial or open-source).

Developing Areas

  • AI-assisted code generation is creating new vulnerability patterns that traditional SAST tools were not designed to detect. Large language models (GitHub Copilot, Amazon CodeWhisperer, Cursor) generate plausible but subtly insecure code -- using deprecated cryptographic functions, introducing race conditions, or producing input validation that handles common cases but fails on adversarial input. Studies show that AI-generated code contains vulnerabilities at rates comparable to or higher than human-written code, and developers using AI assistants tend to review generated code less critically. Security tooling specifically designed to audit AI-generated code is emerging but immature.
  • Software Bill of Materials (SBOM) adoption is advancing through regulatory pressure (US Executive Order 14028, EU Cyber Resilience Act) but consumption maturity lags far behind generation maturity. Organisations can produce SBOMs in SPDX or CycloneDX format, but the tooling to ingest SBOMs from suppliers, correlate them with vulnerability databases, and integrate findings into existing vulnerability management workflows is fragmented. The challenge is not producing SBOMs but building operational processes around them -- particularly for organisations consuming hundreds of SBOMs from their software supply chain.
  • Software attestation frameworks, particularly SLSA (Supply-chain Levels for Software Artifacts), define graduated levels of supply chain integrity assurance from basic provenance tracking (Level 1) to hermetic, reproducible builds (Level 3). While SLSA provides a clear maturity model, achieving Level 2 or above requires significant build infrastructure investment including isolated build environments, signed provenance metadata, and tamper-proof build logs. Fewer than 10% of organisations have implemented SLSA beyond Level 1, and the tooling ecosystem (Sigstore, in-toto, Witness) is still consolidating.
  • Shift-left fatigue is becoming a measurable problem in organisations that have aggressively embedded security checks into developer workflows. When SAST, SCA, secrets scanning, licence compliance, container scanning, and infrastructure-as-code scanning all run on every pull request, build times increase, alert volumes overwhelm, and developers begin ignoring or bypassing security findings. The emerging response is intelligent orchestration platforms (Apiiro, Ox Security) that risk-rank findings and route only material alerts to developers, but calibrating these systems to avoid both alert fatigue and genuine-miss risk is an ongoing challenge.
  • Security champion programme effectiveness is difficult to measure and programmes frequently stall after initial enthusiasm. Research suggests that effective programmes require at least 10% dedicated time allocation, executive sponsorship, career path incentives, and a community structure that prevents champions from becoming isolated. Organisations that treat the champion role as a volunteer add-on to an already full development workload report attrition rates above 40% annually, undermining the distributed security model they are trying to build.
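
To illustrate the SBOM-consumption gap described above, correlating a minimal CycloneDX-shaped document with an advisory feed takes only a few lines; a hedged sketch, where the advisory table is a toy stand-in for querying a real source such as OSV or the NVD:

```python
import json

# A minimal CycloneDX-shaped SBOM (real documents carry many more fields).
SBOM = json.loads("""{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "left-pad", "version": "1.3.0"},
    {"name": "requests", "version": "2.19.0"}
  ]
}""")

# Toy advisory feed keyed by (package, affected version). CVE-2018-18074 is
# a real advisory affecting requests < 2.20.0; the lookup shape is illustrative.
ADVISORIES = {("requests", "2.19.0"): "CVE-2018-18074"}

def match_sbom(sbom: dict) -> list[tuple[str, str]]:
    """Correlate SBOM components against the advisory feed."""
    hits = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in ADVISORIES:
            hits.append((comp["name"], ADVISORIES[key]))
    return hits
```

The hard part in production is everything around this loop: version-range matching, deduplicating transitive components, and routing hits into the existing vulnerability management workflow.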
Referenced Controls

AT-03 Role-Based Training
AU-02 Event Logging
AU-12 Audit Record Generation
CA-02 Control Assessments
CA-08 Penetration Testing
CM-02 Baseline Configuration
CM-03 Configuration Change Control
CM-04 Impact Analyses
CM-06 Configuration Settings
IA-05 Authenticator Management
IR-06 Incident Reporting
PL-08 Security and Privacy Architectures
RA-03 Risk Assessment
RA-05 Vulnerability Monitoring and Scanning
SA-04 Acquisition Process
SA-08 Security and Privacy Engineering Principles
SA-11 Developer Testing and Evaluation
SA-15 Development Process, Standards, and Tools
SA-16 Developer-Provided Training
SC-07 Boundary Protection
SI-02 Flaw Remediation
SI-04 System Monitoring
SI-06 Security and Privacy Function Verification
SI-10 Information Input Validation
SR-03 Supply Chain Controls and Processes
SR-04 Provenance