Secure Software Development Lifecycle
Key Control Areas
- Threat Modelling and Secure Design (SA-08, SA-11, SA-15, PL-08, RA-03): Security by design starts before code. SA-08 establishes security and privacy engineering principles: defence in depth, least privilege, fail-secure defaults, separation of duties, and minimising attack surface. These principles guide architectural decisions and provide the criteria against which designs are evaluated. SA-11 mandates developer testing and evaluation including security testing: threat modelling during design reviews using STRIDE, attack trees, or equivalent methodologies to identify threats at the architectural level. SA-15 defines the development process standards: when threat models must be produced (new services, major changes, new data flows), who participates (developer, architect, security champion), and what the outputs are (threat register, mitigations mapped to controls). PL-08 establishes the security architecture: reference architectures for common patterns (web application, API service, event-driven, microservices) that embed security controls by default. RA-03 performs risk assessment at the design phase: data classification determines security requirements, regulatory context determines compliance controls, and threat context determines the depth of security review required.
- Secure Coding Standards and Developer Training (SA-08, SA-11, AT-03, SA-16, CM-04): Developers are the first line of defence. SA-08 defines secure coding standards: language-specific guidelines (OWASP Secure Coding Practices, SEI CERT standards) covering input validation, output encoding, authentication, session management, access control, cryptography, error handling, and logging. SA-11 requires that developers test their own code for security: unit tests that cover security-relevant behaviour (authentication bypass, authorization boundary, input validation edge cases), not just functional requirements. AT-03 provides role-based security training: annual secure coding training for all developers, advanced training for security champions, and just-in-time training triggered by common vulnerability findings in code reviews. SA-16 provides developer security resources: internal secure coding guidelines, approved cryptographic libraries, authentication/authorization frameworks, and reusable security components that make the secure way the easy way. CM-04 analyses security impact of changes: security-relevant code changes (authentication, authorization, cryptography, input handling) require review by a security champion or AppSec engineer before merge.
- Static Analysis and Code Review (SA-11, CM-04, SA-15, RA-05, SI-10): Automated analysis catches what humans miss. SA-11 mandates static application security testing (SAST): tools integrated into CI/CD that scan source code for vulnerability patterns -- injection flaws, XSS, insecure deserialization, hardcoded credentials, and cryptographic misuse. SAST runs on every pull request, blocking merge when high-severity findings are confirmed. CM-04 requires security review of changes: pull request reviews include security considerations, and changes to security-sensitive code paths trigger mandatory review by a security champion. SA-15 defines SAST tool configuration: rule sets tuned to reduce false positives (the fastest way to make developers ignore security tooling), custom rules for organisation-specific patterns, and severity thresholds that distinguish blocking findings from advisory findings. RA-05 scans for vulnerabilities: SAST is one input to vulnerability management, with findings tracked, triaged, and remediated within defined SLAs. SI-10 validates information input: static analysis rules specifically targeting input validation weaknesses, ensuring that all external input is validated before use.
- Software Composition Analysis and Supply Chain (SA-11, SR-03, SR-04, SA-04, RA-05): Most code in modern applications is third-party. SA-11 includes software composition analysis (SCA): tools that scan dependency manifests (package.json, requirements.txt, pom.xml, go.mod) to identify known vulnerabilities in third-party libraries. SR-03 establishes supply chain controls: approved package registries, dependency pinning, integrity verification (checksums, signatures), and policies for acceptable licence types. SR-04 provides supply chain provenance: tracking where dependencies come from, verifying publisher identity, and monitoring for dependency confusion or typosquatting attacks. SA-04 governs the acquisition process: security requirements for third-party components including vulnerability disclosure policies, patch SLAs, and end-of-life commitments. RA-05 monitors for new vulnerabilities: SCA runs continuously (not just at build time) to detect newly disclosed CVEs in deployed dependencies, triggering patching workflows for critical findings. Secrets scanning is a related capability: detecting API keys, passwords, and tokens accidentally committed to source repositories, with pre-commit hooks as the first line and CI scanning as the backstop.
- Dynamic Analysis and Security Testing (SA-11, CA-08, RA-05, SI-06, SC-07): Test the running application. SA-11 mandates dynamic application security testing (DAST): automated scanning of deployed applications to identify runtime vulnerabilities -- injection points, authentication weaknesses, insecure headers, TLS configuration issues, and information disclosure. DAST runs against staging environments as part of the release pipeline. CA-08 conducts penetration testing: manual security testing for high-risk applications, performed by internal AppSec engineers or external specialists, targeting business logic flaws and complex attack chains that automated tools miss. RA-05 integrates dynamic findings into vulnerability management: DAST and penetration test findings are tracked alongside SAST and SCA findings with consistent severity classification and SLAs. SI-06 verifies functional security: testing that security controls work as designed -- authentication cannot be bypassed, authorization boundaries hold, rate limiting functions, and audit logging captures all security-relevant events. SC-07 tests boundary protection: verifying that network-level security controls (WAF rules, API gateway policies, CORS configuration) function correctly in the deployed environment.
- Security Gates and Release Management (SA-11, CM-02, CM-03, SA-15, CA-02): Nothing ships without passing security checks. SA-11 defines security gate criteria for release: maximum acceptable vulnerability counts by severity (zero critical, limited high, tracked medium), SCA licence compliance, secrets scanning clean, and DAST baseline pass. CM-02 establishes secure baseline configurations for deployment: hardened container images, secure default configurations, and infrastructure-as-code templates that embed security controls. CM-03 governs configuration change control: release processes that include security sign-off for changes affecting security posture, with change classification determining the level of review required. SA-15 defines the development process security requirements: what evidence must be produced at each phase (threat model, SAST results, SCA report, DAST results, penetration test report) to demonstrate due diligence. CA-02 assesses control effectiveness: regular evaluation of whether security gates are functioning (are findings being detected, are they being remediated, are SLAs being met) with metrics reported to security governance.
- Vulnerability Management and Maintenance (RA-05, SI-02, SA-11, CM-04, IR-06): Security doesn't stop at release. RA-05 mandates ongoing vulnerability monitoring: continuous scanning of deployed applications and their dependencies for newly discovered vulnerabilities, with automated alerting and triage workflows. SI-02 governs flaw remediation: defined SLAs for patching based on severity (critical: 24-48 hours, high: 7 days, medium: 30 days, low: 90 days), with exception processes for cases where immediate patching isn't feasible. SA-11 extends to maintenance: regression testing after patches to confirm fixes and ensure no new vulnerabilities are introduced. CM-04 analyses security impact of patches: ensuring that security patches don't break functionality and that dependency updates don't introduce new vulnerabilities. IR-06 covers vulnerability disclosure: processes for receiving vulnerability reports from external researchers, triaging them, coordinating fixes, and issuing advisories. Bug bounty programmes complement automated scanning by incentivising external researchers to find vulnerabilities that tools miss.
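The release-gate criteria above (zero critical, limited high, tracked medium) can be sketched as a simple threshold check. This is an illustrative sketch, not any specific tool's interface: the finding schema, threshold values, and function names are assumptions made for the example.

```python
# Hypothetical release-gate check enforcing severity thresholds against
# scanner findings. Schema and limits are illustrative assumptions.
GATE_THRESHOLDS = {"critical": 0, "high": 3}  # max findings allowed per severity

def gate_passes(findings):
    """Return (passed, reasons) for a list of findings.

    Each finding is a dict with at least a 'severity' key; severities
    without a configured threshold (medium, low) are tracked but never block.
    """
    counts = {}
    for f in findings:
        sev = f["severity"].lower()
        counts[sev] = counts.get(sev, 0) + 1

    reasons = []
    for severity, limit in GATE_THRESHOLDS.items():
        found = counts.get(severity, 0)
        if found > limit:
            reasons.append(f"{severity}: {found} findings exceed limit of {limit}")
    return (not reasons, reasons)

findings = [
    {"id": "CVE-2024-0001", "severity": "critical"},
    {"id": "CVE-2024-0002", "severity": "medium"},
]
passed, reasons = gate_passes(findings)  # fails: one critical exceeds the limit of 0
```

In practice this logic runs as a CI step that consumes scanner output (SARIF or tool-native JSON) and fails the build when the gate fails, which is what makes the gate enforceable rather than advisory.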
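The secrets-scanning backstop described under supply chain controls can be illustrated with a minimal pattern match. Real scanners ship far larger, entropy-aware rule sets; the three patterns below are simplified examples (the AWS access key prefix is the documented `AKIA` format, the other two rules are illustrative).

```python
import re

# Minimal pre-commit secrets check. Patterns are simplified examples of the
# rules a real scanner would apply to staged file content.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return a list of (rule_name, matched_text) hits found in text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits
```

Wired into a pre-commit hook, a non-empty result blocks the commit; the same function run over the full repository history in CI acts as the backstop for anything that slipped past local hooks.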
When to Use
This pattern applies to every organisation that develops or commissions software. It is particularly critical for:
- organisations developing internet-facing applications
- SaaS providers processing customer data
- financial services firms subject to DORA's ICT risk management requirements
- organisations subject to PCI DSS requirement 6 (develop and maintain secure systems)
- healthcare organisations developing systems that process patient data
- organisations with regulatory obligations around software security (FCA, SEC, PRA)
- any organisation that has experienced a security incident caused by a software vulnerability
- organisations adopting DevOps or CI/CD who want to embed security from the start rather than retrofit it
When NOT to Use
Organisations that exclusively use SaaS products and do no custom development do not need a full SSDL -- but they should still assess the SSDL practices of their SaaS vendors (see SA-04). Very small development teams (1-3 developers) should prioritise automated tooling (SAST, SCA, secrets scanning) over process-heavy approaches like formal threat modelling -- a lightweight security checklist at design time may suffice. Organisations should not attempt to implement all SSDL practices simultaneously: start with the highest-value automated checks (SCA for known CVEs, secrets scanning, basic SAST) and mature incrementally.
Typical Challenges
The greatest challenge is developer adoption: security tools that generate excessive false positives are quickly ignored, and security gates that block delivery without clear justification are circumvented. Tool tuning is essential -- invest time in configuring SAST rules, suppressing known false positives, and establishing severity thresholds that developers trust. Security champions are hard to recruit and retain: the role requires security interest plus development credibility, and champions often burn out if they become the sole security contact for their team. Threat modelling adoption is inconsistent: teams understand the value but struggle to find time during sprint planning, leading to threat models that are either skipped or produced as compliance artefacts rather than genuine design exercises. Legacy applications present the biggest gap: applying modern SSDL to new development is relatively straightforward, but retrofitting security into legacy codebases generates overwhelming finding volumes that paralyse remediation. Dependency management is a constant battle: modern applications have hundreds of transitive dependencies, vulnerability alerts arrive daily, and patching one dependency often breaks another. Measuring SSDL effectiveness is difficult: fewer production vulnerabilities could mean the process is working or that nobody is looking hard enough.
Threat Resistance
Secure SDLC prevents vulnerabilities from reaching production. Injection attacks (SQL, command, LDAP, XSS) are caught by SAST rules that detect unsanitised input flowing to dangerous sinks, and by DAST scans that probe for injection points in running applications (SA-11, SI-10). Known vulnerable dependencies are detected by SCA before deployment and monitored continuously post-deployment, preventing exploitation of published CVEs (SR-03, RA-05). Insecure authentication and session management flaws are identified through threat modelling (design phase), secure coding standards (implementation phase), and security testing (verification phase), providing multiple layers of defence (SA-08, SA-11). Supply chain attacks through compromised dependencies are mitigated by dependency pinning, integrity verification, and SCA monitoring that detects unexpected changes in package behaviour (SR-03, SR-04). Hardcoded credentials and secrets in source code are caught by pre-commit hooks and CI scanning before they reach the repository, preventing credential exposure through repository leaks (SA-11, IA-05). Business logic vulnerabilities -- the category that automated tools miss -- are addressed through threat modelling that identifies abuse cases and penetration testing that targets complex attack chains (SA-11, CA-08).
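The source-to-sink pattern that SAST rules detect can be shown concretely. This sketch uses Python and sqlite3 purely for illustration, with a made-up `users` table: the first function concatenates tainted input into the query (the sink a SAST rule flags), the second passes the same input as a parameter.

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Tainted input concatenated into the query string -- the classic
    # injection sink that taint-tracking SAST rules flag.
    return conn.execute(
        f"SELECT id FROM users WHERE name = '{username}'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterised query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# The injection payload dumps every row through the vulnerable function...
assert len(find_user_vulnerable(conn, "' OR '1'='1")) == 2
# ...but matches nothing when the same input is passed as a parameter.
assert find_user_safe(conn, "' OR '1'='1") == []
```

The same distinction is what a DAST scanner probes for from the outside: it sends payloads like `' OR '1'='1` and watches for responses that reveal the input was interpreted as query syntax.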
Assumptions
The organisation develops software (in-house or through contracted development). A CI/CD pipeline exists or is being established (see SP-028). Development teams use version control (Git). Some security awareness exists within the development organisation. Management supports the investment in security tooling and the acceptance of security gates that may slow delivery. Budget exists for SAST/DAST/SCA tooling (commercial or open-source).
Developing Areas
- AI-assisted code generation is creating new vulnerability patterns that traditional SAST tools were not designed to detect. Large language models (GitHub Copilot, Amazon CodeWhisperer, Cursor) generate plausible but subtly insecure code -- using deprecated cryptographic functions, introducing race conditions, or producing input validation that handles common cases but fails on adversarial input. Studies show that AI-generated code contains vulnerabilities at rates comparable to or higher than human-written code, and developers using AI assistants tend to review generated code less critically. Security tooling specifically designed to audit AI-generated code is emerging but immature.
- Software Bill of Materials (SBOM) adoption is advancing through regulatory pressure (US Executive Order 14028, EU Cyber Resilience Act) but consumption maturity lags far behind generation maturity. Organisations can produce SBOMs in SPDX or CycloneDX format, but the tooling to ingest SBOMs from suppliers, correlate them with vulnerability databases, and integrate findings into existing vulnerability management workflows is fragmented. The challenge is not producing SBOMs but building operational processes around them -- particularly for organisations consuming hundreds of SBOMs from their software supply chain.
- Software attestation frameworks, particularly SLSA (Supply-chain Levels for Software Artifacts), define graduated levels of supply chain integrity assurance from basic provenance tracking (Level 1) to hardened builds with unforgeable provenance (Level 3). While SLSA provides a clear maturity model, achieving Level 2 or above requires significant build infrastructure investment including isolated build environments, signed provenance metadata, and tamper-proof build logs. Fewer than 10% of organisations have implemented SLSA beyond Level 1, and the tooling ecosystem (Sigstore, in-toto, Witness) is still consolidating.
- Shift-left fatigue is becoming a measurable problem in organisations that have aggressively embedded security checks into developer workflows. When SAST, SCA, secrets scanning, licence compliance, container scanning, and infrastructure-as-code scanning all run on every pull request, build times increase, alert volumes overwhelm, and developers begin ignoring or bypassing security findings. The emerging response is intelligent orchestration platforms (Apiiro, Ox Security) that risk-rank findings and route only material alerts to developers, but calibrating these systems to avoid both alert fatigue and genuine-miss risk is an ongoing challenge.
- Security champion programme effectiveness is difficult to measure and programmes frequently stall after initial enthusiasm. Research suggests that effective programmes require at least 10% dedicated time allocation, executive sponsorship, career path incentives, and a community structure that prevents champions from becoming isolated. Organisations that treat the champion role as a volunteer add-on to an already full development workload report attrition rates above 40% annually, undermining the distributed security model they are trying to build.
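The SBOM consumption gap described above can be made concrete with a minimal correlation step: parse a CycloneDX document and match its components against an advisory feed. The `components[].name`/`version` fields follow the CycloneDX JSON schema; the advisory data here is a hard-coded stand-in for a real feed such as OSV or NVD, and Log4Shell (CVE-2021-44228, affecting log4j-core 2.14.1) is used as the example match.

```python
import json

# Minimal CycloneDX SBOM with two components, one of them vulnerable.
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "requests", "version": "2.31.0"}
  ]
}
"""

# Stand-in for an advisory feed: (name, version) -> advisory id.
KNOWN_VULNERABLE = {("log4j-core", "2.14.1"): "CVE-2021-44228"}

def match_sbom(sbom_text, advisories):
    """Return [(component_name, version, advisory_id)] for vulnerable components."""
    sbom = json.loads(sbom_text)
    hits = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in advisories:
            hits.append((comp["name"], comp["version"], advisories[key]))
    return hits
```

The operational difficulty the bullet describes starts exactly here: exact-match lookups miss version ranges, package-name aliasing, and ecosystem-specific identifiers (purl), which is why ingesting hundreds of supplier SBOMs requires dedicated tooling rather than a lookup table.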