
Secure Application Baseline for Developers

Security control frameworks like NIST 800-53 define what an organisation should do, not how a developer should build it. A control statement such as 'the organisation develops, documents, and maintains a current baseline configuration of the information system' (CM-02) tells a compliance officer what to audit but tells an engineer nothing about Terraform state files, Kubernetes admission controllers, or CIS benchmark automation.

This gap is structural. Control frameworks are deliberately technology-agnostic and written at an organisational level. The translation from control requirement to code implementation happens informally: in the heads of senior engineers, in tribal knowledge, in ad-hoc wiki pages that rot within months. The result: security teams write policies that developers cannot action, developers build systems that auditors cannot map to controls, and the organisation passes its audit through manual evidence collection rather than engineering discipline.

This pattern provides the missing translation layer. For each control area relevant to application development, it defines three operations that a developer can execute against their own codebase and infrastructure:

  1. DETECT — Automated or semi-automated checks to determine what is already implemented. These map to specific tools, commands, and configuration inspections that a developer can run today.
  2. GAP — How to compare the current state against the control requirement and identify what is missing. This produces a concrete list of items to implement, not an abstract risk statement.
  3. IMPLEMENT — Practical guidance for closing each gap using modern tools and patterns. Framework-agnostic where possible, with specific examples for common stacks (AWS/GCP/Azure, Kubernetes, Terraform, GitHub Actions/GitLab CI).

The pattern is organised around developer concerns rather than control families. A developer thinks about 'how do I manage secrets', not 'how do I satisfy SC-12, SC-28, and IA-05'. The NIST mapping is provided for each area so that compliance teams can trace implementations back to controls.

This is the pattern that makes other patterns actionable. SP-027 (Secure AI Integration) tells you what controls an AI system needs; SP-041 tells you how to implement them.

Assessment scoring for this pattern uses a developer-centric maturity scale rather than the traditional CMMI model:

  1. Not implemented, not planned
  2. Planned, not yet implemented
  3. Implemented in dev/test/QA, not live in production
  4. Live in production, not monitored
  5. Live in production, measured and monitored

This maps to the actual deployment lifecycle that engineering teams experience. A control at level 3 has working code in a staging environment but has not yet been promoted to production. A control at level 4 is running in production but lacks observability: it works until it doesn't, and you won't know when it stops. Level 5 means the control is deployed, measured, and has alerting that detects degradation or bypass.
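The maturity scale above lends itself to simple tooling. A minimal sketch of how a team might record and triage per-area scores follows; the `Maturity` enum names and the `weakest_controls` helper are illustrative, not part of the pattern itself:

```python
from enum import IntEnum

class Maturity(IntEnum):
    """SP-041 developer-centric maturity scale (names are illustrative)."""
    NOT_PLANNED = 1      # Not implemented, not planned
    PLANNED = 2          # Planned, not yet implemented
    PRE_PRODUCTION = 3   # Implemented in dev/test/QA, not live in production
    UNMONITORED = 4      # Live in production, not monitored
    MONITORED = 5        # Live in production, measured and monitored

def weakest_controls(scores: dict[str, Maturity],
                     threshold: Maturity = Maturity.UNMONITORED) -> list[str]:
    """Return control areas scoring below the threshold, worst-first."""
    below = [(level, area) for area, level in scores.items() if level < threshold]
    return [area for level, area in sorted(below)]

scores = {
    "secrets-management": Maturity.MONITORED,
    "infrastructure-baseline": Maturity.PRE_PRODUCTION,
    "pipeline-security": Maturity.PLANNED,
}
print(weakest_controls(scores))
# → ['pipeline-security', 'infrastructure-baseline']
```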
Release: 26.02 · Authors: Samplawski, Vitruvius · Updated: 2026-03-14
Archived: this pattern has been archived and its content has been merged into SP-028.
ATT&CK: this pattern addresses 459 techniques across 13 tactics.
[Pattern diagram: Secure Application Baseline — Governance. The diagram maps control framework language ('The organisation shall develop, document, and maintain a current baseline configuration...') to developer implementation language (terraform plan · kube-bench · gitleaks · trivy · cosign · npm audit) through the three operations: 1. DETECT (infrastructure baseline, secrets scanning, dependency audit, auth path mapping, container security, log coverage audit, pipeline security review); 2. GAP (drift, committed secrets, unpinned dependencies, unauthenticated endpoints, root containers, unstructured logs, unsigned artifacts; the output is an engineering backlog of concrete tickets mapped to NIST controls); 3. IMPLEMENT (IaC + CIS benchmarks, secrets manager + workload identity, SBOM + SLSA + scanning, OIDC + RBAC + mTLS, distroless images + pod security, structured JSON logging, signed artifacts + pipeline gates); with continuous verification via CI/CD pipeline integration (CA-07, SA-11, SI-07, CA-02). Related patterns: SP-027 Secure AI Integration, SP-028 Secure DevOps Pipeline, SP-030 API Security, SP-012 Secure SDLC. 32 NIST 800-53 Rev 5 controls across 9 families · Authors: Vitruvius, Samplawski · Draft · 2026-02-07 · opensecurityarchitecture.org. Standards referenced: CIS Benchmarks, OWASP ASVS, SLSA, CycloneDX SBOM, Sigstore, Kubernetes Pod Security, OWASP Cheat Sheets, OpenSSF Scorecard.]


Key Control Areas

  • Infrastructure Baseline as Code (CM-02, CM-06, CM-07): DETECT — Scan for infrastructure drift between declared state (Terraform/CloudFormation/Pulumi) and actual cloud resources using tools like terraform plan, driftctl, or AWS Config rules. Check for undocumented manual changes via cloud provider audit trails. GAP — Compare deployed configuration against CIS Benchmarks for your platform (AWS CIS, Kubernetes CIS, Docker CIS) using automated scanners (Prowler, kube-bench, Docker Bench). Any resource not in IaC or any configuration diverging from benchmark is a gap. IMPLEMENT — Declare all infrastructure in version-controlled IaC with no manual console changes permitted. Enforce via Service Control Policies (AWS) or Organisation Policies (GCP). Run CIS benchmark checks in CI/CD pipeline as a gate. Tag all resources with owning team and IaC source reference. Drift detection on a scheduled basis with alerts on divergence.
  • Secrets and Key Management (SC-12, SC-28, IA-05): DETECT — Scan repositories for committed secrets using gitleaks, truffleHog, or GitHub secret scanning. Check environment variables and config files for plaintext credentials. Audit key rotation dates in your secrets manager. GAP — Any secret in source code, any credential older than rotation policy, any service using long-lived API keys instead of short-lived tokens, any secret not in a dedicated secrets manager. IMPLEMENT — Centralise all secrets in HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, or Azure Key Vault. Application code reads secrets from the manager at runtime, never from environment variables baked into images. Automate rotation with maximum 90-day lifecycle. Use workload identity (IRSA, Workload Identity Federation) instead of static credentials. Pre-commit hooks block secret commits.
  • Dependency and Supply Chain Security (SR-03, SR-04, SA-04, SA-11): DETECT — Generate SBOM (CycloneDX or SPDX format) from your package manager (npm, pip, Maven, Go modules). Run vulnerability scans (Snyk, Trivy, Grype, npm audit). Check for SLSA provenance attestations on critical dependencies. GAP — Any dependency without a known provenance, any vulnerability above your severity threshold unfixed beyond SLA, any missing lock file, any dependency pulling from unofficial registries. IMPLEMENT — Pin all dependency versions via lock files. Enable automated vulnerability scanning in CI with severity-gated blocking (critical/high block merge). Generate and publish SBOM with each release. Verify SLSA provenance for critical dependencies. Use private registry mirrors for supply chain isolation. Dependabot or Renovate for automated update PRs with test validation.
  • Authentication and Authorisation (AC-03, AC-06, IA-02, IA-08): DETECT — Map all authentication paths in your application (login, API keys, service-to-service, OAuth flows). Check session configuration (timeout, rotation, secure flags). Audit authorisation checks on every endpoint. GAP — Any endpoint without authentication, any authorisation check done client-side only, any use of custom auth instead of standard protocols (OIDC/OAuth 2.0), any service-to-service communication without mutual authentication, any hardcoded roles or permissions. IMPLEMENT — Use OIDC for user authentication via an established IdP (Auth0, Okta, Keycloak, cloud-native). Implement RBAC or ABAC at the API gateway or middleware layer. Every API endpoint must declare its required permissions. Service-to-service uses mTLS or signed JWTs with short expiry. Session tokens: HttpOnly, Secure, SameSite=Strict, rotated on privilege change. MFA for all privileged operations.
  • Structured Logging and Audit Trail (AU-02, AU-03, AU-06, AU-12): DETECT — Check log output format (structured JSON vs unstructured text). Verify what events are logged: authentication success/failure, authorisation decisions, data access, configuration changes, errors. Check for PII in logs. GAP — Any security-relevant event not logged, any log in unstructured format, any PII or secrets appearing in logs, any log not forwarded to centralised collection, any gap in log retention below regulatory requirement. IMPLEMENT — Structured JSON logging with consistent schema: timestamp, correlation ID, user ID, action, resource, outcome, source IP. Log all authentication events, authorisation failures, data access to sensitive resources, configuration changes, and error conditions. Never log passwords, tokens, PII, or request/response bodies containing sensitive data. Forward to centralised logging (ELK, Datadog, CloudWatch Logs) with retention meeting regulatory requirements. Alert on anomalous patterns.
  • Input Validation and Output Encoding (SI-10, SI-15, SC-08): DETECT — Audit all user input entry points: API request bodies, query parameters, headers, file uploads, WebSocket messages. Check for parameterised queries vs string concatenation in database access. Check output encoding in HTML rendering. GAP — Any user input used without validation, any SQL/NoSQL query built with string concatenation, any HTML output without encoding, any file upload without type/size validation, any API accepting unbounded input. IMPLEMENT — Validate all input at the API boundary using schema validation (JSON Schema, Zod, Joi, Pydantic). Reject by default, allowlist by exception. Use parameterised queries exclusively for all database operations. Apply context-appropriate output encoding (HTML entity encoding, URL encoding, JavaScript escaping). Limit file upload types, sizes, and scan for malware. Set Content-Security-Policy headers. Rate limit all endpoints.
  • Container and Runtime Security (CM-07, SC-07, SI-07, SC-28): DETECT — Scan container images for vulnerabilities (Trivy, Grype). Check Dockerfile for anti-patterns: running as root, large base images, secrets in build layers. Audit Kubernetes pod security: privileged containers, host network, writable root filesystem. GAP — Any container running as root, any image from untrusted registry, any container with writable root filesystem, any pod with excessive Linux capabilities, any missing network policy allowing unrestricted pod-to-pod communication. IMPLEMENT — Minimal base images (distroless, Alpine). Run as non-root user (USER directive in Dockerfile). Read-only root filesystem with tmpfs for write paths. Drop all Linux capabilities except required. Kubernetes Pod Security Standards (restricted profile). Network policies enforcing least-privilege pod communication. Image signing with Cosign/Notation and admission controller verification (Kyverno, OPA Gatekeeper). No latest tag — pin image digests in production.
  • CI/CD Pipeline Security (SA-11, CM-14, SI-07, SA-15): DETECT — Audit pipeline configuration: who can modify pipeline definitions, what secrets are available to builds, whether artifacts are signed, what quality gates exist. Check for direct pushes bypassing pipeline. GAP — Any pipeline without SAST/DAST scanning, any artifact deployed without signing, any pipeline secret accessible to arbitrary branches, any production deployment without approval gate, any ability to push directly to main branch. IMPLEMENT — Branch protection: require PR review and passing CI before merge. SAST scanning (Semgrep, CodeQL, SonarQube) as merge gate. DAST scanning against staging environments. Sign all build artifacts (Cosign for containers, GPG for packages). Separate build and deploy credentials with least privilege. Pipeline-as-code in version control (no UI-only configuration). Deployment approvals for production. Audit trail of all deployments with rollback capability.
  • API Security Hardening (SC-08, AC-04, SC-13, SI-10): DETECT — Inventory all API endpoints (OpenAPI spec, route listing). Check TLS configuration (SSL Labs scan). Verify rate limiting, authentication, and input validation on each endpoint. Check for information disclosure in error responses. GAP — Any API endpoint without TLS, any endpoint without rate limiting, any endpoint returning stack traces or internal details in errors, any endpoint without authentication (that should have it), any missing CORS configuration. IMPLEMENT — TLS 1.2+ on all endpoints with strong cipher suites. Rate limiting per client and per endpoint (token bucket or sliding window). Schema validation on all request bodies (reject malformed early). Generic error responses in production (no stack traces, no internal paths). CORS allowlist (never wildcard in production). API versioning strategy. Request size limits. Timeout configuration to prevent slow-loris. mTLS for service-to-service APIs. See SP-030 API Security for the full architectural pattern.
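The secrets-scanning DETECT step above is normally done with gitleaks or truffleHog; to make the mechanism concrete, here is a deliberately simplified sketch of pattern-based detection. The three regexes are illustrative only; real scanners ship hundreds of rules plus entropy analysis.

```python
import re

# Illustrative patterns in the spirit of gitleaks' rules (not its actual ruleset).
SECRET_PATTERNS = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private-key-header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, rule_name) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

# AWS's documented example key, safe to use in tests.
sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\nregion = "eu-west-1"\n'
print(scan_text(sample))  # → [(1, 'aws-access-key-id')]
```

Wired into a pre-commit hook, a scanner like this blocks the commit rather than merely reporting, which is the IMPLEMENT posture the Secrets and Key Management bullet calls for.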
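SBOM generation for the supply-chain bullet is a job for tools like cyclonedx-py, syft, or trivy; the sketch below only shows the shape of the artifact they produce. It emits a minimal subset of CycloneDX fields (`bomFormat`, `specVersion`, `components` with package URLs); treat the exact field set as an assumption to verify against the CycloneDX specification for your version.

```python
import json

def make_sbom(components: list[tuple[str, str]]) -> dict:
    """Build a minimal CycloneDX-style SBOM skeleton for PyPI packages."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "version": 1,
        "components": [
            {"type": "library", "name": name, "version": version,
             "purl": f"pkg:pypi/{name}@{version}"}  # package URL (purl) identifier
            for name, version in components
        ],
    }

sbom = make_sbom([("requests", "2.32.3"), ("urllib3", "2.2.2")])
print(json.dumps(sbom, indent=2))
```

Publishing such a document with each release gives auditors the SR-04 provenance trail, and gives the DETECT step a machine-readable inventory to diff against vulnerability feeds.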
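The structured-logging bullet's schema (timestamp, correlation ID, action, outcome, no PII) can be enforced centrally in a log formatter rather than at every call site. A minimal sketch using Python's stdlib `logging`; the redaction key list and field names are illustrative:

```python
import json
import logging
import sys
import uuid
from datetime import datetime, timezone

REDACTED_KEYS = {"password", "token", "authorization", "ssn"}  # illustrative

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per record with a consistent schema."""
    def format(self, record: logging.LogRecord) -> str:
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": record.levelname,
            "correlation_id": getattr(record, "correlation_id", None),
            "action": record.getMessage(),
        }
        # Merge structured fields, redacting anything sensitive by key name.
        extra = getattr(record, "fields", {})
        event.update({k: ("[REDACTED]" if k.lower() in REDACTED_KEYS else v)
                      for k, v in extra.items()})
        return json.dumps(event)

logger = logging.getLogger("audit")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("login.failure", extra={
    "correlation_id": str(uuid.uuid4()),
    "fields": {"user": "alice", "password": "hunter2", "source_ip": "10.0.0.9"},
})
```

Key-based redaction is a backstop, not a substitute for never passing secrets to the logger; it catches the cases code review misses.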
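The 'parameterised queries exclusively' rule in the input-validation bullet is easy to demonstrate end to end with stdlib sqlite3. The same attacker-controlled string rewrites a concatenated query but is inert as a bound parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "user")])

malicious = "nobody' OR '1'='1"

# Unsafe: string concatenation lets the input alter the query structure.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + malicious + "'").fetchall()

# Safe: the driver binds the value; it can never change the query structure.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)).fetchall()

print(unsafe)  # → [('alice',), ('bob',)]  -- injection returned every row
print(safe)    # → []                      -- no user is literally named that
```

The same discipline applies to every database driver and ORM: values travel as parameters, never as query text.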
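The API-hardening bullet names the token bucket as one rate-limiting option. A minimal single-process sketch of the algorithm follows; production limiters are usually enforced at the gateway or backed by shared state such as Redis, and the rate and capacity values here are arbitrary:

```python
import time

class TokenBucket:
    """Token-bucket limiter: refills `rate` tokens/second, bursts to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)     # start full: allow an initial burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=5.0, capacity=3)   # 5 req/s sustained, burst of 3
results = [bucket.allow() for _ in range(5)]
print(results)  # burst of 3 allowed, then immediate requests denied
```

One bucket per client (keyed by API key or source IP) plus one per endpoint gives the per-client, per-endpoint limiting the bullet describes.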

When to Use

Engineering teams building cloud-native applications who need to satisfy security control requirements. Startups and scale-ups approaching their first security review or compliance certification. Development teams receiving audit findings that they cannot translate into engineering work. Organisations where security architecture exists on paper but implementation is inconsistent. Any team where the question 'how do I actually implement this control?' goes unanswered.

When NOT to Use

Organisations with mature security engineering practices and established internal implementation guides. Legacy systems where infrastructure-as-code and CI/CD pipelines are not feasible without prior modernisation. Operational technology and embedded systems where the development model is fundamentally different from cloud-native application development.

Typical Challenges

Developers perceive security controls as compliance overhead rather than engineering discipline. Security teams write policies in control framework language that developers cannot action. Gap assessments produce risk registers instead of engineering backlogs. Implementation guidance is generic and does not account for the specific technology stack. Security scanning tools generate noise that teams learn to ignore. The translation from control to implementation lives in tribal knowledge and informal documentation that becomes stale.

Threat Resistance

Addresses the implementation gap that allows controls to exist on paper without corresponding technical implementation. Reduces attack surface through automated baseline enforcement and drift detection. Prevents common vulnerability classes (injection, broken authentication, secrets exposure) through systematic input validation, standard auth patterns, and secrets management. Strengthens supply chain security through SBOM generation, dependency scanning, and provenance verification. CI/CD hardening prevents pipeline compromise and ensures artifact integrity.

Assumptions

The application team uses a modern development stack with version control (Git), CI/CD pipelines, cloud infrastructure (AWS/GCP/Azure or self-hosted Kubernetes), and containerised deployments. The team has access to standard open-source security tooling. The organisation has identified which NIST 800-53 controls apply to their system.

Developing Areas

  • Secure-by-default framework adoption is accelerating, with memory-safe languages (Rust, Go, Swift) gaining traction for security-critical components. The US government (ONCD, CISA) has explicitly recommended migration away from memory-unsafe languages like C and C++. However, the existing codebase in C/C++ is measured in billions of lines, rewrite economics are prohibitive for most organisations, and the developer talent pool for Rust remains limited. The practical path for most teams is incremental adoption of memory-safe languages for new components rather than wholesale migration.
  • Runtime Application Self-Protection (RASP) effectiveness remains debated in the security community. RASP promises to detect and block attacks from within the application at runtime, but production deployments report performance overhead of 5-15%, high false positive rates for complex business logic, and difficulty distinguishing legitimate edge cases from attack patterns. The market is consolidating around fewer vendors, and most organisations that deploy RASP use it in monitoring mode rather than blocking mode, limiting its protective value.
  • Interactive Application Security Testing (IAST) adoption is growing as a middle ground between SAST (high false positives, no runtime context) and DAST (limited code coverage, slow feedback). IAST instruments the running application to observe data flow and detect vulnerabilities with runtime context, producing fewer false positives than SAST. However, IAST requires instrumentation agents that may not support all language runtimes, and coverage depends on test suite quality. The tool category is approximately 5 years behind SAST/DAST in ecosystem maturity.
  • Software composition analysis (SCA) accuracy is improving but still generates significant noise. The core challenge is determining whether a vulnerable dependency is actually reachable in the application's execution path -- a library may contain a vulnerable function that the application never calls. Reachability analysis is an emerging SCA capability that reduces false positives by 40-60% in early implementations, but it requires deep language-specific analysis that not all SCA tools support across all ecosystems.
  • AI-generated code security baseline is an urgent developing area as adoption of coding assistants (GitHub Copilot, Cursor, Claude) reaches 50%+ among professional developers. Studies show AI-generated code contains security vulnerabilities at rates comparable to human-written code, but the speed of generation means more vulnerable code reaches review faster. Establishing security scanning gates specifically for AI-assisted development -- mandatory SAST on all AI-generated suggestions, automated secret detection, and dependency validation -- is becoming a critical pipeline requirement that most organisations have not yet formalised.
CM: 5 · SC: 5 · SI: 3 · SA: 4 · AC: 3 · IA: 3 · AU: 4 · SR: 2 · CA: 2 · PL: 1 · PM: 1
CM-02 Baseline Configuration
CM-06 Configuration Settings
CM-07 Least Functionality
SC-12 Cryptographic Key Establishment and Management
SI-10 Information Input Validation
SA-11 Developer Testing and Evaluation
AC-03 Access Enforcement
AC-06 Least Privilege
IA-02 Identification and Authentication (Organizational Users)
IA-05 Authenticator Management
IA-08 Identification and Authentication (Non-Organizational Users)
SC-08 Transmission Confidentiality and Integrity
SC-28 Protection of Information at Rest
SC-07 Boundary Protection
SC-13 Cryptographic Protection
SI-07 Software, Firmware, and Information Integrity
SI-15 Information Output Filtering
SR-03 Supply Chain Controls and Processes
SR-04 Provenance
SA-04 Acquisition Process
SA-15 Development Process, Standards, and Tools
CM-03 Configuration Change Control
CM-14 Signed Components
AU-02 Event Logging
AU-03 Content of Audit Records
AU-06 Audit Record Review, Analysis, and Reporting
AU-12 Audit Record Generation
AC-04 Information Flow Enforcement
CA-07 Continuous Monitoring
CA-02 Control Assessments
PL-02 System Security and Privacy Plans
PM-14 Testing, Training, and Monitoring