Secure Application Baseline for Developers
Key Control Areas
- Infrastructure Baseline as Code (CM-02, CM-06, CM-07): DETECT — Scan for infrastructure drift between declared state (Terraform/CloudFormation/Pulumi) and actual cloud resources using tools like terraform plan, driftctl, or AWS Config rules. Check for undocumented manual changes via cloud provider audit trails. GAP — Compare deployed configuration against CIS Benchmarks for your platform (AWS CIS, Kubernetes CIS, Docker CIS) using automated scanners (Prowler, kube-bench, Docker Bench). Any resource not in IaC or any configuration diverging from benchmark is a gap. IMPLEMENT — Declare all infrastructure in version-controlled IaC with no manual console changes permitted. Enforce via Service Control Policies (AWS) or Organization Policies (GCP). Run CIS benchmark checks in CI/CD pipeline as a gate. Tag all resources with owning team and IaC source reference. Run drift detection on a schedule and alert on divergence.
- Secrets and Key Management (SC-12, SC-28, IA-05): DETECT — Scan repositories for committed secrets using gitleaks, truffleHog, or GitHub secret scanning. Check environment variables and config files for plaintext credentials. Audit key rotation dates in your secrets manager. GAP — Any secret in source code, any credential older than rotation policy, any service using long-lived API keys instead of short-lived tokens, any secret not in a dedicated secrets manager. IMPLEMENT — Centralise all secrets in HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, or Azure Key Vault. Application code reads secrets from the manager at runtime, never from environment variables baked into images. Automate rotation with maximum 90-day lifecycle. Use workload identity (IRSA, Workload Identity Federation) instead of static credentials. Pre-commit hooks block secret commits.
- Dependency and Supply Chain Security (SR-03, SR-04, SA-04, SA-11): DETECT — Generate SBOM (CycloneDX or SPDX format) from your package manager (npm, pip, Maven, Go modules). Run vulnerability scans (Snyk, Trivy, Grype, npm audit). Check for SLSA provenance attestations on critical dependencies. GAP — Any dependency without a known provenance, any vulnerability above your severity threshold unfixed beyond SLA, any missing lock file, any dependency pulling from unofficial registries. IMPLEMENT — Pin all dependency versions via lock files. Enable automated vulnerability scanning in CI with severity-gated blocking (critical/high block merge). Generate and publish SBOM with each release. Verify SLSA provenance for critical dependencies. Use private registry mirrors for supply chain isolation. Dependabot or Renovate for automated update PRs with test validation.
- Authentication and Authorisation (AC-03, AC-06, IA-02, IA-08): DETECT — Map all authentication paths in your application (login, API keys, service-to-service, OAuth flows). Check session configuration (timeout, rotation, secure flags). Audit authorisation checks on every endpoint. GAP — Any endpoint without authentication, any authorisation check done client-side only, any use of custom auth instead of standard protocols (OIDC/OAuth 2.0), any service-to-service communication without mutual authentication, any hardcoded roles or permissions. IMPLEMENT — Use OIDC for user authentication via an established IdP (Auth0, Okta, Keycloak, cloud-native). Implement RBAC or ABAC at the API gateway or middleware layer. Every API endpoint must declare its required permissions. Service-to-service uses mTLS or signed JWTs with short expiry. Session tokens: HttpOnly, Secure, SameSite=Strict, rotated on privilege change. MFA for all privileged operations.
- Structured Logging and Audit Trail (AU-02, AU-03, AU-06, AU-12): DETECT — Check log output format (structured JSON vs unstructured text). Verify what events are logged: authentication success/failure, authorisation decisions, data access, configuration changes, errors. Check for PII in logs. GAP — Any security-relevant event not logged, any log in unstructured format, any PII or secrets appearing in logs, any log not forwarded to centralised collection, any gap in log retention below regulatory requirement. IMPLEMENT — Structured JSON logging with consistent schema: timestamp, correlation ID, user ID, action, resource, outcome, source IP. Log all authentication events, authorisation failures, data access to sensitive resources, configuration changes, and error conditions. Never log passwords, tokens, PII, or request/response bodies containing sensitive data. Forward to centralised logging (ELK, Datadog, CloudWatch Logs) with retention meeting regulatory requirements. Alert on anomalous patterns.
- Input Validation and Output Encoding (SI-10, SI-15, SC-08): DETECT — Audit all user input entry points: API request bodies, query parameters, headers, file uploads, WebSocket messages. Check for parameterised queries vs string concatenation in database access. Check output encoding in HTML rendering. GAP — Any user input used without validation, any SQL/NoSQL query built with string concatenation, any HTML output without encoding, any file upload without type/size validation, any API accepting unbounded input. IMPLEMENT — Validate all input at the API boundary using schema validation (JSON Schema, Zod, Joi, Pydantic). Reject by default, allowlist by exception. Use parameterised queries exclusively for all database operations. Apply context-appropriate output encoding (HTML entity encoding, URL encoding, JavaScript escaping). Limit file upload types and sizes, and scan uploads for malware. Set Content-Security-Policy headers. Rate limit all endpoints.
- Container and Runtime Security (CM-07, SC-07, SI-07, SC-28): DETECT — Scan container images for vulnerabilities (Trivy, Grype). Check Dockerfile for anti-patterns: running as root, large base images, secrets in build layers. Audit Kubernetes pod security: privileged containers, host network, writable root filesystem. GAP — Any container running as root, any image from untrusted registry, any container with writable root filesystem, any pod with excessive Linux capabilities, any missing network policy allowing unrestricted pod-to-pod communication. IMPLEMENT — Minimal base images (distroless, Alpine). Run as non-root user (USER directive in Dockerfile). Read-only root filesystem with tmpfs for write paths. Drop all Linux capabilities except required. Kubernetes Pod Security Standards (restricted profile). Network policies enforcing least-privilege pod communication. Image signing with Cosign/Notation and admission controller verification (Kyverno, OPA Gatekeeper). No latest tag — pin image digests in production.
- CI/CD Pipeline Security (SA-11, CM-14, SI-07, SA-15): DETECT — Audit pipeline configuration: who can modify pipeline definitions, what secrets are available to builds, whether artifacts are signed, what quality gates exist. Check for direct pushes bypassing pipeline. GAP — Any pipeline without SAST/DAST scanning, any artifact deployed without signing, any pipeline secret accessible to arbitrary branches, any production deployment without approval gate, any ability to push directly to main branch. IMPLEMENT — Branch protection: require PR review and passing CI before merge. SAST scanning (Semgrep, CodeQL, SonarQube) as merge gate. DAST scanning against staging environments. Sign all build artifacts (Cosign for containers, GPG for packages). Separate build and deploy credentials with least privilege. Pipeline-as-code in version control (no UI-only configuration). Deployment approvals for production. Audit trail of all deployments with rollback capability.
- API Security Hardening (SC-08, AC-04, SC-13, SI-10): DETECT — Inventory all API endpoints (OpenAPI spec, route listing). Check TLS configuration (SSL Labs scan). Verify rate limiting, authentication, and input validation on each endpoint. Check for information disclosure in error responses. GAP — Any API endpoint without TLS, any endpoint without rate limiting, any endpoint returning stack traces or internal details in errors, any endpoint without authentication (that should have it), any missing CORS configuration. IMPLEMENT — TLS 1.2+ on all endpoints with strong cipher suites. Rate limiting per client and per endpoint (token bucket or sliding window). Schema validation on all request bodies (reject malformed early). Generic error responses in production (no stack traces, no internal paths). CORS allowlist (never wildcard in production). API versioning strategy. Request size limits. Timeout configuration to mitigate slowloris-style attacks. mTLS for service-to-service APIs. See SP-030 API Security for the full architectural pattern.
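Real drift detection belongs to tools like terraform plan and driftctl, but the core comparison behind the Infrastructure Baseline as Code control is a diff between declared and live state. A deliberately tiny sketch; the detect_drift helper and the resource attributes below are hypothetical, not any tool's API.

```python
def detect_drift(declared: dict, actual: dict) -> dict:
    """Report every attribute where live state diverges from declared IaC state."""
    drift = {}
    for key in declared.keys() | actual.keys():
        want, have = declared.get(key), actual.get(key)
        if want != have:
            drift[key] = {"declared": want, "actual": have}
    return drift

# Hypothetical resource attributes for illustration.
declared = {"instance_type": "t3.micro", "encrypted": True, "tags": {"team": "platform"}}
actual = {"instance_type": "t3.small", "encrypted": True}

report = detect_drift(declared, actual)
```

Run on a schedule, a non-empty report becomes the alert payload: the attribute, what IaC declares, and what the cloud API actually returned.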
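The pre-commit hook that the Secrets and Key Management control calls for boils down to pattern matching over content about to be committed. The two regexes below are illustrative only; real scanners such as gitleaks ship hundreds of tuned rules plus entropy analysis.

```python
import re

# Illustrative rules only; production scanners use far larger rule sets.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns matched in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

Wired into a pre-commit framework, a non-empty result fails the commit before the secret ever reaches the repository history.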
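Lock-file pinning from the Dependency and Supply Chain control can be spot-checked mechanically. The sketch below assumes pip-style requirement lines for illustration; real enforcement should rely on the package manager's own lock format (package-lock.json, poetry.lock, go.sum).

```python
import re

# Matches "name==1.2.3" style exact pins (pip syntax assumed for illustration).
PINNED = re.compile(r"^[A-Za-z0-9_.\-]+==\S+$")

def unpinned(requirements: list[str]) -> list[str]:
    """Return requirement lines that are not pinned to an exact version."""
    return [line for line in requirements if not PINNED.match(line.strip())]

reqs = ["requests==2.32.3", "flask>=2.0", "urllib3"]
```

A CI gate would fail the build whenever unpinned() returns anything, forcing every dependency through the lock file.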
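The session-token flags listed under Authentication and Authorisation map directly onto Set-Cookie attributes. A sketch using Python's standard http.cookies module; in practice your framework's session middleware sets these for you.

```python
from http.cookies import SimpleCookie

def session_cookie(token: str) -> str:
    """Build a Set-Cookie value carrying the hardening flags from the baseline."""
    cookie = SimpleCookie()
    cookie["session"] = token
    morsel = cookie["session"]
    morsel["httponly"] = True      # not readable from JavaScript
    morsel["secure"] = True        # only sent over TLS
    morsel["samesite"] = "Strict"  # not sent on cross-site requests
    morsel["path"] = "/"
    return morsel.OutputString()

header = session_cookie("opaque-session-id")
```

The token itself stays opaque here; rotation on privilege change is a server-side action (issue a new token, invalidate the old) that these flags do not replace.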
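The consistent JSON schema required by the Structured Logging control can be enforced with a custom formatter. A minimal stdlib-only sketch; the field names follow the schema above, and anything beyond them (trace IDs, source IP enrichment) is left to your stack.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object with a fixed audit schema."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "correlation_id": getattr(record, "correlation_id", None),
            "user_id": getattr(record, "user_id", None),
            "action": getattr(record, "action", None),
            "outcome": record.getMessage(),
        })

logger = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Pass schema fields via `extra`; never put passwords, tokens, or PII here.
logger.info("success", extra={"correlation_id": "req-42",
                              "user_id": "u-123", "action": "login"})
```

Because every record serialises through one formatter, the centralised pipeline receives a uniform schema it can index and alert on.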
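The parameterised-query rule in the Input Validation control is easiest to see next to the injection it prevents. A runnable sketch using Python's stdlib sqlite3; the table and data are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("alice@example.com",))

def find_user(conn: sqlite3.Connection, email: str):
    # The ? placeholder binds the value as data, so input like
    # "' OR 1=1 --" can never change the query's structure.
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()
```

The same query built by string concatenation would return every row for the payload above; with binding it simply finds no matching email.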
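A few of the Dockerfile anti-patterns from the Container and Runtime Security control can be caught with a trivial lint pass. Illustrative only; hadolint and image scanners do this properly, and the two checks below are intentionally naive (a digest-pinned FROM passes only because it contains a colon).

```python
def audit_dockerfile(text: str) -> list[str]:
    """Flag two anti-patterns: no non-root USER, and unpinned base images."""
    findings = []
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    runs_as_non_root = any(
        line.upper().startswith("USER ") and line.split()[1] not in ("root", "0")
        for line in lines
    )
    if not runs_as_non_root:
        findings.append("no non-root USER directive")
    for line in lines:
        if line.upper().startswith("FROM ") and (":latest" in line or ":" not in line):
            findings.append(f"unpinned base image: {line}")
    return findings

bad = "FROM python:latest\nUSER root\nCOPY . /app"
```

Run as a CI step, any finding blocks the image build, the same gate pattern the pipeline controls use elsewhere.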
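Artifact signing in the CI/CD control is normally done with Cosign or GPG; the sketch below only shows the sign-then-verify shape, using an HMAC over the artifact digest. The key here is a placeholder: in a real pipeline it lives in a KMS or Cosign's keyless flow, never in code, and the build and deploy stages hold separate credentials.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-only-key"  # placeholder; never hardcode a real signing key

def sign_artifact(data: bytes) -> str:
    """Sign the SHA-256 digest of a build artifact."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SIGNING_KEY, digest, "sha256").hexdigest()

def verify_artifact(data: bytes, signature: str) -> bool:
    """Deploy-time gate: reject any artifact whose signature does not verify."""
    return hmac.compare_digest(sign_artifact(data), signature)

artifact = b"contents of release-1.4.2.tar.gz"
signature = sign_artifact(artifact)
```

Verification at deploy time means a tampered artifact fails closed, regardless of where in the pipeline the tampering happened.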
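The per-client rate limiting in the API Security control usually lives in the gateway (Envoy, NGINX, cloud API gateways), but the token-bucket algorithm itself is small. A single-process sketch; a real deployment needs shared state, for example in Redis, so every instance enforces the same budget.

```python
import time

class TokenBucket:
    """Token bucket: refills at `rate` tokens/second up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should respond with HTTP 429

bucket = TokenBucket(rate=1.0, capacity=3)
decisions = [bucket.allow() for _ in range(5)]
```

The capacity sets the burst allowance and the rate sets the sustained throughput, which is why the two are tuned per endpoint rather than globally.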
When to Use
Engineering teams building cloud-native applications who need to satisfy security control requirements. Startups and scale-ups approaching their first security review or compliance certification. Development teams receiving audit findings that they cannot translate into engineering work. Organisations where security architecture exists on paper but implementation is inconsistent. Any team where the question 'how do I actually implement this control?' goes unanswered.
When NOT to Use
Organisations with mature security engineering practices and established internal implementation guides. Legacy systems where infrastructure-as-code and CI/CD pipelines are not feasible without prior modernisation. Operational technology and embedded systems where the development model is fundamentally different from cloud-native application development.
Typical Challenges
Developers perceive security controls as compliance overhead rather than engineering discipline. Security teams write policies in control framework language that developers cannot action. Gap assessments produce risk registers instead of engineering backlogs. Implementation guidance is generic and does not account for the specific technology stack. Security scanning tools generate noise that teams learn to ignore. The translation from control to implementation lives in tribal knowledge and informal documentation that becomes stale.
Threat Resistance
Addresses the implementation gap that allows controls to exist on paper without corresponding technical implementation. Reduces attack surface through automated baseline enforcement and drift detection. Prevents common vulnerability classes (injection, broken authentication, secrets exposure) through systematic input validation, standard auth patterns, and secrets management. Strengthens supply chain security through SBOM generation, dependency scanning, and provenance verification. CI/CD hardening prevents pipeline compromise and ensures artifact integrity.
Assumptions
The application team uses a modern development stack with version control (Git), CI/CD pipelines, cloud infrastructure (AWS/GCP/Azure or self-hosted Kubernetes), and containerised deployments. The team has access to standard open-source security tooling. The organisation has identified which NIST 800-53 controls apply to their system.
Developing Areas
- Secure-by-default framework adoption is accelerating, with memory-safe languages (Rust, Go, Swift) gaining traction for security-critical components. The US government (ONCD, CISA) has explicitly recommended migration away from memory-unsafe languages like C and C++. However, the existing codebase in C/C++ is measured in billions of lines, rewrite economics are prohibitive for most organisations, and the developer talent pool for Rust remains limited. The practical path for most teams is incremental adoption of memory-safe languages for new components rather than wholesale migration.
- Runtime Application Self-Protection (RASP) effectiveness remains debated in the security community. RASP promises to detect and block attacks from within the application at runtime, but production deployments report performance overhead of 5-15%, high false positive rates for complex business logic, and difficulty distinguishing legitimate edge cases from attack patterns. The market is consolidating around fewer vendors, and most organisations that deploy RASP use it in monitoring mode rather than blocking mode, limiting its protective value.
- Interactive Application Security Testing (IAST) adoption is growing as a middle ground between SAST (high false positives, no runtime context) and DAST (limited code coverage, slow feedback). IAST instruments the running application to observe data flow and detect vulnerabilities with runtime context, producing fewer false positives than SAST. However, IAST requires instrumentation agents that may not support all language runtimes, and coverage depends on test suite quality. The tool category is approximately 5 years behind SAST/DAST in ecosystem maturity.
- Software composition analysis (SCA) accuracy is improving but still generates significant noise. The core challenge is determining whether a vulnerable dependency is actually reachable in the application's execution path -- a library may contain a vulnerable function that the application never calls. Reachability analysis is an emerging SCA capability that reduces false positives by 40-60% in early implementations, but it requires deep language-specific analysis that not all SCA tools support across all ecosystems.
- AI-generated code security baseline is an urgent developing area as adoption of coding assistants (GitHub Copilot, Cursor, Claude) reaches 50%+ among professional developers. Studies show AI-generated code contains security vulnerabilities at rates comparable to human-written code, but the speed of generation means more vulnerable code reaches review faster. Establishing security scanning gates specifically for AI-assisted development -- mandatory SAST on all AI-generated suggestions, automated secret detection, and dependency validation -- is becoming a critical pipeline requirement that most organisations have not yet formalised.
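The reachability question described under SCA reduces, at its core, to graph search over a static call graph; real tools spend their effort building that graph with language-specific analysis. A toy sketch over an invented call graph, in which the vulnerable function exists in a dependency but is never called:

```python
from collections import deque

def is_reachable(call_graph: dict[str, list[str]], entry: str, target: str) -> bool:
    """BFS: can `target` be reached from `entry` through the call graph?"""
    seen = {entry}
    queue = deque([entry])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# Hypothetical application: lib.deserialize is vulnerable but never called.
call_graph = {
    "main": ["lib.parse", "app.render"],
    "lib.parse": ["lib.tokenize"],
    "lib.deserialize": ["lib.unpickle"],
}
```

A finding against lib.deserialize would be deprioritised here, which is exactly the noise reduction reachability-aware SCA promises.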