OSA AI Security Suite: Five Patterns, One Architecture

OSA Core Team

When we published SP-027 in early February, it was OSA's first new security pattern in over a decade. It addressed a genuine gap: how to securely integrate AI agents into enterprise environments.

A structured gap analysis against MITRE ATLAS, the EU AI Act, ENISA's AI Threat Landscape, and NIST AI RMF exposed three problems with a single-pattern approach: topic overlap between independently written patterns, blind spots in areas the frameworks had not yet caught up with, and no clear reading order for practitioners trying to decide where to start.

The result is a five-pattern suite. Each pattern owns a distinct question, a distinct audience, and a distinct control set.

The Five Patterns

SP-027 Secure AI IntegrationHow do you secure an individual AI agent's operation?

The operational layer. Identity and authentication, prompt injection defence, tool authorisation and least privilege, audit logging, credential handling, and human-in-the-loop controls. The deployment model spectrum runs from chat-only assistants to autonomous multi-agent swarms — with escalating control requirements at each tier. This is where the agent meets the system.

SP-045 AI Governance and Responsible AIHow do you govern AI across the enterprise?

The management system layer. Training data governance, model lifecycle gates, bias and fairness assessment, transparency and explainability obligations, AI impact assessments, and ISO 42001 alignment. If SP-027 is about securing individual deployments, SP-045 is about the governance programme that decides what gets deployed and under what conditions. Primary audience: CISO, risk, compliance, legal.

SP-047 Secure Agentic AI FrameworksHow do you adopt agentic AI infrastructure safely at scale?

Enterprise infrastructure security for LangChain, CrewAI, AutoGen, LangGraph and their successors. Agent execution isolation, tool registry governance, guardrails architecture, RAG pipeline security, multi-agent communication trust, and framework supply chain. This is what the security architect needs to assess when the business wants to deploy an orchestration platform.

SP-048 Offensive AI and Deepfake DefenceHow do you defend against AI used as a weapon?

The attack-vector pattern — distinct from the three above, which address AI you deploy and secure. Deepfake executive impersonation, AI-generated spear-phishing at scale, synthetic identity fraud, voice cloning for vishing, AI-accelerated exploit development, and adversarial threat intelligence. Detection technologies, verification protocols, and policy controls exist for each of these; they needed a dedicated home.

SP-049 AI in Security OperationsHow do you use AI in your SOC without introducing new risks?

Covers defensive AI — SIEM with ML detection, AI-assisted incident triage, AI-generated threat intelligence — and the specific risks this introduces: detection model evasion, hallucination in security-critical decisions, over-reliance degrading analyst capability, and adversarially crafted log lines designed to suppress detections. Distinct audience from the other four: detection engineers and SOC architects rather than application or enterprise architects.

The Reading Order

The three deployment patterns form a clear stack:

  • SP-027 for individual agent security
  • SP-047 for framework infrastructure
  • SP-045 for enterprise governance

SP-048 and SP-049 are orthogonal — they address the offensive AI threat surface and the defensive AI operations surface respectively.

A security architect starting a new AI programme should read SP-045 first for the governance framework, then SP-027 when designing a specific integration, then SP-047 when the business wants a framework. SP-048 and SP-049 come in whenever threat modelling or SOC capability is in scope.

Why Five Patterns

The gap analysis identified seven instances of the same topic appearing in two or three patterns simultaneously — prompt injection, shadow AI, human oversight, multi-agent trust, model supply chain, memory persistence, and audit logging. That duplication was a signal: the boundary conditions were not right.

A single AI security pattern cannot serve an application architect securing a deployment, a CISO building a governance programme, and a SOC architect operationalising AI detection at the same time. The questions are different, the audiences are different, the control families are different.

Five patterns with clear ownership and explicit cross-references is a more honest architecture than one large pattern that attempts to cover everything.

The OSA Core Team