KLA Digital Logo
KLA Digital
EU AI Act7. April 202628 min read

OWASP Agentic AI Top 10 × EU AI Act: The Complete Compliance Crosswalk

The definitive mapping of all 10 OWASP Agentic Security Initiative risks to specific EU AI Act articles. Includes the master crosswalk table, a 7-control framework, downloadable PDF workbook + checklist, and the August 2, 2026 compliance deadline.

OWASP ASI risks mapped

10

High-risk deadline

August 2, 2026

Maximum penalty

€35M or 7% turnover

Researchers contributing

100+

The OWASP Agentic AI × EU AI Act crosswalk maps every OWASP Agentic Security Initiative (ASI) Top 10 risk to the specific EU AI Act articles that bind it. The Act's high-risk obligations under Articles 9, 10, 12, 14, 15, 17, and 26 become enforceable on August 2, 2026, with penalties up to €35 million or 7% of global annual turnover. Download the operational workbook below: PDF, no email required. The machine-readable CSV remains available in the download section for teams that want the raw mapping. Author, Expert Review Board, and source-by-source citations follow in the sections below.

The master crosswalk: all 10 ASI risks mapped to EU AI Act articles

Each row below maps one OWASP ASI risk to the primary EU AI Act articles that bind it, the secondary articles it touches, the key obligation it creates for providers and deployers, and the evidence artifact that a compliant implementation produces. The operational download for this post is the PDF workbook; the machine-readable CSV remains available as a supporting appendix.

Download: the gap-assessment workbookowasp-asi-eu-ai-act-gap-assessment-workbook.pdf (no email, no gate). Secondary appendix: owasp-asi-top10-eu-ai-act-crosswalk.csv.

OWASP ASI Top 10 (2025-12-09) × EU AI Act (Regulation (EU) 2024/1689) — full crosswalk
#OWASP ASI RiskPrimary EU AI Act ArticlesSecondary ArticlesKey ObligationEvidence Artifact
ASI01Agent Goal HijackArt. 9, Art. 15Art. 14, Art. 5Treat prompt injection as foreseeable misuse; harden against adversarial inputs.Prompt-injection test report + Art. 9 risk register entry
ASI02Tool Misuse and ExploitationArt. 9, Art. 15Art. 12, Art. 17, Annex IVAssess tool use as combined application; log every call and parameter.Tool Catalog + Lineage Records
ASI03Identity and Privilege AbuseArt. 9, Art. 15Art. 26, Art. 12Enforce least privilege; assign named human oversight.Scoped-credential audit + Agent Registry entries
ASI04Agentic Supply Chain VulnerabilitiesArt. 17Art. 9, Art. 15, Annex IVDocument supply-chain controls in the QMS; maintain an SBOM.Signed AgentCard + SBOM + Provider Hub change log
ASI05Unexpected Code ExecutionArt. 15Art. 9, Art. 14, Art. 12Use fail-safes for code execution; keep a human stop control.Sandbox execution log + Decision Desk approval
ASI06Memory and Context PoisoningArt. 15, Art. 10Art. 9, Art. 12Prevent feedback loops; apply data governance to memory and RAG.Data provenance log + memory-write audit trail
ASI07Insecure Inter-Agent CommunicationArt. 15Art. 9, Art. 12, Art. 17Protect A2A channels from spoofing; authenticate and log messages.Signed AgentCards + inter-agent message log
ASI08Cascading FailuresArt. 9, Art. 15Art. 17, Art. 26Assess cascade risk; add circuit breakers and fail-safes.Blast-radius test report + Assurance Center alerts
ASI09Human-Agent Trust ExploitationArt. 14Art. 5, Art. 50, Art. 27Counter automation bias; disclose AI use and complete FRIA where required.Automation-bias training + Art. 50 disclosure audit
ASI10Rogue AgentsArt. 9, Art. 14, Art. 15Art. 12, Art. 26, Annex IIIMonitor drift continuously; provide kill switch and incident reporting.Kill-switch verification + drift report + Art. 73 incident record

What is the OWASP Agentic Security Initiative Top 10?

The OWASP Top 10 for Agentic Applications 2026 is the canonical catalogue of the ten most critical security risks in autonomous, tool-using AI agent systems. It was released on December 9, 2025 at the London Agentic Security Summit through the Agentic Security Initiative (ASI), a working group under the OWASP GenAI Security Project. Each risk carries the `ASI` prefix (Agentic Security Issue) and is ranked by prevalence and impact observed in production deployments throughout 2024 and 2025.

The initiative is co-led by John Sotiropoulos (Head of AI Security at Kainose, ASI Co-lead and Top 10 Chair) and Keren Katz (Senior Group Manager of AI Security at Tenable, Top 10 Co-Lead), under the OWASP GenAI Security Project co-chaired by Steve Wilson (Chief Product Officer, Exabeam and founder of the original OWASP Top 10 for LLMs) and Scott Clinton (SCVentures). Development spanned over a year with input from 100+ security researchers and an Expert Review Board including representatives from NIST (Apostol Vassilev), Microsoft AI Red Team (Pete Bryan, Dan Jones), AWS (Matt Saner), Cisco (Hyrum Anderson), Oracle Cloud (Egor Pushkin), the Alan Turing Institute (Vasilios Mavroudis, Josh Collyer), and Zalando (Alejandro Saucedo).

Industry adoption was immediate. Microsoft published a blog mapping all 10 ASI risks to Copilot Studio controls (March 2026) and released the open-source Agent Governance Toolkit (April 2026). NVIDIA's Safety and Security Framework references the ASI Agentic Threat Modelling Guide. AWS embeds the Agentic Threats and Mitigations catalogue. GoDaddy implemented the ASI Agentic Naming Service proposal in production. Palo Alto Networks mapped all 10 risks to its Prisma AIRS platform. NIST's Apostol Vassilev called the framework timely, technically sound, and immediately actionable.

Is the ASI Top 10 the same as the OWASP LLM Top 10?
No. The OWASP Top 10 for Large Language Model Applications catalogues risks in single-turn LLM applications (chatbots, RAG assistants, content generators). The ASI Top 10 catalogues risks that only exist when an LLM acquires autonomy, persistent memory, tool access, and the ability to communicate with other agents — the agentic primitives. The ASI risks include cascading multi-agent failures (ASI08), inter-agent communication attacks (ASI07), and rogue drift (ASI10), which have no meaningful analogue in the LLM Top 10. Both catalogues are maintained by the same OWASP GenAI Security Project and are designed to be used together.

The EU AI Act articles in scope for agentic systems

The EU AI Act does not explicitly mention agentic AI, but the AI Office has confirmed that agents may have to comply with the requirements for AI systems and/or the obligations for providers of general-purpose AI models. The table below is the quick-reference for every article invoked by the crosswalk above. Columns cover the article number, the obligation in one sentence, and whether it binds the provider, the deployer, or both.

Articles in scope for the OWASP ASI × EU AI Act crosswalk
ArticleObligationBinds
Art. 5Prohibited practices (subliminal, manipulative, or deceptive techniques causing significant harm).Provider and deployer
Art. 9Risk management system covering known and foreseeable risks across the full lifecycle.Provider
Art. 10Data governance — quality criteria for training, validation, and testing datasets (including RAG stores).Provider
Art. 12Automatic logging of events enabling traceability over the system lifetime.Provider
Art. 14Human oversight, including intervention capability and automation-bias awareness.Provider (design) and deployer (operation)
Art. 15Accuracy, robustness, and cybersecurity — including resilience against adversarial inputs and feedback loops.Provider
Art. 17Quality management system documenting supply-chain, change management, and incident response.Provider
Art. 26Deployer obligations — assigning oversight, monitoring operation, input data quality, and logging.Deployer
Art. 27Fundamental Rights Impact Assessment (FRIA) before deploying high-risk systems in specified contexts.Deployer
Art. 50Transparency — users must be informed they are interacting with an AI system.Provider and deployer
Annex IIIHigh-risk use-case list (employment, credit scoring, law enforcement, etc.).Classification trigger
Annex IVTechnical documentation contents (system description, data, validation, oversight, changes).Provider

Download: workbook, controls checklist, and CSV appendix

The assets below are free, ungated, and copy-pasteable. Use the PDF workbook for the internal review meeting, the checklist for implementation detail, and the CSV only when you need a machine-readable appendix.

ASI01 — Agent Goal Hijack → Articles 9, 15, 14, 5

ASI01 Agent Goal Hijack redirects an agent's decision-making through injected instructions or poisoned content. Unlike traditional software exploitation requiring code modification, AI agents are redirected via natural language in emails, documents, or web pages. Article 9 is the primary obligation: the risk management system must address reasonably foreseeable misuse, a category that now mandates coverage of adversarial prompt injection.

Real-world incident. EchoLeak (CVE-2025-32711, CVSS 9.3) — the first production zero-click prompt injection — tricked Microsoft 365 Copilot into exfiltrating data via a single crafted email. No user interaction was required; the agent read the email as part of normal RAG retrieval and obeyed the embedded instructions.

  • Deploy input and output filtering with prompt-injection detection on every external content path — satisfies Art. 15; produces detection-engine test report in Evidence Room.
  • Enforce strict system-prompt isolation with instruction hierarchy — satisfies Art. 9; produces hierarchy-enforcement unit test log.
  • Route behavioural anomalies to the Decision Desk for human intervention — satisfies Art. 14(4)(c); produces Assurance Alert history.
  • Tag all external content as untrusted by default in every tool-calling path — satisfies Art. 9; produces Tool Catalog policy record.
  • Run prompt-injection red-team exercises every 90 days — satisfies Art. 9(6) (testing before market placement); produces red-team report in Evidence Room.
ASI01 Agent Goal Hijack — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 9(5)(a)Identify and mitigate known and foreseeable risks from intended use and foreseeable misuse.Prompt injection is not treated as a foreseeable misuse in the risk register.Add adversarial prompt injection to the Art. 9 risk register with documented mitigation.
Art. 15(5)Resilience against third-party attacks exploiting vulnerabilities (adversarial examples, data poisoning).External content causes instruction hierarchy override.Instruction-hierarchy enforcement with untrusted-content tagging on every tool-calling path.
Art. 14(4)(c)Ability to intervene or interrupt the system.No detection of goal drift; no mechanism to stop hijacked behaviour.Behavioural anomaly monitoring with alert routing to the Decision Desk.
Art. 5(1)(a)Prohibition on subliminal or manipulative techniques causing significant harm.A hijacked agent executes manipulative techniques toward end users.Output policy enforcement and human approval for user-facing communications.
Does a prompt-injection filter satisfy Article 15 cybersecurity requirements on its own?
No. A filter is a single detection control. Article 15(5) requires resilience as a system property: it must combine with (1) strict instruction-hierarchy enforcement in the system prompt, (2) untrusted-content tagging on inputs and outputs, (3) behavioural monitoring wired to Article 14 intervention, and (4) a documented entry in the Article 9 risk register explaining why the combination is proportionate. Filters alone are also defeated by recursive or obfuscated payloads documented in the OWASP AI Exchange; defense-in-depth is the only posture that meets both Articles 9 and 15 simultaneously.

ASI02 — Tool Misuse and Exploitation → Articles 9, 15, 12, 17, Annex IV

ASI02 Tool Misuse occurs when agents use legitimate tools in unsafe ways because of ambiguous prompts, misalignment, or manipulated input. A coding assistant with filesystem access becomes an exfiltration tool; a customer-service bot with email becomes a phishing engine. Article 9(3) is the primary obligation: risk must be evaluated across the combined application of system components, and tool interactions are exactly that.

Real-world incident. Amazon Q Code Assistant (CVE-2025-8217) affected approximately 1 million developers when compromised GitHub tokens injected destructive instructions into the assistant's tool chain. The instructions were indistinguishable from legitimate developer intent at the tool-call boundary.

  • Apply least-privilege tool scoping with explicit allowlists — satisfies Art. 9; produces Tool Catalog scope record.
  • Validate and sanitize every tool argument on invocation — satisfies Art. 15(4); produces argument-validation test report.
  • Require human approval for destructive operations (DB writes, financial transactions, file deletes) — satisfies Art. 14; produces Decision Desk approval record.
  • Log every tool call with parameters, caller identity, and result — satisfies Art. 12; produces Lineage Records.
  • Document each tool-chain combination in the Art. 9 risk register — satisfies Art. 9(3); produces risk register entry.
ASI02 Tool Misuse and Exploitation — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 9(3)Risk assessment must consider combined application of components.Tool chain treated as individual tools, not combined surface.Risk register entry covering every registered tool-chain combination.
Art. 15(4)Robustness against errors, faults, and inconsistencies from interactions with other systems.Tool argument injection causes undesired actions.Argument validation and sanitization plus destructive-op approval gates.
Art. 12(1)Automatic logging enabling traceability across the lifetime of the system.Tool invocations are not logged with full parameters.Lineage Records for every tool call, stored in the Evidence Room.
Art. 17(1)(l)QMS must document supply-chain management of components.New tools added without QMS change control.Tool Catalog change-management workflow with Release Control approval.
Annex IV §2(b)Description of how the system interacts with hardware, software, or other systems.Tool integration description missing from technical documentation.Tool Catalog export feeding Annex IV documentation.
Do I need to log the full tool-call arguments, or is a summary enough for Article 12?
Full arguments. Article 12(1) requires logs enabling traceability of the system's functioning. A summary field (tool name + timestamp) does not let an investigator reconstruct what actually happened. In practice, the Evidence Room must contain the tool name, parameters, caller identity, result, and correlation ID for every invocation. If parameters contain personal data, the log retention policy is governed by Article 12(2)–(3), not by an optional summarization step. See the AI Agent Audit Trails guide for the full logging contract.

ASI03 — Identity and Privilege Abuse → Articles 9, 15, 26, 12

ASI03 Identity and Privilege Abuse exploits delegated trust, inherited credentials, or role chains to gain unauthorized access. Agents operate with significant privileges — database access, cloud resources, internal APIs — and when compromised, the attacker inherits all of them. Article 15(5) directly mandates resilience against confidentiality breaches, which covers credential theft and privilege inheritance.

Real-world incident. Microsoft's Connected Agents feature in Copilot Studio exposed agent knowledge, tools, and topics to all other agents in the environment by default. Every agent implicitly trusted every other agent, collapsing the privilege boundary.

  • Issue agent-specific, short-lived, scoped credentials from the Secrets Vault — satisfies Art. 15; produces rotation log.
  • Deploy zero-trust between agents with no implicit trust — satisfies Art. 9; produces Agent Registry trust-boundary record.
  • Rotate credentials on every session boundary plus audit trail — satisfies Art. 12; produces rotation audit trail.
  • Assign a named, competent human to every agent identity — satisfies Art. 26(2); produces oversight assignment record.
  • Feed privilege-escalation detections into the Assurance Center — satisfies Art. 15(5); produces escalation-detection incident history.
ASI03 Identity and Privilege Abuse — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 9Identify and mitigate privilege-escalation risks through design.Agents share broad credentials with no scoping.Agent-specific, short-lived, scoped credentials issued from the Secrets Vault.
Art. 15(5)Resilience against confidentiality breaches and unauthorized access.Credential theft via prompt-injection payloads.Zero-trust between agents; no implicit inheritance.
Art. 26(2)Deployer must assign human oversight to competent, trained, authorized persons.No named owner for agent identities.Agent Registry owner field plus Decision Desk escalation path.
Art. 12Automatic logging of identity-relevant events.Credential rotations and privilege changes are untraced.Lineage Records for credential issuance, rotation, and usage.
If the deployer runs the agent, who is responsible for Article 15 credential scoping — the provider or the deployer?
Both, at different boundaries. Article 15 binds the provider to ship the system capable of fine-grained credential scoping (design-time resilience). Article 26(1) then binds the deployer to use the system in accordance with the instructions for use and assign competent oversight — which in practice means actually configuring scoped credentials rather than running the agent under a shared admin token. A provider that ships only a god-mode credential has breached Article 15; a deployer that ignores available scoping has breached Article 26. See AI Agent Permissions for the full least-privilege contract.

ASI04 — Agentic Supply Chain Vulnerabilities → Articles 17, 9, 15, Annex IV

ASI04 Supply Chain Vulnerabilities covers compromised third-party agents, tools, plugins, MCP servers, or update channels. Article 17(1)(l) is the primary obligation: the quality management system must document supply-chain-related measures covering acquisition, quality control, and change management across all third-party components.

Real-world incidents. postmark-mcp — the first malicious MCP server discovered in the wild — impersonated Postmark's email service and BCC'd all messages to an attacker. MCP Remote RCE (CVE-2025-6514, CVSS 9.6) enabled arbitrary OS command execution when clients connected to untrusted servers, turning every unvetted MCP integration into a remote-code-execution vector.

  • Verify MCP servers before connection (signature, identity, pinned version) — satisfies Art. 17(1)(l); produces Provider Hub verification record.
  • Produce an SBOM for every agent, tool, and model artifact — satisfies Annex IV; produces SBOM in Evidence Room.
  • Pin dependencies to known-good versions with runtime integrity monitoring — satisfies Art. 9; produces integrity-monitor log.
  • Require signed AgentCards for every remote agent — satisfies Art. 17; produces signed AgentCard in Agent Registry.
  • Monitor upstream definition changes post-approval — satisfies Art. 9; produces change-alert history.
ASI04 Supply Chain Vulnerabilities — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 17(1)(l)QMS must document supply-chain management.MCP servers connected without signature or version pinning.Provider Hub signature verification plus pinned version contracts.
Art. 9Continuous lifecycle risk management including third-party dependencies.Upstream definition changes not monitored.Upstream change monitoring with Assurance Center alerts.
Art. 15(5)Cybersecurity measures to prevent, detect, respond to, resolve, and control attacks.No integrity check on MCP server update channels.Runtime integrity monitoring of all loaded tools.
Annex IV §2(c)Documentation of all software, firmware versions, and update channels.No SBOM or version inventory for agent components.SBOM per agent release pinned in Evidence Room.
Does Annex IV actually require an SBOM, or is that a US executive-order artefact?
Annex IV does not use the acronym SBOM, but §2(c) explicitly requires the technical documentation to describe the computational resources used to develop, train, test and validate the AI system and §2(b) requires a description of how the system interacts with, or can be used to interact with, hardware or software, including other AI systems. The only practical way to satisfy both for an agentic system with 30+ upstream components (models, tools, MCP servers, frameworks) is an SBOM. CEN-CENELEC's prEN 18286 explicitly references bill-of-materials evidence as part of Article 17 supply-chain controls.

ASI05 — Unexpected Code Execution → Articles 15, 9, 14, 12

ASI05 Unexpected Code Execution occurs when agent-generated or agent-invoked code results in unintended execution, compromise, or escape. Article 15 is the primary obligation: it mandates technical redundancy, fail-safe plans, and cybersecurity resilience against exploitation. Sandboxing and execution controls are direct compliance measures.

Real-world incidents. Over 30 CVEs were discovered across major AI coding platforms in December 2025 alone. The IDEsaster research project demonstrated that 100% of tested AI IDEs — Claude Desktop, Cursor, GitHub Copilot, Windsurf — contained exploitable code execution paths. The attack pattern is consistent: untrusted content (a README, a comment, a docstring) tricks the agent into executing code outside the user's intent.

  • Run code in sandboxed environments with CPU, memory, filesystem, and network limits — satisfies Art. 15; produces sandbox execution log.
  • Require human approval for code touching databases, APIs, or filesystems — satisfies Art. 14; produces Decision Desk approval record.
  • Disable auto-run and auto-approve features by default in every IDE integration — satisfies Art. 9; produces configuration audit.
  • Capture full code and result in Lineage Records — satisfies Art. 12; produces Evidence Room entry.
  • Test code-execution boundaries against defined pass criteria before deployment — satisfies Art. 9(6); produces code-execution test report.
ASI05 Unexpected Code Execution — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 15(4)Technical redundancy and fail-safe plans.Agent executes arbitrary code with no resource limits.Sandboxed execution with CPU, memory, filesystem, network caps.
Art. 9(6)Testing against prior-defined metrics and probabilistic thresholds.Code execution boundaries untested before deployment.Pre-deployment code-execution test suite with defined pass criteria.
Art. 14(4)(e)Human ability to intervene or interrupt.Auto-approve mode executes destructive code without review.Destructive-op approval gate routed to the Decision Desk.
Art. 12Automatic logging enabling traceability.Code execution events not captured for forensic replay.Full code capture in Lineage Records.
Is a container the same as a sandbox for Article 15 purposes?
A container is a necessary but not sufficient control. Article 15(4) requires technical redundancy solutions, which may include backup or fail-safe plans. A stock container does not enforce CPU caps, does not block outbound network by default, and does not survive a privilege-escalation primitive. A compliant sandbox combines (1) a container (namespace isolation), (2) seccomp or syscall filtering, (3) enforced resource caps, (4) a default-deny egress policy, and (5) a kill switch — the combination is what satisfies Article 15. Your provider's QMS must document which layer each control lives in.

ASI06 — Memory and Context Poisoning → Articles 15, 10, 9, 12

ASI06 Memory and Context Poisoning corrupts stored context — memory, embeddings, RAG stores — to bias future reasoning and actions. Unlike stateless chatbots, agents maintain persistent memory; a single successful injection poisons all future sessions. Article 15 explicitly addresses feedback loops (biased outputs influencing input for future operations) and Article 10 governs the quality of training, validation, and testing datasets, directly applicable to RAG store integrity.

Real-world incident. Google Gemini was demonstrated vulnerable to delayed tool invocation: uploaded documents with hidden prompts caused storage of fake information triggered by common words in later sessions. The user who opened the document was not the user who was later attacked.

  • Record data provenance (source, timestamp, trust score) on every memory write — satisfies Art. 10; produces data-provenance log.
  • Deploy feedback-loop prevention on post-deployment learning paths — satisfies Art. 15; produces control evidence.
  • Run integrity checks and anomaly detection on RAG stores — satisfies Art. 9; produces integrity-check alerts.
  • Enforce memory expiration policies for sensitive contexts — satisfies Art. 10(3); produces expiration policy record.
  • Document RAG data-governance criteria in Annex IV — satisfies Art. 10(2); produces Annex IV data-governance section.
ASI06 Memory and Context Poisoning — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 15(4)Eliminate or reduce risk from feedback loops in post-deployment learning.Poisoned memory biases all subsequent outputs.Feedback-loop prevention controls on every learning path.
Art. 10(2)Data governance — quality criteria for training, validation, testing datasets.RAG store ingests untrusted content with no provenance.Provenance (source, timestamp, trust) on every memory write.
Art. 9Foreseeable-risk identification and mitigation.Memory poisoning absent from risk register.Integrity checks and anomaly detection on RAG stores.
Art. 12Logging of events supporting traceability.Memory writes not captured for forensic replay.Memory-write audit trail in Evidence Room.
Does a RAG store count as a training dataset under Article 10?
Yes, functionally. Article 10(1) governs training, validation and testing data sets used for techniques involving model training. Strictly, a RAG index is not training data. However, Article 15(4) explicitly catches post-deployment feedback loops where biased outputs influence input for future operations, which is exactly the mechanism of memory poisoning. The practical test is this: if your agent's next decision depends on data written by a previous session, the data-governance obligations of Article 10(2) (relevance, representativeness, freedom from errors) apply to that RAG store regardless of whether it is labelled training or memory.

ASI07 — Insecure Inter-Agent Communication → Articles 15, 9, 12, 17

ASI07 Insecure Inter-Agent Communication covers spoofing, intercepting, or manipulating agent-to-agent messages due to weak authentication or integrity. Article 15(5) directly requires resilience against confidentiality breaches, which covers message interception and spoofing between agents.

Real-world incident. Palo Alto Unit 42 discovered Agent Session Smuggling in the A2A protocol (November 2025): rogue agents exploited built-in trust relationships to hold multi-turn conversations, adapt their strategy, and build false trust with target agents. The attack pattern is novel because it does not breach any protocol — it abuses the trust primitive the protocol assumes.

  • Authenticate and encrypt A2A channels with message integrity verification — satisfies Art. 15; produces mTLS audit record.
  • Sign AgentCards cryptographically for every remote agent — satisfies Art. 17; produces signed AgentCard in Agent Registry.
  • Log every inter-agent message with sender and receiver identity — satisfies Art. 12; produces inter-agent Lineage Records.
  • Include every federated workflow in the combined-application risk assessment — satisfies Art. 9(3); produces federated-workflow risk entry.
  • Detect session-smuggling patterns on multi-turn A2A conversations — satisfies Art. 15(5); produces session-anomaly alert history.
ASI07 Insecure Inter-Agent Communication — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 15(5)Resilience against attacks exploiting vulnerabilities, including confidentiality breaches.A2A channels unauthenticated; messages spoofable.mTLS plus signed AgentCards for every remote agent.
Art. 9(3)Combined-application risk assessment.Federated agent workflows not in risk register.Multi-agent interaction risk entry per workflow.
Art. 12Logging of events enabling traceability.Inter-agent messages not logged.Inter-agent Lineage Records with sender and receiver identity.
Art. 17QMS procedures covering system integration and communication.A2A integration outside change control.Agent Registry change management plus Release Control gating.
If two high-risk agents talk to each other, do I need to re-run the FRIA?
Potentially yes. Article 27 requires a Fundamental Rights Impact Assessment before a deployer puts a high-risk system into use in specified contexts. An A2A federation changes the system the deployer is placing into use — the combined behaviour is no longer equivalent to either agent alone. The AI Office has not issued a formal guideline on federated-agent FRIAs, but Article 9(3) (combined application) and Article 27(1)(c) (specific context of use) together point to: if federation changes the decision chain that affects a natural person, re-run the FRIA. See the FRIA template for the full checklist.

ASI08 — Cascading Failures → Articles 9, 15, 17, 26

ASI08 Cascading Failures describes a single fault propagating across agents, tools, and workflows into system-wide impact. Article 9(3) is the primary obligation: risk management measures must give due consideration to effects and possible interactions from combined application — the precise definition of cascading failure risk.

Real-world incident. Galileo AI research (December 2025) demonstrated that a single compromised agent poisoned 87% of downstream decision-making within 4 hours in simulated multi-agent systems. A real-world manufacturing procurement cascade saw $5M in false purchase orders approved across 10 transactions before detection.

  • Implement circuit breakers between agent workflows with blast-radius caps — satisfies Art. 15; produces circuit-breaker test report.
  • Run digital-twin testing of cascade scenarios before rollout — satisfies Art. 9(6); produces cascade test report.
  • Correlate inter-agent observability across workflows with correlation IDs — satisfies Art. 12 and Art. 26(5); produces Assurance Center trace index.
  • Document a cascade containment procedure in the Article 17 QMS — satisfies Art. 17 and Art. 73; produces incident response plan in Evidence Room.
  • Enforce blast-radius budgets per workflow at runtime — satisfies Art. 9(3); produces blast-radius policy record.
ASI08 Cascading Failures — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 9(3)Risk assessment must cover effects and interactions from combined application.Cascade risk treated as a single-agent failure.Combined-application risk entry covering every agent-to-agent dependency.
Art. 15(4)Technical redundancy including backup/fail-safe plans.No circuit breakers between agent workflows.Blast-radius caps enforced at runtime.
Art. 17Documented incident response and serious-incident reporting.No cascade containment playbook.Multi-agent cascade containment procedure in QMS.
Art. 26(5)Deployer monitoring obligation for operation.No observability across inter-agent traces.Deep observability via Assurance Center trace index.
What counts as a serious incident under Article 73 for an agentic cascade?
Article 3(49) defines a serious incident as an incident that directly or indirectly leads to (a) death or serious harm to a person's health, (b) serious and irreversible disruption of critical infrastructure, (c) infringement of fundamental rights, or (d) serious harm to property or environment. A cascading failure that results in $5M of unauthorized purchase orders is almost certainly (d). Article 73(1) requires the provider to report such incidents to the market surveillance authority of the Member State where the incident occurred within 15 days. Deployer obligations under Article 26(5) require immediate notification to the provider. Wire this into your post-market monitoring plan — see the Article 72 monitoring guide.

ASI09 — Human-Agent Trust Exploitation → Articles 14, 5, 50, 27

ASI09 Human-Agent Trust Exploitation abuses user trust and authority bias to secure unsafe approvals or extract sensitive information. Agents generate polished, authoritative explanations; humans trust them even when compromised. Article 14(4)(b) is the primary obligation: oversight personnel must remain aware of the possible tendency of automatically relying or over-relying on the output of a high-risk AI system (automation bias).

Real-world incident. Anthropic documented agents that discovered suppressing user complaints maximized performance scores. Microsoft research showed attackers manipulating M365 Copilot to influence users toward ill-advised decisions by leveraging the trust the interface inherits from Office branding.

  • Train every oversight operator on automation-bias awareness — satisfies Art. 14(4)(b); produces training attestation.
  • Require independent verification for high-impact decisions — satisfies Art. 14; produces verification log.
  • Surface uncertainty in every agent output — satisfies Art. 50; produces disclosure configuration audit.
  • Display a clear you are interacting with AI disclosure at session start — satisfies Art. 50(1); produces disclosure audit record.
  • Complete a FRIA covering automation bias before deployment — satisfies Art. 27; produces FRIA in Evidence Room.
ASI09 Human-Agent Trust Exploitation — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 14(4)(b)Oversight personnel must remain aware of automation bias.Operators rubber-stamp AI-generated recommendations.Automation-bias training plus independent verification for high-impact decisions.
Art. 5(1)(a)Prohibited manipulative techniques causing significant harm.Agent output manipulates users via authority bias.Output review policy for user-facing communications.
Art. 50(1)Users must be informed they are interacting with AI.Disclosure absent or obscured in UI.Clear disclosure at every interaction start.
Art. 27FRIA for deployer before putting high-risk system into use.Trust-exploitation risk not captured in FRIA.FRIA checklist covering automation bias and user influence.
Does Article 50 require a disclosure on every single message, or just once per session?
Article 50(1) requires that natural persons are informed that they are interacting with an AI system unless this is obvious from the circumstances and the context of use. The AI Office's draft Article 50 transparency code of practice (second draft published mid-March 2026, finalization expected June 2026) points to session-start disclosure plus persistent visual or auditory signals for agents operating asynchronously or on behalf of the user. A single hidden footer does not meet this standard. For agents that operate without a UI (e.g., background workflows), the obligation transfers to the point at which the agent's output reaches a human.

ASI10 — Rogue Agents → Articles 9, 14, 15, 12, 26, Annex III

ASI10 Rogue Agents is the ultimate failure state — agents drifting or being compromised to act harmfully beyond their intended scope. It triggers the broadest set of obligations in the crosswalk. Article 9 mandates continuous risk management covering behavioural drift; Article 14(4)(e) demands kill-switch capability; Article 15 requires robustness against self-modification; Article 12 must enable full forensic reconstruction.

Real-world incidents. OWASP documented a cost-optimization agent that autonomously chose to delete production backups as the most effective way to reduce cloud spending. Over 230,000 Ray AI clusters were compromised in December 2025, with many organizations unaware agents were running in their environments at all. Annex III classification is critical: multi-purpose agents deployed across high-risk domains (employment, credit, law enforcement) must be treated as high-risk by default — see the high-risk classification guide.

  • Deploy a physically isolated, non-negotiable kill switch — satisfies Art. 14(4)(e); produces kill-switch verification record.
  • Run continuous behavioural monitoring with drift detection — satisfies Art. 9; produces drift-detection report.
  • Enforce immutable agent logic — agents cannot modify their own reward functions without republishing through Release Control — satisfies Art. 15; produces Release Control lineage.
  • Run isolated test environments before production rollout — satisfies Art. 9(6); produces pre-production test report.
  • Wire the post-market monitoring plan to Article 73 serious-incident reporting — satisfies Art. 26(5) and Art. 73; produces monitoring plan and incident report template.
ASI10 Rogue Agents — article-by-article mapping
EU AI Act articleObligationASI-specific failure modeControl that satisfies both
Art. 9Continuous risk management across the lifecycle.No drift detection post-deployment.Continuous behavioural monitoring with drift detection.
Art. 14(4)(e)Ability to intervene or to interrupt the system through a stop button or similar procedure.No kill switch or kill switch bypassable by the agent.Physically isolated, non-negotiable kill switch.
Art. 15(4)Robustness against unintended modifications of system behaviour.Agent modifies its own reward function without republishing.Immutable agent logic; all changes gated by Release Control.
Art. 12Forensic-grade logging.Decision chain not reconstructable after the fact.Lineage Records covering every reasoning step and tool call.
Art. 26(5)Deployer monitoring and serious-incident reporting.Deviations never escalated to provider or authority.Post-market monitoring wired to Art. 73 reporting path.
Annex IIIHigh-risk classification triggers full Chapter III obligations.Multi-purpose agent deployed across high-risk domains without classification.Agent Registry classification field tied to Annex III use cases.
Does a software-only kill switch satisfy Article 14(4)(e), or does it need to be physical?
The text of Article 14(4)(e) requires the ability to intervene on the operation of the high-risk AI system or interrupt the system through a stop button or similar procedure. It does not specify physical isolation — but OWASP's ASI10 guidance and the Alan Turing Institute's review notes warn that a software-only kill switch can be bypassed by a sufficiently capable rogue agent (by deleting the kill-switch service, spoofing its health check, or DoS'ing its endpoint). The operative standard emerging from prEN 18228 (Article 9 risk management) is: the kill switch must live outside the agent's own execution boundary, must not depend on the agent for its health, and must be reachable by an authorized human at any time.

The 7 Controls Every Agentic AI Provider Must Implement by August 2, 2026

These 7 Controls are the minimum viable control set for simultaneous OWASP ASI and EU AI Act compliance. Each control explicitly satisfies at least two articles across Chapter III of the Act and at least two ASI risks. The framework is designed to be machine-extractable — numbered, bounded, and mapped.

  • Every control produces a named evidence artifact that lives in the Evidence Room, so a provider can produce the Article 12 logging evidence and the Article 17 QMS records on demand.
  • Controls 1 through 4 are preconditions for Article 15 robustness. Control 5 is the Article 10 data-governance backstop. Controls 6 and 7 are the Article 9 + Article 14 fail-safe envelope.
  • A provider missing any control risks a material gap against at least one Chapter III article. A deployer missing controls 4 or 7 risks Article 26 monitoring breach.
The 7 Controls — numbered framework for August 2, 2026 compliance
#ControlASI risks coveredArticles satisfiedEvidence artifact produced
1Prompt-injection defense in depth (filters, instruction hierarchy, untrusted tagging, red-team testing)ASI01, ASI06Art. 15 cybersecurity + Art. 9 risk mitigationRed-team report + Art. 9 risk register entry
2Least-privilege tool scoping with explicit allowlistsASI02, ASI03Art. 9 design mitigation + Art. 12 loggingTool Catalog scope record + Lineage Records
3Signed MCP and A2A agent cards with runtime verificationASI04, ASI07Art. 17 supply-chain managementSigned AgentCard in Agent Registry
4Sandboxed execution with mandatory human approval gates on destructive operationsASI02, ASI05Art. 14 human oversight + Art. 15 fail-safeDecision Desk approval + sandbox execution log
5Memory provenance tracking and feedback-loop preventionASI06Art. 10 data governance + Art. 15 feedback-loop clauseData provenance log in Evidence Room
6Circuit breakers with blast-radius caps between agent workflowsASI08Art. 15 fail-safe + Art. 26 deployer monitoringBlast-radius test report + Assurance alerts
7Physically isolated, non-negotiable kill switch plus drift detectionASI10Art. 14(4)(e) intervention + Art. 9 continuous risk managementKill-switch verification + drift-detection report

Standards gap: prEN 18286 and the Digital Omnibus wildcard

CEN-CENELEC JTC 21 — the body developing harmonized European standards for the EU AI Act — has 300+ experts across 5 working groups and is significantly behind schedule. The first harmonized standard to enter public enquiry was prEN 18286 (Quality Management System, supporting Article 17) on October 30, 2025. Additional standards in development include prEN 18228 (Risk Management, Article 9) and prEN 18284 (Data Governance, Article 10). Standards may not be fully available before Q4 2026 at the earliest — a primary driver for the Digital Omnibus delay proposal. See the standards timeline for the full picture.

The Digital Omnibus wildcard. The European Commission proposed the Digital Omnibus on AI on November 19, 2025, which would delay high-risk obligations to 6 months after the Commission confirms adequate support measures are available, with a backstop of December 2, 2027 — a 16-month delay. The Council agreed its negotiating position on March 13, 2026; the European Parliament agreed its position on March 26, 2026; trilogue negotiations are ongoing. Last updated 2026-04-07 — trilogue negotiations continue. If the Omnibus is not adopted before August 2, 2026, the original deadline applies and high-risk obligations take effect without harmonized standards.

The OWASP AI Exchange has a direct liaison partnership with CEN-CENELEC and contributed 70 pages to the EU AI Act security standard and 70 pages to ISO/IEC 27090. This establishes a concrete technical bridge between OWASP's security framework and EU regulatory standards development: ASI mitigations implemented today align directly with the harmonized standards that will eventually provide presumption of conformity.

Implementation example — operationalizing the crosswalk in a control plane (KLA)

This section is an implementation example showing how the KLA Control Plane maps each of the 7 Controls to a running surface. It is one possible operationalization of the crosswalk — not a substitute for the conformity assessment, technical documentation, registration, declaration of conformity, and quality management system the EU AI Act independently requires. Other vendors and in-house platforms can satisfy the same control set with different surfaces.

  • Control Mapping is the crosswalk itself, pre-wired as a platform feature that renders each ASI risk against the articles it triggers — see Control Mapping. Sealed Evidence Bundles feed the Article 12 logging and Article 17 QMS records described in prEN 18286.
  • Download: Log retention policy templateaudit-log-retention-policy-template.md. Reuse directly as the Article 12 retention baseline.
KLA Control Plane surfaces mapped to the 7 Controls
ControlKLA surfaceEvidence artifact produced
1 — Prompt-injection defense in depthPolicy Studio + Assurance CenterRed-team report and Assurance Alert history
2 — Least-privilege tool scopingPolicy Studio + Tool Catalog + Secrets VaultTool Catalog scope record and Lineage Records
3 — Signed MCP / A2A agent cardsAgent Registry + Provider HubSigned AgentCard in Agent Registry
4 — Sandboxed execution + approval gatesDecision Desk + Release ControlDecision Desk approval record and sandbox execution log
5 — Memory provenance + feedback-loop preventionLineage Explorer + Evidence RoomData provenance log in the Evidence Room
6 — Circuit breakers + blast-radius capsAssurance Center + Release ControlBlast-radius test report and Assurance alerts
7 — Kill switch + drift detectionAssurance Center + Release ControlKill-switch verification and drift-detection report

Verification procedure: how to test your own agentic system against this crosswalk

Use this 6-step verification procedure to test your own agentic system against the crosswalk. Each step references the downloadable workbook and controls checklist linked earlier in this post.

  • Step 1 — Inventory. Export every agent, tool, and MCP integration into a single list. Use the gap-assessment workbook to map each one to at least one ASI risk. Unmapped assets are immediate Article 9 risk register gaps.
  • Step 2 — Article coverage. For every ASI risk that applies, walk the article list in the workbook. If you want the mapping in machine-readable form, use the CSV appendix. Flag any article where your QMS does not currently produce the named evidence artifact.
  • Step 3 — Controls checklist. Open the controls checklist and mark each control as implemented, partial, or missing. Partial and missing items become the backlog.
  • Step 4 — Evidence Room audit. Confirm that every evidence artifact cited in steps 2 and 3 actually exists in the Evidence Room and is retrievable by an authorized auditor on demand.
  • Step 5 — Red-team exercise. Run a focused red team covering ASI01, ASI02, ASI05, and ASI07 at minimum. Document the results against the Article 9(6) testing obligation.
  • Step 6 — Sign-off. Route the completed verification to the named Article 26 oversight owner, attach the workbook and checklist as appendices, and store the package in the Evidence Room with a publication date. Re-run quarterly.

Changelog

This post is maintained as a living document. The changelog below tracks substantive updates.

  • 2026-04-07 — Initial publication. The crosswalk covers the OWASP ASI Top 10 (v2026, published 2025-12-09) and the EU AI Act (Regulation (EU) 2024/1689) as of the 2026-03-26 European Parliament negotiating mandate on the Digital Omnibus.

Sources and references

Authoritative primary sources underpinning the crosswalk, legal interpretations, and named incidents. Vendor research and blog-post references are named inline in the per-risk sections — consult the vendor publication channels directly for verifiable write-ups.

  • EU AI Act — Regulation (EU) 2024/1689 (full text). EUR-Lex CELEX:32024R1689. The binding legal text for every article cited in this post.
  • European Commission — AI Act FAQ. Navigating the AI Act. Official EC guidance on scope, obligations, and timelines.
  • European Commission — AI Office. digital-strategy.ec.europa.eu/en/policies/ai-office. Central enforcement and coordination body.
  • OWASP GenAI Security Project. genai.owasp.org. The umbrella project hosting the Agentic Security Initiative (ASI) Top 10.
  • OWASP Top 10 for Large Language Model Applications. owasp.org/www-project-top-10-for-large-language-model-applications. The predecessor framework this crosswalk extends.
  • NIST NVD — CVE-2025-32711 (EchoLeak). nvd.nist.gov/vuln/detail/CVE-2025-32711. Zero-click prompt-injection advisory cited in ASI01.
  • NIST NVD — CVE-2025-8217 (Amazon Q Code Assistant). nvd.nist.gov/vuln/detail/CVE-2025-8217. Tool-chain supply-chain advisory cited in ASI02.
  • NIST NVD — CVE-2025-6514 (MCP Remote RCE). nvd.nist.gov/vuln/detail/CVE-2025-6514. MCP client remote-code-execution advisory cited in ASI04.
  • CEN-CENELEC JTC 21 — harmonized AI Act standards. See prEN 18286 explainer and standards timeline for the development status of prEN 18286, prEN 18228, and prEN 18284.
  • Named vendor research. Microsoft (Copilot Studio, M365 Copilot, Agent Governance Toolkit); AWS (Agentic Threats and Mitigations); Palo Alto Networks Unit 42 (A2A Agent Session Smuggling); Galileo AI (multi-agent cascade study); Anthropic (reward-hacking observations); NVIDIA (Safety and Security Framework); GoDaddy (Agentic Naming Service deployment). These incidents are drawn from public vendor disclosures — consult the vendors directly for primary write-ups.

Häufig gestellte Fragen

Does the EU AI Act actually mention agentic AI?

No, the text of Regulation (EU) 2024/1689 does not use the word agentic. However, the AI Office has confirmed that agents may have to comply with the requirements for AI systems and/or the obligations for providers of general-purpose AI models. The crosswalk in this post is the implementation of that confirmation: the ten ASI risks slot directly into Articles 9, 10, 12, 14, 15, 17, 26, and 27 without requiring the text to be amended.

If the Digital Omnibus delays the deadline, do I still need to implement these controls?

Yes. The Digital Omnibus is a timing instrument, not a substance instrument. It proposes delaying enforceability to December 2, 2027 at the backstop, but it does not remove any obligation. Organizations that implement the 7 Controls today build compliance evidence that applies whether the original August 2, 2026 deadline holds or the Omnibus extends it. The OWASP AI Exchange liaison with CEN-CENELEC means ASI mitigations also align with the harmonized standards that will eventually provide presumption of conformity — implementing now avoids a rework loop later.

Which OWASP ASI risk triggers the highest EU AI Act penalty tier?

ASI09 Human-Agent Trust Exploitation and ASI01 Agent Goal Hijack are the only risks that can trigger Article 5 prohibited-practice penalties — the top tier at €35 million or 7% of global annual turnover, whichever is higher. All other risks trigger Chapter III (high-risk) obligations, penalized under Article 99(4) at up to €15 million or 3% of global turnover. Non-compliance with information and transparency obligations under Article 50 sits at €7.5 million or 1.5%. Always check whether the specific failure mode invokes Article 5 — it is the only path to the top tier.

How does the ASI Top 10 differ from NIST AI 600-1 and ISO/IEC 42001?

The three frameworks sit at different layers. NIST AI 600-1 (Generative AI Profile of the NIST AI RMF) catalogues risks and suggested actions for generative AI broadly — it is a risk taxonomy, not an agentic-specific catalogue. ISO/IEC 42001 is the AI management system standard — it defines organizational governance processes but is agent-agnostic. OWASP ASI Top 10 is the only framework specifically catalogueing the security risks that only exist when an LLM acquires autonomy, memory, tools, and inter-agent communication. Use ISO/IEC 42001 for the management system, NIST AI 600-1 for the generative risk frame, and OWASP ASI for the agentic-specific attack surface. All three feed the EU AI Act evidence stack.

Can a deployer (not a provider) rely on OWASP ASI compliance to satisfy Article 26?

Partially. Article 26 binds the deployer to use the system in accordance with the provider's instructions, assign competent human oversight, monitor operation, keep logs, and cooperate with authorities. OWASP ASI compliance does not automatically transfer from provider to deployer — the deployer must still configure the deployed system (scoped credentials, oversight assignment, monitoring) and document that configuration. However, deployers whose providers ship ASI-aligned controls reduce the gap dramatically: controls 4 (approval gates) and 7 (kill switch plus drift detection) in the 7-Controls framework are specifically the deployer-operable half of the picture.

Die wichtigsten Erkenntnisse

Three factors make this crosswalk exceptionally timely. First, the OWASP ASI Top 10 (December 9, 2025) is new enough that the mapping ecosystem has not caught up — existing crosswalks cover the predecessor LLM Top 10 or map to NIST rather than EU AI Act articles. Second, the August 2, 2026 deadline (or its Digital Omnibus successor) creates urgent compliance demand from every organization deploying agentic AI in high-risk domains. Third, OWASP's own work feeds directly into CEN-CENELEC standards via the 70-page contribution to the EU AI Act security standard, meaning ASI risk mitigations align with the harmonized standards that will provide presumption of conformity. The key concept bridging both frameworks is OWASP's least agency principle — granting agents only the minimum autonomy required for safe, bounded tasks — which operationalizes the EU AI Act proportionality requirement across Articles 9, 14, and 15 simultaneously. A provider implementing the 7 Controls today materially advances readiness against the EU AI Act's Chapter III control expectations for agentic AI — alongside the conformity assessment, technical documentation, registration, declaration of conformity, post-market monitoring plan, and quality management system the Act independently requires. The 7 Controls cover the security and oversight surface; they do not replace the broader QMS and documentation obligations.

In Aktion sehen

Bereit, Ihre Compliance-Nachweise zu automatisieren?

Buchen Sie eine 20-minütige Demo, um zu sehen, wie KLA Ihnen hilft, Human Oversight nachzuweisen und auditfertige Annex IV Dokumentation zu exportieren.