← Back to Learn
eu-ai-actcomplianceagent-safety

EU AI Act Article 15: cybersecurity requirements for AI agents

Authensor

Article 15 of the EU AI Act requires that high-risk AI systems achieve an appropriate level of accuracy, robustness, and cybersecurity. For AI agents, the cybersecurity requirement is particularly relevant because agents are exposed to adversarial attacks that traditional software is not.

What Article 15 requires

High-risk AI systems must be:

  • Resilient against attempts by unauthorized third parties to alter their use, outputs, or performance by exploiting system vulnerabilities
  • Designed to address risks arising from interaction with other AI systems, hardware, or software
  • Protected against attacks specific to AI systems, including data poisoning, adversarial examples, and confidentiality attacks

AI-specific attacks on agents

The Act explicitly calls out attacks specific to AI systems. For agents, these include:

Prompt injection: Manipulating the agent's behavior through crafted input. Both direct injection (user input) and indirect injection (embedded in documents, tool responses) are threats.

Data poisoning: Corrupting the data the agent uses for decisions. For agents with RAG, this means poisoning the vector store. For agents with persistent memory, this means corrupting stored facts.

Model extraction and inversion: Probing the agent to extract information about its system prompt, policies, or training data.

Adversarial examples: Crafting inputs that cause the agent to behave unexpectedly while appearing normal to humans.

Implementing cybersecurity for agents

| Threat | Defense | Implementation | |--------|---------|----------------| | Prompt injection | Content scanning | Aegis with prompt injection detectors | | Indirect injection | Inbound scanning | Scan tool responses and retrieved documents | | Tool manipulation | MCP gateway | Inspect all MCP traffic | | Privilege escalation | Least-privilege policies | Deny-by-default, explicit allow rules | | Data exfiltration | Output filtering | Scan outbound data for sensitive content | | Unauthorized access | Authentication | API key roles, rate limiting |

Defense in depth

Article 15's resilience requirement maps to a defense-in-depth strategy:

  1. Perimeter: Input scanning catches attacks before they reach the agent
  2. Enforcement: Policy engine blocks unauthorized actions regardless of the agent's intent
  3. Monitoring: Sentinel detects anomalous behavior patterns
  4. Audit: Receipt chain provides forensic evidence for investigation
  5. Response: Kill switch and policy hot-swapping for incident response

No single layer is sufficient. Article 15 requires that the system remain resilient even when individual defenses are bypassed.

Testing cybersecurity

The Act requires that cybersecurity measures are tested. For AI agents, this means:

  • Red team exercises that attempt prompt injection, tool misuse, and data exfiltration
  • Penetration testing of the control plane and MCP gateway
  • Adversarial testing of the content scanner against novel attack patterns
  • Regular vulnerability scanning of dependencies and infrastructure

Document all testing activities and results as part of the technical file required under Article 11.

Keep learning

Explore more guides on AI agent safety, prompt injection, and building secure systems.

View All Guides