
EU AI Act Article 9: risk management for AI agents

Authensor

Article 9 of the EU AI Act requires providers of high-risk AI systems to establish and maintain a risk management system. For AI agents, this means systematically identifying what can go wrong and implementing controls to reduce those risks.

What Article 9 requires

The risk management system must:

  1. Identify and analyze known and foreseeable risks associated with the AI system
  2. Estimate and evaluate the risks that may emerge when the system is used as intended and under conditions of reasonably foreseeable misuse
  3. Adopt appropriate risk management measures to address identified risks
  4. Test the system to ensure risk management measures are effective

This is a continuous process. The risk management system must be updated throughout the AI system's lifecycle, not created once and forgotten.

Risk identification for AI agents

AI agents introduce specific risks that traditional software does not:

  • Prompt injection: Attackers manipulate the agent through crafted input
  • Tool misuse: The agent uses legitimate tools in harmful ways
  • Privilege escalation: The agent gains access to resources beyond its authorization
  • Cascading failures: A problem in one agent propagates to others
  • Data exfiltration: The agent leaks sensitive information through tool calls
  • Uncontrolled autonomy: The agent takes actions beyond its intended scope

For each risk, document the likelihood, potential impact, and mitigation measures.
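One way to keep that documentation machine-readable is a simple risk register. The sketch below is illustrative, not a prescribed schema: the `Risk` class, the three-point `Level` scale, and the likelihood-times-impact scoring are assumptions you should adapt to your own assessment methodology.

```python
from dataclasses import dataclass, field
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class Risk:
    name: str
    likelihood: Level
    impact: Level
    mitigations: list = field(default_factory=list)

    @property
    def severity(self) -> int:
        # Simple likelihood x impact score; real assessments may weight differently
        return self.likelihood.value * self.impact.value

# Register the agent-specific risks identified above
register = [
    Risk("prompt_injection", Level.HIGH, Level.HIGH, ["content scanning"]),
    Risk("tool_misuse", Level.MEDIUM, Level.HIGH, ["argument-level policy rules"]),
    Risk("data_exfiltration", Level.MEDIUM, Level.HIGH, ["output filtering"]),
]

# Review highest-severity risks first
for risk in sorted(register, key=lambda r: r.severity, reverse=True):
    print(risk.name, risk.severity)
```

Sorting by severity gives a defensible review order, and the register doubles as input to the Article 11 technical file discussed below.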

Implementing risk management measures

Map each identified risk to a concrete technical control:

| Risk | Mitigation | Implementation |
|------|-----------|----------------|
| Prompt injection | Content scanning | Aegis with prompt injection detectors |
| Tool misuse | Argument-level policy rules | Policy engine with `when` conditions |
| Privilege escalation | Least-privilege policies | Deny-by-default policy, explicit allow rules |
| Data exfiltration | Output filtering | Aegis scanning on outbound tool calls |
| Uncontrolled autonomy | Rate limits, approval workflows | Policy escalation rules, budget constraints |
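To make the deny-by-default and argument-level rows concrete, here is a minimal policy evaluator sketch. The rule shape, field names, and `when` predicates are assumptions for illustration; they are not Authensor's actual policy format.

```python
# Minimal deny-by-default policy evaluator (illustrative rule format).
DENY, ALLOW = "deny", "allow"

policy = [
    # Explicit allow: read-only file access under an approved directory
    {"tool": "read_file",
     "when": lambda args: args["path"].startswith("/data/approved/"),
     "effect": ALLOW},
    # Argument-level rule: outbound email only to the company domain
    {"tool": "send_email",
     "when": lambda args: args["to"].endswith("@example.com"),
     "effect": ALLOW},
]

def evaluate(tool: str, args: dict) -> str:
    for rule in policy:
        if rule["tool"] == tool and rule["when"](args):
            return rule["effect"]
    return DENY  # deny-by-default: anything not explicitly allowed is blocked

print(evaluate("read_file", {"path": "/data/approved/report.csv"}))  # allow
print(evaluate("send_email", {"to": "attacker@evil.test"}))          # deny
```

The key design point is the final `return DENY`: a tool call that matches no rule is blocked, which is what makes privilege escalation a policy violation rather than a silent success.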

Testing effectiveness

Article 9 requires testing that risk management measures actually work. For AI agents, this means:

  • Policy unit tests: Verify that each rule blocks what it should and allows what it should
  • Red team exercises: Attempt prompt injection and tool misuse against the deployed system
  • Shadow policy evaluation: Test new policies against production traffic before enforcing them
  • Behavioral monitoring: Track whether the safety measures are triggering as expected
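Policy unit tests from the first bullet can be sketched as ordinary test cases. The `evaluate()` function below is a hypothetical stand-in for your real policy engine's API; the point is the test structure, which checks both directions (blocks what it should, allows what it should) plus the deny-by-default fallback.

```python
import unittest

# Hypothetical policy engine under test (stand-in for your real API)
ALLOWED_DOMAINS = ("@example.com",)

def evaluate(tool, args):
    if tool == "send_email" and args["to"].endswith(ALLOWED_DOMAINS):
        return "allow"
    return "deny"

class PolicyTests(unittest.TestCase):
    def test_allows_internal_email(self):
        # Positive case: the rule must not over-block legitimate use
        self.assertEqual(evaluate("send_email", {"to": "alice@example.com"}), "allow")

    def test_blocks_external_email(self):
        # Negative case: argument-level rule blocks exfiltration targets
        self.assertEqual(evaluate("send_email", {"to": "bob@attacker.test"}), "deny")

    def test_denies_unknown_tools_by_default(self):
        # Deny-by-default: unlisted tools are never allowed implicitly
        self.assertEqual(evaluate("delete_database", {}), "deny")

if __name__ == "__main__":
    unittest.main()
```

Run these in CI on every policy change; a failing test is documented evidence, for Article 9 purposes, that a control regressed before it reached production.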

Documentation

Document every identified risk, the assessment of its severity, the chosen mitigation, and the testing results. This documentation is part of the technical file required under Article 11 and will be reviewed during conformity assessment.
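A per-risk record can be kept as structured data so it feeds directly into the technical file. The field names and example values below are illustrative, not a schema mandated by the Act.

```python
import json

# Hypothetical risk-log entry for the Article 11 technical file.
# Field names and values are illustrative examples only.
entry = {
    "risk": "prompt_injection",
    "severity": "high",
    "mitigation": "content scanning on inbound tool arguments",
    "test": {
        "type": "red team exercise",
        "result": "mitigation triggered on injected payloads",
    },
}

print(json.dumps(entry, indent=2))
```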
