SOC 2 is an audit framework based on the Trust Services Criteria (TSC) from the AICPA. When you deploy AI agents as part of your service, auditors will ask how you control, monitor, and log the agent's actions. This guide maps SOC 2 requirements to AI agent safety controls.
AI agents need access controls just like human users (the Security criteria). Document which tools each agent may call and the conditions under which each call is permitted.
Policy-based tool authorization provides the access control layer. Each policy documents what tools are allowed and under what conditions. Policies are versioned and changes are tracked.
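A policy engine like this can be sketched as a default-deny allow-list with per-tool conditions. The schema and names below are illustrative assumptions, not the product's actual policy format:

```python
# Minimal default-deny policy sketch. Tool names, conditions, and the
# schema itself are hypothetical examples, not a real policy format.
POLICY = {
    "version": "2024-05-01",
    "rules": {
        "read_file": {"allowed": True},
        "send_email": {"allowed": True, "max_recipients": 1},
        "delete_record": {"allowed": False},  # explicitly denied
    },
}

def authorize(policy: dict, tool: str, args: dict) -> bool:
    """Return True only if the tool is explicitly allowed and its conditions hold."""
    rule = policy["rules"].get(tool)
    if rule is None or not rule["allowed"]:
        return False  # default-deny: unknown tools are blocked
    if tool == "send_email":
        if len(args.get("recipients", [])) > rule["max_recipients"]:
            return False
    return True
```

During an audit walkthrough, calling `authorize` with a denied or unknown tool returns `False`, which directly demonstrates that unauthorized actions are blocked.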
Demonstrate that you monitor agent operation and can detect failures (the Availability criteria).
Show that agent actions are executed completely and accurately (the Processing Integrity criteria).
Demonstrate controls over the sensitive data the agent accesses (the Confidentiality criteria).
"How do you control what the AI agent can do?" Show them your policy files and explain the policy engine. Demonstrate that unauthorized actions are blocked.
"How do you log agent activity?" Show them the receipt chain. Demonstrate chain verification. Explain retention policies.
"How do you detect if something goes wrong?" Show them Sentinel dashboards. Explain EWMA and CUSUM anomaly detection. Show alert routing.
"How do you manage changes to agent permissions?" Show them policy version history. Explain shadow evaluation for testing changes. Show approval processes for policy updates.
For a SOC 2 Type II report, you need to provide evidence covering the entire audit period (typically 12 months), not a point-in-time snapshot: receipts, policy version history, and alert records. The control plane API provides exports for all of this data.
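Assembling an evidence package then reduces to filtering exported records to the audit window. This is a client-side sketch; the ISO-8601 `timestamp` field on each receipt is an assumed schema, not the API's documented format:

```python
from datetime import datetime

def evidence_for_period(receipts, period_start, period_end):
    """Select exported receipts whose timestamps fall inside the audit period.
    Assumes each receipt carries an ISO-8601 "timestamp" string (hypothetical)."""
    return [
        r for r in receipts
        if period_start <= datetime.fromisoformat(r["timestamp"]) <= period_end
    ]
```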