Authensor is an open-source safety stack for AI agents. It provides policy evaluation, content scanning, approval workflows, behavioral monitoring, and cryptographic audit trails. This guide gets you from zero to enforcing your first policy.
For TypeScript:
pnpm add @authensor/sdk
For Python:
pip install authensor
Create a file called policy.yaml in your project root:
version: "1"
rules:
  - tool: "*"
    action: allow
  - tool: "shell.execute"
    action: block
    when:
      args.command:
        matches: "rm -rf|shutdown|reboot"
    reason: "Destructive shell commands are blocked"
  - tool: "*.delete"
    action: escalate
    reason: "Delete operations require human approval"
This policy allows all tool calls by default, blocks destructive shell commands outright, and escalates any delete operation to a human reviewer.
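To make the precedence concrete, here is an illustrative sketch of how rules like these could be evaluated. This is not the SDK's actual implementation; the names (`Rule`, `decide`, `globToRegExp`) are hypothetical, and it assumes that a later, more specific rule overrides the catch-all, matching the behavior described above:

```typescript
type Action = 'allow' | 'block' | 'escalate';

interface Rule {
  tool: string;       // exact tool name or glob like "*" / "*.delete"
  action: Action;
  matches?: RegExp;   // optional pattern over the command argument
  reason?: string;
}

// The three rules from policy.yaml, in order.
const rules: Rule[] = [
  { tool: '*', action: 'allow' },
  {
    tool: 'shell.execute',
    action: 'block',
    matches: /rm -rf|shutdown|reboot/,
    reason: 'Destructive shell commands are blocked',
  },
  {
    tool: '*.delete',
    action: 'escalate',
    reason: 'Delete operations require human approval',
  },
];

// Convert a glob pattern ("*.delete") into an anchored RegExp.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, '\\$&')
    .replace(/\*/g, '.*');
  return new RegExp(`^${escaped}$`);
}

// Later matching rules win, so the catch-all "allow" is the default.
function decide(tool: string, command = ''): { action: Action; reason?: string } {
  let result: { action: Action; reason?: string } = { action: 'block' };
  for (const rule of rules) {
    if (!globToRegExp(rule.tool).test(tool)) continue;
    if (rule.matches && !rule.matches.test(command)) continue;
    result = { action: rule.action, reason: rule.reason };
  }
  return result;
}
```

With these rules, `decide('shell.execute', 'ls -la')` resolves to allow, while `decide('files.delete')` resolves to escalate.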
import { createGuard } from '@authensor/sdk';
import policy from './policy.yaml';
const guard = createGuard({ policy });
// In your agent's tool execution loop:
const decision = guard('shell.execute', { command: 'ls -la' });
// decision.action === 'allow'
const blocked = guard('shell.execute', { command: 'rm -rf /' });
// blocked.action === 'block'
// blocked.reason === 'Destructive shell commands are blocked'
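The third decision kind, escalate, pauses the call until a human decides. A minimal sketch of routing all three decisions inside a tool loop; `requestApproval` is a hypothetical stand-in for your approval workflow (a Slack message, a dashboard ticket), and it is kept synchronous here for brevity where a real flow would be asynchronous:

```typescript
type Decision = { action: 'allow' | 'block' | 'escalate'; reason?: string };

function runTool(
  decision: Decision,
  execute: () => string,
  requestApproval: () => boolean // hypothetical: ask a human, return the verdict
): string {
  switch (decision.action) {
    case 'allow':
      return execute();
    case 'block':
      // Surface the policy's reason back to the agent instead of executing.
      return `refused: ${decision.reason ?? 'blocked by policy'}`;
    case 'escalate':
      // In production this would await a human reviewer asynchronously.
      return requestApproval()
        ? execute()
        : `refused: ${decision.reason ?? 'approval denied'}`;
  }
}
```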
Add Aegis to scan tool arguments for prompt injection, credential exposure, and PII:
import { createGuard } from '@authensor/sdk';

const guard = createGuard({
  policy,
  aegis: { enabled: true }
});
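To picture what this kind of scanning looks for, here are a few deliberately crude stand-in detectors for the three categories. This is not Aegis's implementation; real detectors are far more sophisticated than these example regexes:

```typescript
// Simplified stand-ins for the three scan categories.
const scanners: { label: string; pattern: RegExp }[] = [
  // Common prompt-injection phrasing
  { label: 'prompt-injection', pattern: /ignore (all )?previous instructions/i },
  // Long opaque tokens that look like API keys
  { label: 'credential', pattern: /\b(sk|ghp|AKIA)[A-Za-z0-9_-]{16,}\b/ },
  // Email addresses as a simple PII proxy
  { label: 'pii-email', pattern: /[\w.+-]+@[\w-]+\.[\w.]+/ },
];

// Return the labels of every scanner that fires on any argument value.
function scanArgs(args: Record<string, string>): string[] {
  const findings: string[] = [];
  for (const value of Object.values(args)) {
    for (const { label, pattern } of scanners) {
      if (pattern.test(value)) findings.push(label);
    }
  }
  return findings;
}
```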
Add Sentinel to track behavioral patterns and detect anomalies:
const guard = createGuard({
  policy,
  aegis: { enabled: true },
  sentinel: { enabled: true }
});
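One behavioral signal a monitor of this kind can track is a sudden burst of calls to the same tool. The sketch below is self-contained and illustrative only, not Sentinel's actual logic; the window size and threshold are arbitrary example values:

```typescript
// Flags a tool as anomalous when it is called more than `maxCalls`
// times within a sliding `windowMs` look-back window.
class RateMonitor {
  private calls = new Map<string, number[]>();

  constructor(
    private windowMs = 60_000, // look-back window in milliseconds
    private maxCalls = 20      // calls per window before flagging
  ) {}

  // Record a call; returns true if the tool now looks anomalous.
  record(tool: string, now = Date.now()): boolean {
    // Keep only timestamps still inside the window, then add this call.
    const recent = (this.calls.get(tool) ?? []).filter(t => now - t < this.windowMs);
    recent.push(now);
    this.calls.set(tool, recent);
    return recent.length > this.maxCalls;
  }
}
```

A monitor like this would sit alongside policy evaluation: the policy answers "is this call allowed?", while the behavioral layer answers "is this call normal for this agent?".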