This quickstart gets you enforcing safety policies on agent tool calls as fast as possible. No control plane, no database, no infrastructure. Just the SDK and a policy file.
Install the SDK:

```shell
pnpm add @authensor/sdk
```
Save this as `policy.yaml`:

```yaml
version: "1"
rules:
  - tool: "file.read"
    action: allow
  - tool: "file.write"
    action: escalate
    reason: "File writes need approval"
  - tool: "shell.*"
    action: block
    reason: "Shell access is disabled"
```
Create a guard from the policy and run tool calls through it:

```typescript
import { createGuard } from '@authensor/sdk';

const guard = createGuard({ policyPath: './policy.yaml' });

// This goes through
const read = guard('file.read', { path: '/tmp/data.json' });
console.log(read.action); // 'allow'

// This gets escalated
const write = guard('file.write', { path: '/etc/config', content: '...' });
console.log(write.action); // 'escalate'
console.log(write.reason); // 'File writes need approval'

// This gets blocked
const shell = guard('shell.execute', { command: 'whoami' });
console.log(shell.action); // 'block'
console.log(shell.reason); // 'Shell access is disabled'
```
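The guard only returns a decision; acting on it is up to your agent loop. One way to wire that up is a small dispatcher like the sketch below. The `dispatch`, `execute`, and `approve` names are illustrative placeholders for your own tool executor and review hook, not SDK APIs:

```typescript
type Action = 'allow' | 'escalate' | 'block';

interface Decision {
  action: Action;
  reason?: string;
}

// Act on a guard decision: run allowed tools, gate escalations behind a
// human-review hook, and refuse blocked calls.
function dispatch<T>(decision: Decision, execute: () => T, approve: () => boolean): T {
  if (decision.action === 'allow') {
    return execute();
  }
  if (decision.action === 'escalate') {
    if (approve()) return execute();
    throw new Error(`Escalation denied: ${decision.reason ?? 'unspecified'}`);
  }
  throw new Error(`Blocked: ${decision.reason ?? 'unspecified'}`);
}
```

In a real agent, `approve` would pause the run and wait for a reviewer rather than return synchronously; the branching logic stays the same.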
Install Aegis and enable it in your guard:

```shell
pnpm add @authensor/aegis
```

```typescript
const guard = createGuard({
  policyPath: './policy.yaml',
  aegis: { enabled: true }
});
```
Now every tool call is also scanned for prompt injection patterns, exposed credentials, and PII before the policy engine evaluates it. If Aegis detects a threat, the action is blocked regardless of what the policy says.
Every guard call generates a receipt. Receipts form a hash chain where each receipt includes the hash of the previous one:
```typescript
const decision = guard('file.read', { path: '/tmp/data.json' });
console.log(decision.receipt);
// {
//   id: "rec_abc123",
//   tool: "file.read",
//   action: "allow",
//   timestamp: "2026-01-15T10:30:00Z",
//   hash: "sha256:...",
//   previousHash: "sha256:..."
// }
```
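Because each receipt embeds the previous receipt's hash, tampering with any entry breaks every hash downstream. A minimal verification sketch, assuming the hash covers the receipt's fields plus `previousHash` joined in a fixed order (the actual canonicalization is the SDK's, not shown here):

```typescript
import { createHash } from 'node:crypto';

interface Receipt {
  id: string;
  tool: string;
  action: string;
  timestamp: string;
  hash: string;
  previousHash: string;
}

// Recompute a receipt's hash from its fields. The field order and "|"
// separator are assumptions made for this sketch.
function computeHash(r: Omit<Receipt, 'hash'>): string {
  const payload = [r.id, r.tool, r.action, r.timestamp, r.previousHash].join('|');
  return 'sha256:' + createHash('sha256').update(payload).digest('hex');
}

// A chain is valid when every receipt's hash recomputes correctly and
// links back to its predecessor's hash.
function verifyChain(chain: Receipt[]): boolean {
  return chain.every((r, i) => {
    const linked = i === 0 || r.previousHash === chain[i - 1].hash;
    return linked && r.hash === computeHash(r);
  });
}
```

Editing any field of any receipt, or reordering receipts, makes `verifyChain` return `false`, which is what makes the receipt log tamper-evident.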
Next, learn how to handle `escalate` decisions with human-in-the-loop review.