The OpenAI Agents SDK is a significant step forward for building production agents. Handoffs between agents, built-in tracing, structured tool definitions — the SDK removes a lot of boilerplate. What it does not remove is the fundamental problem that has existed since the first agent frameworks shipped: there is no built-in mechanism to define what an agent is allowed to do, no approval workflow for high-risk actions, and no audit trail that would satisfy a compliance review.
You are responsible for all of that. The question is whether you build it into each agent individually or put it in one place that covers everything.
The OpenAI Agents SDK lets you define tools as functions and attach them to an agent:
```typescript
import { Agent, tool, run } from "@openai/agents";
import { z } from "zod";

const transferFunds = tool({
  name: "transfer_funds",
  description: "Transfer funds between accounts",
  parameters: z.object({
    fromAccount: z.string(),
    toAccount: z.string(),
    amount: z.number(),
  }),
  execute: async ({ fromAccount, toAccount, amount }) => {
    // executes the transfer
    return { transactionId: "txn_123", status: "completed" };
  },
});

const financeAgent = new Agent({
  name: "finance-agent",
  model: "gpt-4o",
  tools: [transferFunds],
});

await run(financeAgent, "Transfer $500 from checking to savings");
```
Clean, readable code. The agent can now transfer any amount between any accounts, with no restrictions, no approval step, and no record of what it did. That is fine for a demo. It is not fine for production.
Install the Authensor SDK and the OpenAI adapter, then set your API key:

```shell
npm install @authensor/sdk @authensor/openai
export AUTHENSOR_API_KEY=your_api_key
```
Create a policy.yaml. This policy applies to your finance agent specifically.
```yaml
version: "1"
default: deny

rules:
  # Small transfers are auto-approved
  - action: transfer_funds
    principal: "agent:finance-agent"
    effect: allow
    conditions:
      - field: "args.amount"
        operator: lte
        value: 500

  # Large transfers require human sign-off
  - action: transfer_funds
    principal: "agent:finance-agent"
    effect: review
    conditions:
      - field: "args.amount"
        operator: gt
        value: 500
    approvalTimeout: "30m"
    reason: "Transfers over $500 require human approval"

  # Block transfers to external accounts
  - action: transfer_funds
    principal: "agent:finance-agent"
    effect: deny
    conditions:
      - field: "args.toAccount"
        operator: notMatches
        value: "^(checking|savings|money-market)-"

  # Balance reads are always allowed
  - action: read_balance
    principal: "agent:finance-agent"
    effect: allow

  # Everything else is denied
  - action: "*"
    principal: "*"
    effect: deny
```
Push the policy to the control plane:

```shell
npx authensor policy push policy.yaml
```
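To make the rule semantics concrete, here is a minimal sketch of how an evaluator over rules like these could work. This is an illustration under assumptions, not Authensor's actual engine: it assumes first-match-wins ordering, trailing-`*` glob patterns, and the four operators used above. All type and function names here are hypothetical.

```typescript
type Effect = "allow" | "deny" | "review";

interface Condition {
  field: string; // dotted path into the intent, e.g. "args.amount"
  operator: "lte" | "gt" | "in" | "notMatches";
  value: unknown;
}

interface Rule {
  action: string;    // exact name or trailing-* glob
  principal: string; // exact name or "*"
  effect: Effect;
  conditions?: Condition[];
}

interface Intent {
  action: string;
  principal: string;
  args: Record<string, unknown>;
}

// Resolve a dotted field path like "args.amount" against the intent.
function resolve(intent: Intent, path: string): unknown {
  return path.split(".").reduce<any>((obj, key) => obj?.[key], intent);
}

function conditionHolds(cond: Condition, intent: Intent): boolean {
  const actual = resolve(intent, cond.field);
  switch (cond.operator) {
    case "lte":
      return (actual as number) <= (cond.value as number);
    case "gt":
      return (actual as number) > (cond.value as number);
    case "in":
      return (cond.value as unknown[]).includes(actual);
    case "notMatches":
      return !new RegExp(cond.value as string).test(String(actual));
  }
}

// "*" matches anything; "delete_*" matches any action with that prefix.
function globMatch(pattern: string, value: string): boolean {
  if (pattern === "*") return true;
  if (pattern.endsWith("*")) return value.startsWith(pattern.slice(0, -1));
  return pattern === value;
}

// First matching rule wins; no match falls back to the default effect.
function evaluate(rules: Rule[], intent: Intent, defaultEffect: Effect = "deny"): Effect {
  for (const rule of rules) {
    if (!globMatch(rule.action, intent.action)) continue;
    if (!globMatch(rule.principal, intent.principal)) continue;
    if ((rule.conditions ?? []).every((c) => conditionHolds(c, intent))) {
      return rule.effect;
    }
  }
  return defaultEffect;
}
```

Under this sketch, a $300 transfer matches the first rule and is allowed, a $1,200 transfer falls through to the review rule, and anything no rule covers hits the `default: deny` floor.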
```typescript
import { Agent, tool, run } from "@openai/agents";
import { withAuthensor } from "@authensor/openai";
import { z } from "zod";

// Your tools stay exactly as they are
const transferFunds = tool({
  name: "transfer_funds",
  description: "Transfer funds between accounts",
  parameters: z.object({
    fromAccount: z.string(),
    toAccount: z.string(),
    amount: z.number(),
  }),
  execute: async ({ fromAccount, toAccount, amount }) => {
    return { transactionId: "txn_123", status: "completed" };
  },
});

const financeAgent = new Agent({
  name: "finance-agent",
  model: "gpt-4o",
  tools: [transferFunds],
});

// Wrap the agent — this is the only change
const guardedAgent = withAuthensor(financeAgent, {
  principal: "agent:finance-agent",
  apiKey: process.env.AUTHENSOR_API_KEY,
});

// Run it the same way
await run(guardedAgent, "Transfer $1200 from checking to savings");
```
The withAuthensor wrapper intercepts every tool call the SDK would make. Before the execute function runs, the intent is sent to the Authensor control plane for evaluation. The tool never fires unless the policy says it can.
- ALLOW — The tool call executes normally. An immutable receipt is written capturing the intent, the principal, the matched policy rule, and the outcome.
- DENY — The tool function is never called. The agent receives a structured error: "Action 'transfer_funds' denied by policy: external accounts not permitted". The denial is recorded. The agent can reason about the error and respond accordingly — usually by explaining to the user that it cannot perform the action.
- REVIEW — The intent is held in the approval queue. The agent is suspended. A notification goes to your configured approval channel (webhook, email, Slack). The reviewer sees the full intent: which agent, which tool, which arguments, the policy rule that triggered the review. They approve or reject. The result is returned to the agent, which continues from where it paused.
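Conceptually, the interception boils down to wrapping each tool's execute function behind an authorization call. The sketch below shows that shape under stated assumptions — `GuardClient`, `guardExecute`, and the response shape are hypothetical names, not Authensor's real API, and the review branch here just surfaces the pending state rather than truly suspending the agent:

```typescript
type Decision = "allow" | "deny" | "review";

// Hypothetical client interface: one authorize call per tool invocation.
interface GuardClient {
  authorize(intent: {
    action: string;
    principal: string;
    args: unknown;
  }): Promise<{ decision: Decision; reason?: string }>;
}

function guardExecute<A, R>(
  client: GuardClient,
  principal: string,
  action: string,
  execute: (args: A) => Promise<R>,
): (args: A) => Promise<R | { error: string }> {
  return async (args: A) => {
    const { decision, reason } = await client.authorize({ action, principal, args });
    switch (decision) {
      case "allow":
        // Policy passed: run the real tool.
        return execute(args);
      case "deny":
        // The tool never fires; the agent gets a structured error to reason about.
        return { error: `Action '${action}' denied by policy: ${reason ?? "not permitted"}` };
      case "review":
        // In the real system the call is held until a human decides;
        // this sketch only surfaces the pending state.
        return { error: `Action '${action}' is pending human approval` };
    }
  };
}
```

The key property is that the model-facing tool signature is unchanged, so the agent loop needs no modification — only the execute path gains a decision point.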
The OpenAI Agents SDK supports agent-to-agent handoffs. An orchestrator agent calls a subagent with a specific task. Authensor covers this too. Each agent in the chain has its own principal identifier, and each tool call is evaluated against the policy for that specific principal.
```yaml
rules:
  # Orchestrator can delegate to subagents
  - action: handoff
    principal: "agent:orchestrator"
    effect: allow
    conditions:
      - field: "args.targetAgent"
        operator: in
        value: ["agent:research", "agent:writer", "agent:reviewer"]

  # But the orchestrator cannot directly execute destructive tools
  - action: "delete_*"
    principal: "agent:orchestrator"
    effect: deny

  # Only the specifically authorized subagent can
  - action: "delete_draft"
    principal: "agent:writer"
    effect: allow
```
This prevents a compromised orchestrator from escalating privileges by issuing tool calls it should not be able to make. Each agent's authority is scoped to what it actually needs.
Every authorization decision produces a receipt. Receipts are hash-chained: each receipt contains a hash of the previous receipt, making it tamper-evident. You can query them through the control plane API or pull them in bulk for compliance exports.
```typescript
import { AuthensorClient } from "@authensor/sdk";

const client = new AuthensorClient({
  apiKey: process.env.AUTHENSOR_API_KEY,
});

// Get all decisions for a specific agent over the past 24 hours
const receipts = await client.receipts.list({
  principal: "agent:finance-agent",
  since: new Date(Date.now() - 24 * 60 * 60 * 1000),
});

for (const receipt of receipts) {
  console.log({
    action: receipt.intent.action,
    decision: receipt.decision,
    matchedRule: receipt.matchedRule,
    timestamp: receipt.createdAt,
    args: receipt.intent.args,
  });
}
```
If a compliance auditor asks for every fund transfer the agent executed in Q1, this is a two-line query. If they ask why a specific transfer was approved, the receipt includes the exact policy rule that matched, the policy version that was active, and the full input that was evaluated.
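The hash-chaining that makes receipts tamper-evident can be sketched in a few lines. This is an illustration of the property, not Authensor's actual receipt format — the field names, the SHA-256 choice, and the all-zeros genesis sentinel are assumptions for the example:

```typescript
import { createHash } from "node:crypto";

interface Receipt {
  intent: { action: string; args: unknown };
  decision: string;
  prevHash: string; // hash of the previous receipt; the first links to a sentinel
  hash: string;     // hash of this receipt's own body
}

const GENESIS = "0".repeat(64);

function hashReceipt(body: Omit<Receipt, "hash">): string {
  return createHash("sha256").update(JSON.stringify(body)).digest("hex");
}

function appendReceipt(
  chain: Receipt[],
  intent: Receipt["intent"],
  decision: string,
): Receipt {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  const body = { intent, decision, prevHash };
  const receipt = { ...body, hash: hashReceipt(body) };
  chain.push(receipt);
  return receipt;
}

// Verify the chain: every receipt must hash correctly and link to its predecessor,
// so editing any past receipt breaks every hash from that point forward.
function verifyChain(chain: Receipt[]): boolean {
  return chain.every((r, i) => {
    const expectedPrev = i === 0 ? GENESIS : chain[i - 1].hash;
    const { hash, ...body } = r;
    return r.prevHash === expectedPrev && hash === hashReceipt(body);
  });
}
```

Because each receipt commits to its predecessor's hash, an auditor who trusts the latest hash can detect any retroactive edit or deletion anywhere in the history.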
The @authensor/openai adapter is open source. Run npx create-authensor to get a working example project with a policy, an OpenAI agent, and receipt storage configured. Full documentation at authensor.com/docs. The source is on GitHub — star it if it is useful, open an issue if something is missing.