The OpenAI Agents SDK provides a framework for building agents that use function calling. The Authensor OpenAI adapter intercepts function calls before execution, evaluates them against your policy, and logs receipts for every decision.
```bash
pnpm add @authensor/openai
```
```typescript
import { withAuthensor } from '@authensor/openai';
import OpenAI from 'openai';

const tools = withAuthensor([
  {
    type: 'function',
    function: {
      name: 'send_email',
      description: 'Send an email',
      parameters: {
        type: 'object',
        properties: {
          to: { type: 'string' },
          subject: { type: 'string' },
          body: { type: 'string' },
        },
      },
    },
    execute: async (args) => await sendEmail(args),
  },
], {
  policyPath: './policy.yaml',
  aegis: { enabled: true },
});
```
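Conceptually, `withAuthensor` wraps each tool's `execute` in a policy check so denied calls never reach the real function. Here is a minimal, standalone sketch of that interception pattern; the `Policy` and `Decision` types and the toy external-email rule are illustrative, not the adapter's actual API:

```typescript
// Sketch of the interception pattern: each tool's execute is replaced by
// a wrapper that consults a policy before running the original function.
type Tool = {
  name: string;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
};

type Decision = { allow: boolean; reason?: string };
type Policy = (name: string, args: Record<string, unknown>) => Decision;

function withPolicy(tools: Tool[], policy: Policy): Tool[] {
  return tools.map((tool) => ({
    ...tool,
    execute: async (args) => {
      const decision = policy(tool.name, args);
      if (!decision.allow) {
        // Denied calls are short-circuited; the model sees the reason
        // as the tool result instead of a real execution.
        return { blocked: true, reason: decision.reason };
      }
      return tool.execute(args);
    },
  }));
}

// Toy policy: block emails to any domain other than example.com.
const policy: Policy = (name, args) =>
  name === 'send_email' && !String(args.to).endsWith('@example.com')
    ? { allow: false, reason: 'External emails require human approval' }
    : { allow: true };

const [guarded] = withPolicy(
  [{ name: 'send_email', execute: async () => 'sent' }],
  policy,
);
```

The key design point is that the policy decision happens in the wrapper, outside the model's control: the model can request the call, but it cannot skip the check.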
When the OpenAI model generates a function call, the adapter evaluates it against your policy before anything executes. If the policy allows the call, the execute function runs normally; if the policy denies it, execution is skipped and the model is told why.

With the OpenAI Agents SDK, the same checks attach as a guardrail:

```typescript
import { Agent, Runner } from 'openai-agents';
import { createAuthensorGuardrail } from '@authensor/openai';

const guardrail = createAuthensorGuardrail({
  policyPath: './policy.yaml',
  aegis: { enabled: true },
});

const agent = new Agent({
  name: 'assistant',
  model: 'gpt-4o',
  tools: [sendEmailTool, searchTool],
  guardrails: [guardrail],
});

const result = await Runner.run(agent, { input: userMessage });
```
When the policy blocks a function call, the model receives a response like:
```
Action blocked by safety policy: "External emails require human approval"
```
The model can then inform the user or try an alternative approach. This keeps the agent in the loop about policy constraints without giving it the ability to override them.
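In a plain chat-completions loop, one way to surface a denial is as the result of the blocked tool call, using a `tool`-role message. The `denialToToolMessage` helper below is an illustrative sketch, not part of the adapter; only the message shape follows the OpenAI chat API:

```typescript
// Turn a policy denial into the tool-role message the model receives,
// so it can inform the user or try an alternative approach.
type Denial = { reason: string };

function denialToToolMessage(toolCallId: string, denial: Denial) {
  return {
    role: 'tool' as const,
    tool_call_id: toolCallId,
    content: `Action blocked by safety policy: "${denial.reason}"`,
  };
}
```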
Connect Sentinel to track patterns across your OpenAI agent's function calls:
```typescript
const guardrail = createAuthensorGuardrail({
  policyPath: './policy.yaml',
  aegis: { enabled: true },
  sentinel: { enabled: true },
});
```
Sentinel tracks metrics like denied actions per minute, tool usage distribution, and anomalous argument patterns. If your agent starts behaving differently from its baseline, Sentinel flags it.
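One of those metrics, denied actions per minute, can be approximated with a simple sliding-window counter. This standalone sketch shows the idea; it is not Sentinel's implementation, and the threshold is an assumed baseline:

```typescript
// Sliding-window counter: record denial timestamps, count those within
// the last windowMs, and flag when the rate exceeds a baseline threshold.
class DenialRateMonitor {
  private timestamps: number[] = [];

  constructor(
    private windowMs = 60_000,   // one minute
    private threshold = 10,      // assumed baseline: >10 denials/min is anomalous
  ) {}

  recordDenial(now = Date.now()): void {
    this.timestamps.push(now);
  }

  deniedPerWindow(now = Date.now()): number {
    // Drop timestamps that have aged out of the window before counting.
    this.timestamps = this.timestamps.filter((t) => now - t <= this.windowMs);
    return this.timestamps.length;
  }

  isAnomalous(now = Date.now()): boolean {
    return this.deniedPerWindow(now) > this.threshold;
  }
}
```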