
Setting up Authensor with OpenAI Agents SDK

The OpenAI Agents SDK provides a framework for building agents that use function calling. The Authensor OpenAI adapter intercepts function calls before execution, evaluates them against your policy, and logs receipts for every decision.

Install

pnpm add @authensor/openai

Wrap function tools

import { withAuthensor } from '@authensor/openai';
import OpenAI from 'openai';

const tools = withAuthensor([
  {
    type: 'function',
    function: {
      name: 'send_email',
      description: 'Send an email',
      parameters: {
        type: 'object',
        properties: {
          to: { type: 'string' },
          subject: { type: 'string' },
          body: { type: 'string' },
        },
      },
    },
    execute: async (args) => await sendEmail(args),
  }
], {
  policyPath: './policy.yaml',
  aegis: { enabled: true },
});
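
The policy.yaml referenced above holds the rules the engine evaluates. The sketch below is purely illustrative -- the field names and structure are assumptions, not Authensor's documented schema -- but it shows the kind of rule that produces the block message used later in this guide:

```yaml
# Hypothetical policy sketch -- field names are illustrative,
# not Authensor's real schema.
rules:
  - tool: send_email
    when:
      to: "*@example.com"   # internal recipients
    action: allow
  - tool: send_email        # everything else is external
    action: block
    reason: "External emails require human approval"
default: block
```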

How it works

When the OpenAI model generates a function call:

  1. The adapter intercepts the call before your execute function runs
  2. Aegis scans the function arguments for prompt injection and content threats
  3. The policy engine evaluates the call against your YAML rules
  4. If allowed, your execute function runs normally
  5. If blocked, the adapter returns an error message to the model
  6. If escalated, the call is held for human approval
  7. A receipt is logged regardless of the outcome
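
The steps above can be sketched as a toy decision flow. This is not the adapter's real internals -- `evaluatePolicy`, `guardedExecute`, and the hypothetical "internal domain" rule are illustrative assumptions -- but it shows how one evaluation produces exactly one of the three outcomes while always logging a receipt:

```typescript
// Toy sketch of the decision flow -- NOT the real @authensor/openai internals.
type Decision = 'allow' | 'block' | 'escalate';

interface Receipt {
  tool: string;
  decision: Decision;
  reason?: string;
  timestamp: number;
}

const receipts: Receipt[] = [];

// Hypothetical policy: block emails to recipients outside an internal domain.
function evaluatePolicy(
  tool: string,
  args: Record<string, unknown>,
): { decision: Decision; reason?: string } {
  if (
    tool === 'send_email' &&
    typeof args.to === 'string' &&
    !args.to.endsWith('@example.com')
  ) {
    return { decision: 'block', reason: 'External emails require human approval' };
  }
  return { decision: 'allow' };
}

async function guardedExecute(
  tool: string,
  args: Record<string, unknown>,
  execute: (args: Record<string, unknown>) => Promise<string>,
): Promise<string> {
  const { decision, reason } = evaluatePolicy(tool, args);
  // Step 7: a receipt is logged regardless of the outcome.
  receipts.push({ tool, decision, reason, timestamp: Date.now() });
  if (decision === 'block') {
    // Step 5: the model receives an error message instead of a tool result.
    return `Action blocked by safety policy: "${reason}"`;
  }
  // Step 4: allowed calls run normally.
  return execute(args);
}
```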

Integration with the Agents SDK runner

import { Agent, Runner } from 'openai-agents';
import { createAuthensorGuardrail } from '@authensor/openai';

const guardrail = createAuthensorGuardrail({
  policyPath: './policy.yaml',
  aegis: { enabled: true },
});

const agent = new Agent({
  name: 'assistant',
  model: 'gpt-4o',
  tools: [sendEmailTool, searchTool],
  guardrails: [guardrail],
});

const result = await Runner.run(agent, { input: userMessage });

Blocked action behavior

When the policy blocks a function call, the model receives a response like:

Action blocked by safety policy: "External emails require human approval"

The model can then inform the user or try an alternative approach. This keeps the agent in the loop about policy constraints without giving it the ability to override them.
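
If your application code needs to surface these blocks to the user directly, you can detect them in the tool result. The helper below is not part of @authensor/openai; it is a small illustrative sketch that assumes the block message keeps the quoted-reason format shown above:

```typescript
// Illustrative helper (not an @authensor/openai export): pull the
// human-readable reason out of a policy-block tool result.
function parsePolicyBlock(toolResult: string): string | null {
  const match = toolResult.match(/^Action blocked by safety policy: "(.+)"$/);
  return match ? match[1] : null;
}
```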

Monitoring

Connect Sentinel to track patterns across your OpenAI agent's function calls:

const guardrail = createAuthensorGuardrail({
  policyPath: './policy.yaml',
  aegis: { enabled: true },
  sentinel: { enabled: true },
});

Sentinel tracks metrics like denied actions per minute, tool usage distribution, and anomalous argument patterns. If your agent starts behaving differently from its baseline, Sentinel flags it.
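
To make one of those metrics concrete, here is a toy sliding-window tracker for denied actions per minute. It assumes nothing about Sentinel's actual implementation; the class and threshold logic are illustrative only:

```typescript
// Toy sketch of a Sentinel-style metric: denied actions per minute.
// Illustrative only -- not Sentinel's real implementation.
class DenialRateTracker {
  private denials: number[] = []; // timestamps (ms) of denied actions

  record(timestampMs: number): void {
    this.denials.push(timestampMs);
  }

  // Count denials in the 60-second window ending at `nowMs`.
  perMinute(nowMs: number): number {
    return this.denials.filter((t) => nowMs - t <= 60_000).length;
  }

  // Flag when the current rate exceeds a baseline-derived threshold.
  isAnomalous(nowMs: number, threshold: number): boolean {
    return this.perMinute(nowMs) > threshold;
  }
}
```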
