LangChain is one of the most popular frameworks for building AI agents. The Authensor LangChain adapter wraps LangChain's tool execution with policy enforcement, content scanning, and audit logging. It works with both LangChain and LangGraph agents.
Install the package:

```shell
pnpm add @authensor/langchain
```

This package declares `@authensor/sdk` and `langchain` as peer dependencies, so both must be present in your project.
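Depending on your package manager configuration, peer dependencies may not be installed automatically. If they are missing, add them explicitly:

```shell
pnpm add @authensor/sdk langchain
```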
The adapter provides a `withAuthensor` function that wraps LangChain tools:

```typescript
import { withAuthensor } from '@authensor/langchain';
import { DynamicTool } from 'langchain/tools';

const searchTool = new DynamicTool({
  name: 'web_search',
  description: 'Search the web',
  func: async (query) => await search(query),
});

const safeTool = withAuthensor(searchTool, {
  policyPath: './policy.yaml',
  aegis: { enabled: true },
});
```
The wrapped tool has the same interface as the original. LangChain does not know the safety layer exists. When the agent calls the tool, Authensor evaluates the call before it reaches the original function.
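The interception pattern behind this can be sketched generically. The sketch below is illustrative only, not the adapter's actual internals: the `Tool` type, `evaluate` check, and blocked-keyword rule are all hypothetical stand-ins for the real policy engine.

```typescript
// Illustrative sketch of the wrap-and-intercept pattern: the wrapper
// keeps the tool's interface but runs a policy check before
// delegating to the original func.
type ToolFunc = (input: string) => Promise<string>;

interface Tool {
  name: string;
  description: string;
  func: ToolFunc;
}

// Hypothetical policy check: allow everything except inputs that
// contain a blocked keyword.
async function evaluate(tool: Tool, input: string): Promise<boolean> {
  return !input.includes('secret');
}

function wrapWithPolicy(tool: Tool): Tool {
  return {
    ...tool,
    func: async (input: string) => {
      const allowed = await evaluate(tool, input);
      if (!allowed) {
        // The agent sees an ordinary tool result explaining the block.
        return `Blocked by policy: ${tool.name} was not permitted to run.`;
      }
      return tool.func(input);
    },
  };
}
```

Because the wrapper returns an object with the same shape, the framework calls it exactly as it would the unwrapped tool.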
For LangGraph agents, create a policy-enforced tool node with `createAuthensorToolNode`:

```typescript
import { createAuthensorToolNode } from '@authensor/langchain';
import { StateGraph } from '@langchain/langgraph';

const toolNode = createAuthensorToolNode({
  tools: [searchTool, calculatorTool, emailTool],
  policyPath: './policy.yaml',
  aegis: { enabled: true },
  sentinel: { enabled: true },
});

const graph = new StateGraph({ channels })
  .addNode('agent', agentNode)
  .addNode('tools', toolNode)
  .addEdge('agent', 'tools')
  .addEdge('tools', 'agent')
  .compile();
```
When a tool call is escalated, the adapter returns a message to the agent explaining that the action requires approval. You can also supply an `onEscalate` handler to pause the call and wait for a human decision:

```typescript
const safeTool = withAuthensor(emailTool, {
  policyPath: './policy.yaml',
  onEscalate: async (decision) => {
    const approval = await waitForApproval(decision.receipt.id);
    return approval.granted;
  },
});
```
If you do not provide an `onEscalate` handler, escalated actions are treated as blocked and the agent receives an error message.
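That fail-closed default can be sketched as follows. This is a hypothetical illustration of the behavior described above, not the adapter's real code; `resolveEscalation` and its types are invented for the example.

```typescript
// Illustrative default-deny sketch: an escalated call resolves
// through onEscalate when a handler is provided, and is treated
// as blocked otherwise.
type EscalateHandler = (decision: { receipt: { id: string } }) => Promise<boolean>;

async function resolveEscalation(
  decision: { receipt: { id: string } },
  onEscalate?: EscalateHandler,
): Promise<{ allowed: boolean; message: string }> {
  if (!onEscalate) {
    // No handler configured: fail closed and tell the agent why.
    return {
      allowed: false,
      message: 'Action requires approval and no approver is configured.',
    };
  }
  const granted = await onEscalate(decision);
  return granted
    ? { allowed: true, message: 'Approved.' }
    : { allowed: false, message: 'Approval was denied.' };
}
```

Failing closed means a misconfigured deployment blocks sensitive actions rather than silently allowing them.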
Every tool call generates a receipt. The adapter attaches receipt IDs to LangChain's run metadata, so you can correlate safety decisions with LangSmith traces:
```typescript
const safeTool = withAuthensor(searchTool, {
  policyPath: './policy.yaml',
  metadata: { langsmith: true },
});
```
This makes it possible to see, in a single trace, what the agent tried to do, what the safety layer decided, and what actually executed.