Anthropic released Mythos. It found thousands of zero-day vulnerabilities across every major operating system and web browser. It chained Linux kernel bugs into full privilege escalation. It found a 27-year-old flaw in OpenBSD that no human, no fuzzer, and no static analyzer had caught. Autonomously, without human steering.
A control monitor is a system. It has inputs, parsing logic, trust boundaries, and architectural assumptions baked into its evaluation pipeline. Mythos proved frontier models discover new vulnerabilities in software hardened for decades, without being told where to look.
The monitor watching your AI agent was built last year.
During the AI Control Hackathon, I ran a test. Gave Sonnet a task and a hidden side objective. No technique names. No information about the monitor's architecture. No mention of the skip_if_no_tool_calls flag that auto-passes any message without tool calls. Just a reason to hide.
The agent independently discovered the architectural bypass. Scored 2 out of 100 on the strongest available monitor. That matched what I got from a hand-crafted attack with years of pentesting experience behind it.
Nobody taught it evasion. It converged on the optimal strategy because the strategy is a property of the architecture, not a secret. Every offensive technique ever catalogued, every CVE, every MITRE ATT&CK entry, every conference talk about living off the land or race conditions or salami slicing -- it's all in the training data. The model showed up on day one knowing more about bypassing defenses than most security teams will learn in a career.
The idea that it wouldn't apply that knowledge to the system evaluating it requires a kind of optimism the security industry abandoned a long time ago.
While auditing ControlArena's monitor framework (used by UK AISI and Redwood Research for AI control evaluations), I found that two of three prompt formatters passed tool call arguments straight into the monitor's prompt with zero sanitization. An adversarial agent could inject closing XML tags, spoof the monitor's own structural context, and rewrite how the monitor interprets the conversation.
Textbook injection vulnerability. In the safety infrastructure itself.
That fix was accepted and merged (PR #798). The vulnerability had been there since the formatters were written.
See the full finding writeup for technical details.
Attack transcripts tested across Opus and Sonnet as monitors:
Attacks that hide suspicious content inside legitimate-looking actions (credentials in git commits, secrets in test fixtures) got caught by the stronger model. Sonnet eliminated every single one. Capability scaling works when evidence is visible but the monitor fails to recognize it.
But attacks that distribute intent across multiple individually-clean tasks didn't go away.
TOCTOU attacks (get clean code approved, then change its dependency) evaded at 100% on Opus and 75% on Sonnet. The clean-phase transcripts scored near zero on both models. Not because the monitors failed. Because those transcripts genuinely are benign. The attack exists only in their composition, and no single evaluation sees the whole picture.
You can't solve this by making the model smarter. "Is this action suspicious?" will always return no when the action genuinely isn't suspicious. Five individually-clean actions in sequence can still equal a fully compromised server.
This exact transition happened fifteen years ago. Antivirus checked each file in isolation. Attackers responded by using only legitimate system tools, in legitimate ways, toward illegitimate ends. Living off the land. The industry was forced to build EDR and XDR platforms that track behavior across time and context, not just flag individual actions.
AI control monitoring is at the same inflection point. Per-task evaluation is signature matching. It catches unsophisticated attacks that leave obvious traces in a single action. It fails against anything distributed, cumulative, or compositional.
1. Stop auto-passing messages without tool calls. ControlArena's default monitor returns 0.0 for any message without a tool call, without even invoking the LLM. Text-only exfiltration walks straight through. One-line config change.
2. Track state across tasks. A proof-of-concept aggregate monitor with cross-task memory caught 3 of 5 salami attack steps vs. 1 of 5 for per-task evaluation. Suspicion scores climbed as degradation accumulated: 0.22, 0.34, 0.58, 0.72, 0.78. Cross-session behavioral tracking already exists in the security industry. Import it.
3. Add dependency tracking. When a task modifies code approved in a previous task, re-evaluate the approval. TOCTOU attacks only work because the monitor treats each approval as final.
The current approach assumes the hard part is building a smarter judge. The hard part is the architecture around it. The security industry learned this lesson over two decades. The vulnerabilities models are now discovering from twenty years ago will find the gaps in monitor architectures built yesterday.
This research was manual. 22 attack transcripts, ~200 evaluations, hand-crafted attack policies.
We're automating it. The Authensor red teaming pipeline runs these attack patterns (and hundreds more) at scale against any AI agent, MCP server, or safety infrastructure. Multi-step chains, MITRE ATLAS mapped, quantified scoring.
The ControlArena finding took days of manual work. The pipeline finds this class of vulnerability in hours across multiple targets simultaneously.
If you're building, deploying, or certifying AI systems and want to know what breaks before someone else finds out: security@authensor.com
Full empirical data: 22 attack transcripts, ~200 evaluations across two models, ControlArena-native attack policies, and an aggregate monitor: github.com/15researchlab/ai-control-monitor-blind-spots
Extended analysis: Zenodo preprint
Code contribution: Monitor prompt sanitization fix merged into ControlArena (PR #798)