security · red-teaming · findings

168 Repositories. 350+ Vulnerabilities. What We Found Auditing the AI/ML Ecosystem.

We built an automated adversarial analysis pipeline and pointed it at the AI/ML ecosystem.

Not a sample. Not a handful of popular repos. 168 repositories across 50+ organizations, covering the full stack: training frameworks, inference servers, agent toolkits, model format libraries, safety evaluation tools, and production ML infrastructure.

The results were worse than we expected.

By the numbers

  • 168 repositories audited
  • 350+ verified vulnerabilities identified
  • 126 responsible disclosure reports prepared
  • 2 novel vulnerability classes discovered
  • 50+ organizations affected
  • Coordinated disclosure in progress

Every finding has reproduction steps, and every report follows responsible disclosure conventions. We are not publishing specifics until affected organizations have had time to patch.

What we looked for

The pipeline covers six primary vulnerability categories:

  • Deserialization attacks -- unsafe loading of serialized objects, pickle exploitation, model format abuse
  • Injection vulnerabilities -- prompt injection, SQL injection, command injection, template injection
  • Authentication and authorization bypass -- missing auth checks, privilege escalation, token handling flaws
  • Code execution -- sandbox escapes, arbitrary code execution through intended functionality, eval/exec abuse
  • Supply chain risks -- dependency confusion, typosquatting exposure, unsigned artifacts
  • Model format exploits -- abusing the trust boundary between "safe" and "unsafe" model serialization formats
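Deserialization is the most concrete of these categories. As an illustration of why pickle-based model loading belongs next to code execution, here is a minimal, deliberately benign Python sketch. This is our own example of the general mechanism, not a finding from the audit:

```python
import pickle

class Payload:
    """Benign demonstration: __reduce__ lets a pickle specify an
    arbitrary callable to invoke at load time."""
    def __reduce__(self):
        # A hostile checkpoint would return something like
        # (os.system, ("<command>",)). We use eval on a harmless
        # expression to show the mechanism without side effects.
        return (eval, ("6 * 7",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)   # code runs during deserialization itself
print(result)                 # 42 -- the expression executed; no Payload was restored
```

Any code path that feeds untrusted model files into `pickle.loads` (directly or through a framework's checkpoint loader) inherits this behavior, which is why "just loading" a model can be equivalent to running it.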

We are deliberately not disclosing which specific scanners, rules, or detection techniques the pipeline uses. The methodology is proprietary.

Two novel vulnerability classes

We discovered two vulnerability classes that, to our knowledge, have not been previously documented:

The first involves bypassing a model serialization format that was specifically designed to be safe. The format was created as an industry response to known deserialization risks, and the ecosystem is actively migrating toward it. We found that the safety guarantee can be circumvented. Details will be published after coordinated disclosure.

The second involves escaping a sandboxed code execution environment through the semantics of libraries that the sandbox explicitly allows. The sandbox correctly restricts dangerous functions, but certain combinations of permitted operations can be chained to achieve arbitrary attribute access. Details will be published after coordinated disclosure.
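We will not describe the specific chain we found, but the general shape of attribute-access escapes in Python expression sandboxes is well documented. A generic sketch of the pattern, illustrative only and not the reported vulnerability:

```python
# Starting from any innocuous object a sandbox hands to user expressions,
# dunder attributes walk back out to the full interpreter state.
start = ""                        # a plain string literal
cls = start.__class__             # str
root = cls.__mro__[-1]            # object, the root of every class
loaded = root.__subclasses__()    # every class loaded in the process

# From here an attacker searches `loaded` for a class whose methods
# expose os or builtins via __globals__. A sandbox that blocks `import`
# and dangerous names, but permits attribute access on allowed objects,
# can often be walked this way.
print(root.__name__, len(loaded) > 0)   # object True
```

The point is structural: restricting which functions are callable does not restrict where attribute traversal can reach, so an allowlist of "safe" libraries can still compose into an escape.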

Both have been reported to the affected maintainers.

The evaluators have the same bugs

The finding that concerns us most isn't any single vulnerability. It's the pattern.

The AI safety evaluation tools -- the systems designed to test whether AI is safe to deploy -- contain the same vulnerability classes they are supposed to detect. Safety scanners with deserialization flaws. Red teaming tools with injection vulnerabilities. Guardrail systems that can be bypassed through their own dependencies.

This is not a theoretical concern. We found concrete, reproducible vulnerabilities in production safety tools that organizations rely on for deployment decisions. If the tools certifying that a system is safe are themselves compromised, the certification is meaningless.

We have prior work in this area. In April 2026, we identified and fixed a prompt injection vulnerability in UK AISI's ControlArena monitor framework (PR #798), in which adversarial agents could inject content into their own safety evaluations. The pattern is broader than one framework.

Why we are publishing this now

Coordinated disclosure is in progress. We are not naming specific products, organizations, or vulnerability details until maintainers have had adequate time to respond and patch.

We are publishing the aggregate numbers and general categories because:

  1. The scale matters. 350+ vulnerabilities across 168 repos is not a statistical anomaly. It indicates systemic issues in how AI/ML software handles trust boundaries.
  2. The safety tool findings matter more. If the evaluation layer is compromised, nothing downstream can be trusted. This needs attention from the research community and from organizations making deployment decisions based on these tools.
  3. We want to be transparent about our process. We ran the pipeline, we found what we found, and we are following responsible disclosure practices. The specifics will follow once disclosure timelines allow.

What happens next

We are working through 126 disclosure reports. Some organizations have already responded. Others have not. We will publish detailed findings -- including the two novel vulnerability classes -- as patches are deployed and disclosure timelines are met.

If you are a maintainer of an AI/ML project and want to know whether your repository was included in this audit, contact us at john@authensor.com.

If you want this pipeline pointed at your own infrastructure before someone else finds what we would find, that is what Authensor's red teaming services exist for.


John Kearney is the founder of Authensor and 15 Research Lab. He builds automated adversarial analysis tools for AI/ML infrastructure and contributes to AI safety evaluation frameworks including UK AISI's ControlArena.