Lakera Launches Open-Source Security Benchmark for LLM Backends in AI Agents
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20251028168283/en/
The b3 is built around a new idea called threat snapshots. Instead of simulating an entire AI agent from start to finish, threat snapshots zoom in on the critical points where vulnerabilities in large language models are most likely to appear. By testing models at these exact moments, developers and model providers can see how well their systems stand up to more realistic adversarial challenges without the complexity and overhead of modeling a full agent workflow.
“We built the b3 benchmark because today’s AI agents are only as secure as the LLMs that power them,” said
The benchmark combines 10 representative agent “threat snapshots” with a high-quality dataset of 19,433 crowdsourced adversarial attacks collected via the gamified red teaming game, Gandalf: Agent Breaker. It evaluates susceptibility to attacks such as system prompt exfiltration, phishing link insertion, malicious code injection, denial-of-service, and unauthorized tool calls.
Initial results from testing 31 popular LLMs reveal several key insights:
- Enhanced reasoning capabilities significantly improve security.
- Model size does not correlate with security performance.
- Closed-source models generally outperform open-weight models — though top open models are narrowing the gap.
The is now available under an open-source license at https://arxiv.org/abs/2510.22620.1
Gandalf: Agent Breaker is a hacking simulator game that challenges you to break and exploit AI agents in realistic scenarios and the ten GenAI applications inside the game simulate how a real-world AI agent behaves. Each application features multiple difficulty levels, layered defenses, and novel attack surfaces designed to challenge a range of skill sets, from prompt engineering to red teaming. Some of the apps are chat-based, while others rely on code-level thinking, file processing, memory, or external tool usage.
The initial version of Gandalf was born out of an internal hackathon at Lakera, where blue and red teams tried to build the strongest defenses and attacks for an LLM holding a secret password. Since its release in 2023 it has become the world’s largest red teaming community, generating more than 80 million data points. Initially created as a fun game, Gandalf exposes the real-world vulnerabilities in GenAI applications to raise awareness about the importance of AI-first security.
About Lakera
Lakera, a Check Point company, is a world leading AI-native security platform for Agentic AI applications, protecting Fortune 500 enterprises and leading technology companies from emerging AI cyber risks. Lakera’s defenses evolve in real-time thanks to Gandalf, the world’s largest red teaming community, and their proprietary AI. Lakera was founded by
About
Legal Notice Regarding Forward-Looking Statements
This press release contains forward-looking statements. Forward-looking statements generally relate to future events or our future financial or operating performance. Forward-looking statements in this press release include, but are not limited to, statements related to our expectations regarding our products and solutions and Lakera’s products and solutions, our ability to leverage Lakera’s capabilities and integrate them into Check Point, our ability to deliver end-to-end AI security stack, our foundation of the new Check Point’s
|
____________________ |
|
1 We believe the security benefits of enabling widespread defensive improvements substantially outweigh the risks of potential misuse. That said, we only plan to publish the lower quality version of the attacks for which the most effective attacks have been removed. Prior to release, we are contacting all affected LLM providers and giving them the option of patching their models before releasing the data. |
View source version on businesswire.com: https://www.businesswire.com/news/home/20251028168283/en/
Media Contacts:
Head of Communications
press@lakera.ai
Source: Lakera