On March 3, Endor Labs launched AURI, a free security platform that embeds real-time vulnerability intelligence into AI coding assistants. The product integrates natively with tools including Cursor, Claude, and Augment via the Model Context Protocol (MCP), and is available at no cost to individual developers.
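MCP-based tools are typically registered in the assistant's configuration file; in Cursor, for example, that is `.cursor/mcp.json` under an `mcpServers` key. The server name, package, and command below are placeholders for illustration only, not Endor Labs' published setup instructions:

```json
{
  "mcpServers": {
    "auri": {
      "command": "npx",
      "args": ["-y", "@endorlabs/auri-mcp"]
    }
  }
}
```

Once a server like this is registered, the coding assistant can query it during generation, which is how vulnerability intelligence reaches the model at the point where code is written.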
The launch follows research from Carnegie Mellon University, Columbia University, and Johns Hopkins University published in December, which found that leading AI coding models produce functionally correct code only about 61% of the time. Of that output, just 10% is both functional and secure.
“Even though AI can now produce functionally correct code 61% of the time, only 10% of that output is both functional and secure,” said Varun Badhwar, CEO of Endor Labs. “These coding agents were trained on open source code from across the internet, so they’ve learned best practices — but they’ve also learned to replicate a lot of the same security problems of the past.”
A Structural Problem in AI-Assisted Development
The core issue is how AI coding models are built. They train on vast open-source repositories that contain not only sound engineering patterns but also well-documented vulnerabilities and insecure code that may have gone undetected for years. The models replicate both equally.
Badhwar, who previously founded RedLock before its acquisition by Palo Alto Networks, co-founded Endor Labs four years ago with Dimitri Stiliadis. The startup, which has raised more than $208 million in venture funding, originally focused on the trend of developers acting as “software assemblers,” pulling components from open source repositories rather than writing original code. The rapid rise of AI coding tools accelerated that dynamic into something far harder to secure.
New vulnerabilities surface daily in software written five, ten, even twelve years ago, and that threat intelligence does not reach the AI models generating new code today. “If you started filtering out anything that ever had a vulnerability, you’d have no code left to train on,” Badhwar noted. The result is AI tools producing insecure code at speed, while security teams running traditional scanning tools fall further behind.
How AURI Traces Risk at the Function Level
AURI’s technical foundation is what Endor Labs calls a “code context graph.” It maps how an application’s first-party code, open source dependencies, container layers, and AI models connect to one another at the function level, not just at the library level.
Tools like Snyk and GitHub’s Dependabot identify which libraries an application imports and check them against known vulnerability databases. AURI goes further, tracing exactly how and where those components are actually called, down to the specific line of code.
Badhwar offered a direct example: a developer might import a large library such as an AWS SDK but only invoke two services across ten lines of code. A traditional scanner flags the entire library if any part of it carries a known vulnerability. AURI identifies whether the vulnerable function is the one actually being used, reducing noise and helping developers act on what genuinely matters.
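The distinction between library-level and function-level scanning can be sketched as a reachability query over a call graph. The sketch below is illustrative only: the graph, function names, and "vulnerable function" are invented, and this is a simplification of whatever Endor Labs' code context graph actually does.

```python
# Minimal sketch of function-level reachability analysis.
# A library-level scanner flags the whole SDK if any function in it
# is vulnerable; a function-level check only fires when the
# vulnerable function is actually reachable from app code.
from collections import deque

# Call graph: caller -> functions it invokes directly.
# The app imports a large SDK but only calls two of its services.
call_graph = {
    "app.upload_report":    ["sdk.s3.put_object"],
    "app.notify":           ["sdk.sns.publish"],
    "sdk.s3.put_object":    ["sdk.internal.sign_request"],
    "sdk.sns.publish":      ["sdk.internal.sign_request"],
    # The vulnerable function ships in the same library but is
    # never invoked from the application's entry points.
    "sdk.xml.parse_config": ["sdk.internal.unsafe_expand_entities"],
}

def reachable(entry_points, graph):
    """Return every function reachable from the given entry points (BFS)."""
    seen, queue = set(), deque(entry_points)
    while queue:
        fn = queue.popleft()
        if fn in seen:
            continue
        seen.add(fn)
        queue.extend(graph.get(fn, []))
    return seen

def vulnerable_code_in_use(entry_points, graph, vulnerable_fn):
    """True only if the vulnerable function sits on an actual call path."""
    return vulnerable_fn in reachable(entry_points, graph)

entry_points = ["app.upload_report", "app.notify"]
print(vulnerable_code_in_use(entry_points, call_graph,
                             "sdk.internal.unsafe_expand_entities"))  # False
print(vulnerable_code_in_use(entry_points, call_graph,
                             "sdk.internal.sign_request"))            # True
```

A library-level scanner would flag this application either way, because the SDK contains `unsafe_expand_entities`; the function-level query returns False for it, which is the noise reduction the example above describes.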
- 90% of development teams now use AI coding assistants
- AI models produce functional code only ~61% of the time
- Only 10% of AI-generated code is both functional and secure
- AURI is free for individual developers
- Integrates with Cursor, Claude, and Augment via MCP
The free pricing is a deliberate distribution strategy. Reaching developers at the point of code generation, before vulnerabilities move downstream into production, is the problem Endor Labs is betting AURI can solve.
This article is a curated summary based on third-party sources.