Enterprise AI’s Authorization Gap and the Identity Problem

By alex2404

No one knows whose identity an AI agent is acting under. That gap, according to Alex Stamos and Nancy Wang, is becoming one of the most serious structural problems in enterprise security.

Stamos, chief product officer at Corridor, says the most common behavior his company observes is developers pasting credentials directly into prompts. “The standard thing is you just go grab an API key or take your username and password and you just paste it into the prompt,” he said. “We find this all the time because we’re hooked in and grabbing the prompt.” Corridor flags it and routes the developer toward proper secrets management — but the fact that it happens constantly reveals how far current practice sits from where it needs to be.

Wang, CTO at 1Password, is watching the same pattern from the other side. Her company scans code as it is written, vaulting any plain-text credentials before they persist. The design logic is deliberate. “If it’s too hard to use, to bootstrap, to get onboarded, it’s not going to be secure because frankly people will just bypass it and not use it,” she said. Security tooling that creates friction does not get used — and unused tooling solves nothing.
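The scan-and-vault pattern Wang describes can be sketched as a pre-commit check. The patterns below are illustrative assumptions; production scanners such as 1Password's use far richer rule sets plus entropy analysis.

```python
import re

# Hypothetical detection patterns -- real scanners use many more rules
# plus entropy checks to catch high-randomness strings.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),  # inline API key
]

def find_plaintext_secrets(source: str) -> list[str]:
    """Return any substrings that look like hard-coded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits

code = 'api_key = "sk-test-1234"  # pasted straight into the file'
print(find_plaintext_secrets(code))  # ['api_key = "sk-test-1234"']
```

In the workflow Wang describes, a hit would trigger vaulting the credential and rewriting the code to fetch it at runtime, rather than simply warning the developer.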

The identity question no one has answered

Both executives spoke at the VB AI Impact Salon Series, where the conversation kept returning to a question that current frameworks were not built to answer. “At a high level, it’s not just who this agent belongs to or which organization this agent belongs to, but what is the authority under which this agent is acting, which then translates into authorization and access,” Wang said.

Agents, she noted, also have secrets — API keys, credentials, tokens. The same lifecycle problems that plague human access management now apply to software that can act autonomously across systems. 1Password arrived in enterprise environments because employees brought a consumer tool they already trusted to work. Wang sees the same dynamic accelerating with AI agents.
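One standard answer to the lifecycle problem Wang raises is short-lived credentials, so a leaked token expires quickly. This is a minimal sketch; the TTL, token format, and function names are assumptions, not any vendor's API.

```python
import secrets
import time

def issue_agent_token(agent_id: str, ttl_seconds: int = 900) -> dict:
    """Mint a random token with an expiry, limiting the blast radius of a leak."""
    return {
        "agent_id": agent_id,
        "token": secrets.token_urlsafe(32),
        "expires_at": time.time() + ttl_seconds,
    }

def is_token_valid(token: dict) -> bool:
    """A token is only honored before its expiry time."""
    return time.time() < token["expires_at"]

tok = issue_agent_token("agent-7")
print(is_token_valid(tok))  # True
```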

Internally, the company tracks the ratio of security incidents to AI-generated code as engineers use tools like Claude Code and Cursor. “That’s a metric we track intently to make sure we’re generating quality code,” she said.

When the agent has more access than anyone else

Spiros Xanthos, founder and CEO at Resolve AI, put the risk plainly at the same event: “An agent typically has a lot more access than any other software in your environment. So, it is understandable why security teams are very concerned about that. Because if that attack vector gets utilized, then it can both result in a data breach, but even worse, maybe you have something in there that can take action on behalf of an attacker.”

Scoping that access is where the engineering gets hard. Wang pointed to SPIFFE and SPIRE, workload identity standards built for containerized environments, as candidates being tested in agentic contexts. She acknowledged the fit is imperfect. “We’re kind of force-fitting a square peg into a round hole,” she said.
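To make the SPIFFE idea concrete: a SPIFFE ID is a URI of the form spiffe://&lt;trust-domain&gt;/&lt;workload-path&gt;, and the "force-fit" for agents amounts to assigning each agent such an identity. The path convention below (agents/&lt;name&gt;) is an illustrative assumption, not part of the standard.

```python
from urllib.parse import urlparse

def make_agent_spiffe_id(trust_domain: str, agent_name: str) -> str:
    """Build a SPIFFE-style identity URI for an agent workload.

    The agents/<name> path layout is an assumed convention for this sketch.
    """
    return f"spiffe://{trust_domain}/agents/{agent_name}"

def is_valid_spiffe_id(spiffe_id: str) -> bool:
    """Minimal shape check: spiffe scheme, a trust domain, and a workload path."""
    parsed = urlparse(spiffe_id)
    return parsed.scheme == "spiffe" and bool(parsed.netloc) and bool(parsed.path)

agent_id = make_agent_spiffe_id("example.org", "support-triage")
print(agent_id)  # spiffe://example.org/agents/support-triage
```

The mismatch Wang notes is visible even here: SPIFFE identities were designed for long-running workloads with stable paths, while agents may be spawned per task with authority delegated from a human.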

Authentication alone does not close the problem. Once a credential exists, the question becomes what the agent holding it is actually permitted to do. Wang framed the answer in terms of least privilege applied to tasks rather than roles. “You wouldn’t want to give a human a key card to an entire building that has access to every room in the building,” she said.
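Least privilege scoped to tasks rather than roles can be sketched as a grant attached to a single task, checked on every resource access. The data model and names here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskGrant:
    """Permissions for one task -- the key card to one room, not the building."""
    task: str
    allowed_resources: frozenset[str]

@dataclass(frozen=True)
class AgentSession:
    agent_id: str
    grant: TaskGrant  # one grant per task, not a standing role

def can_access(session: AgentSession, resource: str) -> bool:
    """Allow access only to resources the current task was granted."""
    return resource in session.grant.allowed_resources

grant = TaskGrant(task="summarize-ticket-1234",
                  allowed_resources=frozenset({"tickets/1234"}))
session = AgentSession(agent_id="agent-7", grant=grant)
print(can_access(session, "tickets/1234"))  # True
print(can_access(session, "tickets/9999"))  # False
```

When the task completes, the grant is discarded, so the agent never accumulates standing access the way a role-based account does.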

On the coding side, false positives from security scanners introduce a separate failure mode. Large language models, Stamos observed, are prone to agreement. “If you tell it this is a flaw, it’ll be like, yes sir, it’s a total flaw!” Getting precision right matters disproportionately in this context. “You cannot screw up and have a false positive, because if you tell it that and you’re wrong, you will completely ruin its ability to write correct code.” Corridor has engineered its scanning to run at a latency of a few hundred milliseconds per scan to make that precision viable in practice.


This article is a curated summary based on third-party sources.
