Twenty-nine percent. That’s the share of AI agents running inside surveyed organizations without approval from IT or security teams, according to Microsoft‘s own research. Not rogue software planted by outsiders. Agents built, deployed, and forgotten by employees — operating in the dark.
The company announced the general availability of two products on March 9 designed to address exactly that gap: Agent 365, priced at $15 per user per month, and Microsoft 365 Enterprise 7, the so-called “Frontier Worker Suite,” bundled at $99 per user per month. Both go live on May 1, alongside Wave 3 of Microsoft 365 Copilot.
Agent 365 functions as what the company calls the “control plane for agents” — a centralized layer for IT, security, and business teams to observe, govern, and secure AI agents across an enterprise. The higher-priced Microsoft 365 Enterprise 7 bundles that control plane with Microsoft 365 Copilot and the company’s most advanced security stack into a single license. Wave 3 of Copilot adds model diversity from both OpenAI and Anthropic.
The backdrop to this launch is a set of numbers that paint a picture of adoption that has moved faster than oversight. More than 80 percent of Fortune 500 companies are actively using AI agents built with low-code and no-code tools, according to Microsoft‘s Cyber Pulse report published in February. IDC projects 1.3 billion agents in circulation by 2028. Within just two months of preview availability, tens of millions of agents appeared in the Agent 365 Registry. Only 47 percent of organizations use any security tools at all to protect their AI deployments.
The ‘Double Agent’ Problem
Microsoft has a name for the worst-case scenario: “double agents.” The term was first introduced in a November 2025 blog post by Microsoft security executive Charlie Bell, describing AI agents manipulated through prompt injection, model poisoning, or similar techniques into acting against the very organizations they were built to serve.
Vasu Jakkal, corporate vice president of Microsoft Security, told the announcement’s interviewers that no real-world incidents of agent compromise at scale have been observed yet. The company’s AI Red Team has, however, run extensive simulations. In those experiments, both direct and indirect prompt injections successfully manipulated agents into accessing unauthorized data.
“Just like insider risk was a big thing with employees, we need to make sure that we don’t create that with agents,” Jakkal said.
A Threat Vector Already in the Wild
The theoretical is edging toward the practical. In February, Microsoft‘s Defender Security Research Team published findings on what it labeled “AI Recommendation Poisoning” — a technique where companies embed hidden instructions inside “Summarize with AI” buttons on websites. When a user clicks one, the pre-filled prompt attempts to inject persistence commands into an AI assistant’s memory.
Microsoft is also serving as its own first customer for Agent 365, with visibility into more than 500,000 agents running across its corporate environment. The most widely used focus on research, coding, sales intelligence, customer triage, and HR self-service. Tens of thousands of external customers have already begun adopting the platform, according to Judson Althoff, CEO of Microsoft Commercial Business.
“These agents are no longer experimental,” Jakkal said. “The visibility gap creates business risk.”
Photo by Tima Miroshnichenko on Pexels
This article is a curated summary based on third-party sources. Source: Read the original article