Truffle Security discovered 2,863 live Google Cloud API keys sitting openly on the public internet — including on a website linked to Google itself.
The keys carry the prefix “AIza” and were originally deployed as billing identifiers, embedded in client-side JavaScript to power services such as Google Maps embeds. According to the report, they were never intended to authenticate AI workloads.
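Keys in this format are easy to spot programmatically. A minimal sketch of the kind of pattern match a scraper (or a defensive scanner checking your own pages) might use, assuming the commonly documented 39-character key format (“AIza” plus 35 URL-safe characters):

```python
import re

# Google API keys start with "AIza" followed by 35 URL-safe characters
# (39 characters total) -- an assumption based on the commonly seen format.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_candidate_keys(text: str) -> list[str]:
    """Return any substrings of `text` that look like Google API keys."""
    return KEY_PATTERN.findall(text)

# Example: scanning a snippet of client-side JavaScript (fake key)
snippet = 'var mapsKey = "AIza' + "A" * 35 + '";'
print(find_candidate_keys(snippet))
```

Running a check like this over your own deployed JavaScript is a quick way to find keys that are exposed to exactly this kind of scraping.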
The exposure stems from how Google Cloud handles API enablement. When a developer activates the Gemini API on a project, every existing API key in that project automatically inherits access to Gemini endpoints — with no warning issued and no notice sent.
Security researcher Joe Leon described the consequence directly: “With a valid key, an attacker can access uploaded files, cached data, and charge LLM-usage to your account.” The keys, he added, “now also authenticate to Gemini even though they were never intended for it.”
What Attackers Can Do With a Scraped Key
Anyone who scrapes public websites can collect these keys and use them to query sensitive endpoints such as /files and /cachedContents, make Gemini API calls, and run up charges on the victim’s account.
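To make the exposure concrete, here is a sketch of the request URLs involved, assuming the Gemini API’s standard base URL and its `key` query parameter (URL construction only; actually issuing the requests, and what status codes an unauthorized key would receive, is left out):

```python
# Base URL for the Gemini (Generative Language) API
BASE = "https://generativelanguage.googleapis.com/v1beta"

# The sensitive endpoints named in the report
SENSITIVE_ENDPOINTS = ["files", "cachedContents"]

def probe_urls(api_key: str) -> list[str]:
    """Build the URLs an attacker -- or a defender auditing their own
    leaked key -- would request to list uploaded files and cached content."""
    return [f"{BASE}/{ep}?key={api_key}" for ep in SENSITIVE_ENDPOINTS]

# "AIza..." is a placeholder, not a real key
for url in probe_urls("AIza..."):
    print(url)
```

The point is that nothing beyond the key itself is needed: the key rides in the query string, so a scraped key is a complete credential.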
The default configuration compounds the problem. New API keys created in Google Cloud are set to “Unrestricted” by default, meaning they apply to every enabled API in the project — including Gemini.
The financial exposure is not theoretical. A Reddit user recently claimed a stolen key generated $82,314.44 in charges between February 11 and 12, 2026, against a regular monthly spend of $180.
Broader Exposure Across Mobile Apps
Mobile security firm Quokka published a parallel finding: over 35,000 unique Google API keys found in a scan of 250,000 Android apps.
The firm noted that even without direct customer data exposure, the combination of inference access, quota consumption, and possible integration with broader Google Cloud resources “creates a risk profile that is materially different from the original billing-identifier model developers relied upon.”
Google has acknowledged the findings. “We have already implemented proactive measures to detect and block leaked API keys that attempt to access the Gemini API,” a spokesperson said. Whether any keys were exploited before disclosure remains unknown.
Truffle Security advises affected developers to audit which AI-related APIs are enabled on their projects and rotate any keys that are publicly accessible, starting with the oldest keys, which are most likely to have accumulated permissions retroactively.
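That audit can be done from the gcloud CLI. A configuration sketch, assuming the `gcloud services api-keys` command group; `KEY_ID` is a placeholder, and the Maps service name is illustrative (substitute whichever APIs the key was actually created for):

```shell
# List enabled APIs on the project -- look for generativelanguage.googleapis.com
gcloud services list --enabled

# List all API keys in the project, including their current restrictions
gcloud services api-keys list

# Restrict a key to only the API it was deployed for (e.g. Maps),
# removing the implicit access it inherited to Gemini endpoints
gcloud services api-keys update KEY_ID \
  --api-target=service=maps-backend.googleapis.com
```

Restricting a key is not a substitute for rotating one that is already public, but it limits what any future leak of that key can reach.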
Security strategist Tim Erlin of Wallarm framed the issue plainly: “This is a great example of how risk is dynamic, and how APIs can be over-permissioned after the fact. Security testing, vulnerability scanning, and other assessments must be continuous.”
This article is a curated summary based on third-party sources.