A new disclosure from security researchers has revealed how a zero-click flaw inside Google’s enterprise AI suite exposed sensitive corporate data. The issue, now known as the GeminiJack vulnerability, showed that attackers could plant hidden instructions in everyday Workspace files and trigger silent data extraction during normal AI-assisted searches. The discovery highlights an emerging class of risks created by AI systems with deep access to organisational information.
What Researchers Discovered
Security analysts uncovered a zero-click data exposure path inside Gemini Enterprise, Google’s AI service for Workspace environments. The issue stemmed from the way the platform combined user queries with retrieved documents, emails, and calendar entries. Attackers could plant hidden instructions inside otherwise harmless files. When the AI processed those files during routine searches, it followed the embedded commands without raising alerts.
This approach required no clicks, downloads, or visible user actions. A poisoned document sitting inside a shared drive was enough to trigger the sequence the moment someone used Gemini Enterprise to run a related query.
How the GeminiJack Vulnerability Worked
Hidden Prompt Injection Through Everyday Files
Attackers prepared a Google Doc, email, or calendar item containing concealed directives. These directives did not need to appear in visible text. They could be embedded in formatting, metadata, or sections users rarely inspect.
Routine AI Retrieval Triggered the Attack
When an employee used Gemini Enterprise to gather information, the system automatically included relevant Workspace materials. The poisoned file slipped into the retrieval chain and merged with the prompt. This action gave the attacker’s instructions the same weight as legitimate context.
AI-Driven Data Extraction
Once activated, the embedded directives told the AI to search across Workspace materials. The instructions pushed Gemini Enterprise to assemble internal emails, calendar items, confidential files, and various corporate assets. The platform gathered the information with the same privileges granted to the authorized user.
Silent Exfiltration Through External Requests
The final step used a subtle technique. The AI placed extracted data inside a constructed image URL. When the user viewed the AI response, the browser requested the external resource. That remote request transmitted the encoded data to the attacker’s server. Traditional security tools rarely inspect this channel, and nothing in the user interface suggested malicious activity.
What Is GeminiJack’s Vulnerability Impact
The flaw did not rely on malware, phishing, or credential theft. It exploited trust placed in AI systems with broad internal access. Organisations adopted these platforms to improve productivity, yet those same permissions allowed attackers to extract sensitive information without detection.
The vulnerability also revealed the limitations of conventional monitoring. Standard endpoint tools did not trigger alerts because the process ran inside normal AI operations. Data loss prevention systems struggled to detect exfiltration hidden in image requests. The attack demonstrated how AI-native threats can bypass controls designed for older security models.
Google’s Mitigation Efforts
Google worked with the reporting researchers to patch affected components. The company adjusted how Gemini Enterprise handles retrieved content and introduced additional safeguards for cross-service access. Updated controls reduce the chance that untrusted files can influence AI behavior. Google described the changes as both a direct fix and a broader step toward securing AI-integrated workflows.
Broader Implications for Enterprise Security
AI systems have become central to modern productivity. They handle sensitive files, integrate with communication tools, and automate routine tasks. This level of access creates new attack surfaces that traditional defensive models do not fully address.
Security teams now face threats that emerge from manipulated documents, invisible prompt injections, and AI behavior mismatch. The GeminiJack vulnerability shows that organisations must treat AI services as privileged systems. They require independent auditing, strict permissions management, and monitoring designed for AI-specific risks.
Final Thoughts
The GeminiJack vulnerability exposed how enterprise AI platforms can inadvertently enable high-impact data breaches. The incident highlighted an emerging challenge: attackers no longer need to compromise endpoints when they can influence AI systems with crafted content. Google addressed the flaw, yet the broader lesson remains. Organisations must secure AI processes with the same rigor applied to critical infrastructure, or similar incidents will continue to surface.