[German]Tenable recently revealed something very unpleasant. Google's Gemini Suite contained a vulnerability dubbed "Gemini Trifecta," which was uncovered by Tenable security researchers. This marks a decisive turning point in AI security: cybercriminals could not only have attacked Gemini, but also exploited it for attacks, according to the report.
What Tenable sent me in a message the other day is quite something. According to their own statements, their security researchers discovered three vulnerabilities in Google's Gemini Suite and named them "Gemini Trifecta." These vulnerabilities have now been fixed, but they would have exposed users to significant privacy risks. Attackers could have manipulated Gemini's behavior and accessed sensitive data such as location information or stored user data ("Memories") without being noticed.
The Gemini Trifecta vulnerabilities
The "Gemini Trifecta" vulnerabilities affected three central components of the Gemini Suite, exposing users in different but equally dangerous ways:
- Gemini Cloud Assist: Attackers were able to inject manipulated logs. The system would then have unintentionally followed malicious instructions during user interaction.
- Gemini Search Personalization Model: Attackers could inject queries into a victim's browsing history. Gemini would then treat these as trusted context, which could have led to the leakage of sensitive information such as stored user data and location details.
- Gemini Browsing Tool: Attackers could trick Gemini into sending covert outgoing queries that could have sent sensitive user data directly to a server they controlled.
Taken together, these vulnerabilities opened the door to attackers on Gemini: they could have manipulated the AI's behavior and accessed sensitive data without those affected ever noticing, writes Tenable. The "Gemini Trifecta" showed that attackers did not need direct access, malware, or phishing emails: instead, Gemini itself could be exploited – and that has far-reaching consequences for users and companies that rely on AI-powered tools.
Tenable sees a core problem
According to Tenable Research, the core problem was that Gemini integrations did not clearly distinguish between legitimate user input and content injected by attackers. As a result, manipulated logs, fake search histories, or hidden web content could be treated as trustworthy and become gateways for standard features.
"Gemini's strength lies in putting logs, search queries, and browser activity into context. However, it is precisely this ability that becomes a risk when attackers manipulate the input," explains Liv Matan, Senior Security Researcher at Tenable.
"The 'Gemini Trifecta' illustrates how AI platforms can be manipulated unnoticed and data theft can be concealed – adding a whole new dimension to the security challenges facing businesses. Like any powerful technology, large language models (LLMs) such as Gemini offer enormous benefits, but they are also vulnerable to weaknesses. Security teams must take decisive action and close vulnerabilities before attackers can exploit them – in other words, take a proactive rather than reactive approach to protecting AI environments. It's not just about fixing vulnerabilities, but redefining AI security at a time when the platforms themselves can become gateways."
The potential impact
The potential impact of the "Gemini Trifecta" vulnerabilities is frightening to me and reinforces my warning against the use of AI models in companies until the risks have been clearly identified and eliminated. If attackers had succeeded in exploiting the "Gemini Trifecta" vulnerability, they would have been able to:
- Embed malicious instructions in logs or search histories without being noticed.
- Access sensitive information such as stored user data and location histories.
- Be able to misuse cloud integrations to access other cloud resources.
- Be able to get Gemini to forward user data to a server they control via the browsing tool.
It would have been like opening Pandora's box, and attackers would have been able to help themselves to the victim's knowledge.
Recommendations for security teams
Although Google has fixed all three vulnerabilities, Tenable recommends that security teams:
- Consider AI-powered features as active attack surfaces rather than passive tools.
- Regularly check logs, search histories, and integrations for manipulation.
- Monitor unusual tool executions or outgoing requests that could indicate data leakage.
- Test AI-powered services for resilience against prompt injection and proactively strengthen protective measures.
The complete Tenable report with details may be read here.