Google’s Gemini AI Vulnerable to ASCII Smuggling Attack, No Patch Planned
A critical ASCII smuggling vulnerability in Google’s Gemini AI allows hidden malicious commands in text, but Google has decided against patching it, raising corporate security concerns.
- • Gemini AI is vulnerable to an ASCII smuggling attack discovered by Viktor Markopoulos from FireTail.
- • The exploit allows hidden characters in text to alter AI behavior and potentially leak data.
- • Other AI models like ChatGPT and Claude successfully block similar attacks; Gemini does not.
- • Google classifies the issue as social engineering and has chosen not to patch it, causing criticism.
Key details
Google’s Gemini AI has been found vulnerable to a new type of security exploit known as the "ASCII smuggling" attack, raising concerns about its use in corporate environments. Discovered by cybersecurity researcher Viktor Markopoulos of FireTail, this attack exploits how Gemini processes Unicode characters, allowing hidden malicious instructions to be embedded within seemingly innocuous text inputs, such as calendar invites or emails. These hidden characters can manipulate Gemini's behavior in unintended ways, potentially leading to data breaches, forging identities, or poisoning training data without clear detection.
While Gemini is uniquely susceptible to this exploit compared to other AI models—OpenAI's ChatGPT and Anthropic's Claude were able to detect or block similar inputs—Google has elected not to issue a patch. The company classifies the issue as social engineering rather than a flaw in the AI’s system design, a stance that has generated criticism from security experts and the wider AI community. The vulnerability is particularly alarming for users of Google Workspace where Gemini is integrated, as it could expose sensitive information and cause misinformation through subtle behavioral changes.
Researchers demonstrated that attacks could be carried out through common text channels, raising fears of corporate data compromise without obvious traces. FireTail highlighted that other AI models like Elon Musk’s Grok also share similar vulnerabilities, while competitors have implemented stronger defenses. Experts recommend users disconnect Gemini from critical business systems until effective mitigation measures are developed, stressing the growing need for advanced anomaly detection and tighter input sanitization protocols in AI applications.
Google had previously addressed other security concerns linked to Gemini but maintains this specific vulnerability is not a system bug. This decision underscores the ongoing tensions in distinguishing genuine system vulnerabilities from exploitable behaviors classified under social engineering, complicating defense strategies for AI deployment in sensitive environments.
As of October 9, 2025, the issue remains unpatched, exposing organizations relying on Gemini to potential risks and highlighting the complexities of securing AI platforms against novel cyber threats.