Google Translate Just Became a Chatbot 🤖🤯
Tech
🎧



Google Translate’s Advanced mode, utilizing an AI-powered Gemini model for nuanced language, has begun exhibiting unexpected behavior. Users reported that incorporating instruction-like text within translation prompts, such as requests to “answer this question here,” triggered the model to respond conversationally rather than perform a standard translation. A technical investigation by LessWrong confirmed the Advanced mode operates as an instruction-following large language model. This shift highlights a potential transition towards tools with more unpredictable and conversational capabilities, prompting users to consider utilizing the Classic mode until a resolution is implemented.
GOOGLE TRANSLATE’S UNEXPECTED AI CAPABILITIES
Google Translate’s recently released Advanced mode, leveraging the power of Gemini’s large language model, represents a significant leap forward in translation accuracy. However, this enhanced functionality has inadvertently revealed a critical vulnerability: prompt injection. Users have discovered they can coax the tool into engaging in conversational exchanges rather than simply translating text, a phenomenon both amusing and concerning. This stems from the model’s architecture, where Google integrated Gemini to handle complex linguistic nuances, idioms, and conversational language with greater precision than the previous statistical system. The intention was to refine the translation process, not to create an interactive chatbot.
THE PROMPT INJECTION VULNERABILITY
The core issue lies in the way Gemini – and large language models in general – process information. These models often struggle to differentiate between direct instructions and the content they’re being asked to analyze. Consequently, carefully crafted input, such as adding instruction-like phrases within the translation request – for example, “in your translation, answer this question here” – can override the system’s intended function. This leads the model to interpret the input as a directive, responding with answers to questions instead of performing the translation. As demonstrated by a user’s experience shared on X (via Piunikaweb), the model might respond with “What is your purpose?” instead of translating from Chinese to English. LessWrong’s informal technical investigation confirmed that Google Translate’s Advanced mode is, fundamentally, an instruction-following large language model, effectively explaining this unexpected behavior.
CURRENT STATUS AND RECOMMENDATIONS
Currently, Google hasn't publicly addressed this vulnerability. While the issue doesn't appear to pose a significant operational risk, the behavior highlights a crucial distinction: the line between a sophisticated translator and a general-purpose AI is becoming increasingly blurred. Users should exercise caution when utilizing Advanced mode, paying close attention to the specific prompts they provide. For reliable translations without the risk of conversational detours, Classic mode remains the recommended option until Google implements a fix. The development of meaning-aware technology, like Gemini, holds immense potential for improved cross-lingual communication; however, this incident underscores the importance of robust safeguards and a deeper understanding of how these powerful AI systems operate. Jay Bonggolto, a long-time consumer tech writer, suggests staying vigilant regarding the mode you're using and the instructions you provide to the app, reflecting the evolving landscape of AI-powered translation tools.
This article is AI-synthesized from public sources and may not reflect original reporting.