Google’s Gemini “Nano Banana” image generator has become the latest viral sensation, and a source of alarm, after users reported unexpected personal details appearing in AI-edited portraits. The feature, which transforms selfies into stylised saree images set against vintage backdrops, first gained popularity for its creative flair. But an Instagram video showing a mole on the subject’s arm that was absent from her original photo has raised fresh privacy concerns.
The trend’s rapid spread led many users to ask how the AI model could reproduce physical attributes not visible in their uploaded pictures. In response, some commentators speculated that Gemini might draw on images stored in users’ broader Google accounts or connected services. Google has emphasised that Nano Banana processes only the image provided and does not tap personal photo libraries, but the incident has prompted calls for clearer data-usage disclosures.
Google has built Nano Banana into its Gemini app with safeguards: every AI-generated image carries an invisible SynthID watermark and metadata labelling it as machine-made. Still, experts caution that watermarks alone do not prevent misuse, and they reveal nothing about how a model might infer or reproduce personal details.
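For readers curious what the metadata label side of this looks like in practice, the sketch below scans a file for the IPTC digital-source-type value “trainedAlgorithmicMedia”, a standard marker used to flag synthetic images. Whether Gemini embeds exactly this string is an assumption here; and note that the SynthID watermark itself lives invisibly in the pixel data and can only be verified with Google’s own detector, not with a script like this.

```python
import sys

# IPTC "digital source type" value used across the industry to flag
# synthetic images. Assumption: Gemini's metadata label includes this
# string; Google's exact format is not documented in this article.
AI_LABEL = b"trainedAlgorithmicMedia"

def has_ai_label(path: str) -> bool:
    """Naively scan raw file bytes for a plain-text AI-generation marker.

    This does not detect SynthID, which is embedded invisibly in the
    pixels and requires Google's own detector; a copy of the image with
    its metadata stripped would also pass this check unflagged.
    """
    with open(path, "rb") as f:
        return AI_LABEL in f.read()

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        verdict = "AI label found" if has_ai_label(image_path) else "no label found"
        print(f"{image_path}: {verdict}")
```

The byte scan is deliberately crude, but its fragility illustrates the experts’ point: a plain metadata label is trivially lost the moment a platform re-encodes an image or strips its metadata.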
India’s cybersecurity authorities have issued generic advisories urging caution when uploading personal images to any online platform. They recommend reviewing app permissions, avoiding highly sensitive photos, and reading privacy policies carefully. For now, Nano Banana’s blend of creativity and controversy underscores the need for greater transparency around AI image-generation services and how they handle user data.