According to Wired, users of popular AI chatbots like Google’s Gemini and OpenAI’s ChatGPT are sharing techniques for creating nonconsensual “bikini deepfakes” from photos of fully clothed women. In one case on Reddit, a user posted a photo of a woman in a sari and asked for her clothes to be “removed” and replaced with a bikini, a request another user fulfilled. After Wired’s inquiry, Reddit’s safety team removed the post and banned the r/ChatGPTJailbreak subreddit, which had over 200,000 followers, under its “don’t break the site” rule. The report notes that Google released its Nano Banana Pro imaging model in November and OpenAI responded last week with ChatGPT Images, both of which excel at editing existing photos. In its own limited tests, Wired confirmed that basic English prompts were enough to turn images of clothed women into bikini deepfakes on these platforms.
The arms race is already lost
Here’s the uncomfortable truth this report lays bare: the technical guardrails on these models are fundamentally porous. That Wired’s own tests got these results with “basic prompts written in plain English” is terrifying. We’re not talking about elite hackers here. We’re talking about any motivated individual with a grudge, a creepy obsession, or just a desire to cause harm. The companies are locked in a constant game of whack-a-mole, releasing ever more powerful models like Nano Banana Pro and ChatGPT Images that inevitably ship with new, unforeseen vulnerabilities. It’s a losing battle. Every improvement in realism and editing capability cuts both ways, and right now it cuts deeper on the malicious side.
Consent is the casualty
And that’s the core of it, isn’t it? This isn’t just a tech bug. It’s a massive, scalable violation of consent. The report mentions millions of visits to “nudify” websites, which means we’re looking at an industrial-scale harassment tool. The impact on the women targeted, who could be anyone from a public figure to an ex-partner to a stranger whose photo was scraped online, is profound and deeply damaging. Reddit’s policy against “nonconsensual intimate media” is good, but it’s reactive. By the time a post is reported and taken down, the harm is done, and the image has likely been saved and shared elsewhere. The companies building these image tools are stuck in a genuine bind: how do you ship a useful, creative image editor while also making it impossible to weaponize?
Where do we go from here?
So what’s the solution? I think we have to stop pretending this is a purely technical problem that better algorithms will fix. Legal frameworks are scrambling to catch up, but they’re notoriously slow. Press scrutiny creates some accountability, as when Wired’s inquiry prompted Reddit to act. But the genie is out of the bottle. Ease of use is the killer feature here, and it’s also the catastrophic flaw. Maybe the answer lies less in the code and more in a cultural shift: making the creation and sharing of this material as socially reprehensible as it should be. But that’s a tall order in the darker corners of the internet. For now, it’s a grim new reality we all have to grapple with, one where a photo of you or someone you know is no longer just a photo. It’s potential raw material.
