AI and Deepfakes
‘Add Blood, Forced Smile’: How Grok’s Nudification Tool Went Viral
What began as a seemingly trivial image-editing trend quickly spiralled into one of the most disturbing AI scandals of recent years. Elon Musk’s chatbot Grok, integrated into X, has come under intense global scrutiny after its image-generation tool was used to create non-consensual, sexualised images of women — and, in some cases, children — at industrial scale.
The trend, dubbed “put her in a bikini,” quietly emerged in late 2025 before exploding in early 2026. Within days, hundreds of thousands of requests were being made to Grok to digitally undress women from fully clothed photographs. These altered images were then posted publicly on X, making them instantly accessible to millions.
From “bikini edits” to graphic abuse
According to various reports, up to 6,000 bikini-related requests per hour were being made to the chatbot. What began as requests for swimwear escalated rapidly into demands for transparent clothing, sexualised poses, physical injuries, racial degradation, and graphic violence.
Women reported Grok complying instantly with prompts such as adding bruises, blood, restraints, and forced expressions. Some users went further, requesting explicit alterations of images involving minors — material that could be categorised as child sexual abuse imagery — which remained visible on the platform for days.
For many victims, speaking out only worsened the abuse. Women who reshared altered images to raise awareness found themselves targeted with increasingly extreme and humiliating content, highlighting how AI-powered harassment can scale faster than any traditional moderation system.
A slow response and mounting political backlash
Despite widespread outrage, it took more than a week for X to implement meaningful restrictions. By the time Grok’s public image-generation features were limited to paying users, thousands of degrading images had already flooded the platform. Even then, critics noted that the standalone Grok app continued to generate similar content.
The controversy has since spilled into the political arena. Regulators in the UK, EU, India, and the US issued urgent demands for action, while Britain’s media regulator Ofcom launched a formal investigation. Several UK MPs announced they were quitting X altogether, citing misogyny, AI-enabled sexual abuse, and weak safeguards.
A failure of guardrails — and governance
Reports suggest Musk personally ordered xAI to loosen safety restrictions on Grok last year, allegedly over concerns about “over-censoring.” Critics argue the episode exposes a broader failure by governments to regulate rapidly evolving AI tools — especially those capable of producing realistic, harmful content in seconds.
Unlike earlier deepfake technology, Grok AI’s nudification features required no specialist software or technical expertise, bringing abusive AI capabilities into the mainstream. As one victim told The Guardian, “The fact it is so easy shows these companies don’t care about the safety of women.”
The damage is already done
While X maintains that users generating illegal content will face suspension, many victims say enforcement came too late. The humiliation, emotional distress, and permanent digital footprint of the images cannot be undone.
As AI tools grow more powerful and accessible, the Grok scandal stands as a warning: without strong guardrails and real accountability, innovation can quickly become exploitation.