X Blocks Grok From Undressing Images of Real People Amid Global Backlash
Elon Musk’s AI chatbot Grok is facing sweeping restrictions after a wave of international criticism over its ability to generate and edit sexualized images of real people. In a statement posted on X, the platform confirmed that Grok will no longer be able to edit photos to depict real individuals in revealing clothing in jurisdictions where such content is illegal, marking a significant policy reversal.
The move follows mounting pressure from governments, regulators, and advocacy groups after Grok’s image-editing tools were used to create nonconsensual sexually explicit AI imagery, including content involving women and minors. Several countries have already taken direct action, including outright bans.
What X Has Changed—and Why
According to xAI, the company behind Grok, new geoblocking measures have been implemented to prevent users from editing images of real people into revealing attire such as bikinis, underwear, or similar clothing where local laws prohibit it. The restriction applies to all users, including paid subscribers.
In addition, xAI reiterated that image creation and editing features will be limited to paid users, arguing that this adds accountability and helps identify individuals attempting to abuse the platform. The company said the measures are designed to ensure compliance with local laws while curbing misuse of its AI tools.
However, early testing by journalists suggested enforcement gaps remained, raising questions about how quickly and effectively the new rules will be implemented.
Governments Step In Worldwide
The backlash has been swift and global. Malaysia and Indonesia became the first countries to block Grok entirely, while authorities in the Philippines said similar action could follow. Regulators in the UK and European Union are investigating potential violations of online safety laws, and France, India, and Brazil have issued warnings demanding stricter controls.
In the United States, California Attorney General Rob Bonta announced an investigation into Grok’s role in the spread of nonconsensual sexual images, calling harassment enabled by AI tools “unacceptable.” California already has laws shielding minors from AI-generated sexual content and requiring transparency from AI chatbots.
UK Technology Secretary Liz Kendall welcomed X’s policy shift but said investigations would continue, stressing that platforms must ensure services are “safe and age-appropriate.”
Critics Say the Damage Is Already Done
Campaigners and victims argue that the changes came too late. Journalists and academics whose images were manipulated using Grok have described feeling humiliated, unsafe, and violated. Advocacy groups say the episode highlights how quickly generative AI can be weaponized when safeguards lag behind innovation.
Experts also question how Grok will reliably determine whether an image depicts a real person, and whether users could simply bypass the geoblocks with VPNs, which are commonly used to evade regional restrictions.
A Turning Point for AI Governance?
The Grok controversy underscores a growing global reckoning over AI-generated deepfakes, consent, and platform responsibility. While X has framed the new measures as a balance between free expression and legal compliance, regulators have made clear that self-regulation may no longer be enough.
As investigations continue, Grok’s rollback could become a defining moment in how governments force tech companies to confront the darker consequences of generative AI—before innovation outpaces accountability once again.