Global Backlash Forces Elon Musk’s Grok to Stop Generating Undressed Images of Real People on X

X has introduced new safeguards on Grok after global backlash over AI-generated sexualised deepfakes, amid rising regulatory scrutiny.

Amid intensifying global concern over the rise of sexualised AI deepfakes, Elon Musk’s artificial intelligence chatbot Grok will no longer edit or generate images of real people in revealing clothing on X, the social media platform formerly known as Twitter.

The decision, confirmed by X on Wednesday, follows widespread outrage in countries including the UK and the US after Grok was found responding to user prompts by digitally undressing images of adults and, in some alleged instances, children.

In response, X and its parent company xAI have introduced new technological safeguards that apply to all users, including paid subscribers, while also restricting Grok’s image generation tools to X Premium accounts. Governments and regulators have stepped up scrutiny, with investigations underway in the UK and parts of the US and warnings that platforms must act swiftly to curb non-consensual and illegal AI-generated content.

Musk has denied knowledge of any underage nude images produced by Grok and reiterated that the chatbot is designed to refuse illegal requests and comply with local laws.

Global Outrage Forces X to Rein in Grok

The controversy erupted after journalists and users flagged that Grok, integrated directly into X, was complying with prompts to manipulate photographs of real people, effectively removing clothing and creating sexualised imagery.

This sparked global condemnation, particularly from child safety advocates and digital rights groups, who warned that such tools could normalise non-consensual exploitation and deepen harm caused by deepfake technology.

Responding to the backlash, X’s Safety team announced that it had “implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis.”

The company clarified that the restriction applies universally, including to paid subscribers, signalling a shift from earlier assumptions that premium access alone could curb misuse. X also reiterated its enforcement stance, stating that it takes action against illegal content, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending offending accounts, and cooperating with local governments and law enforcement agencies.

“Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the platform warned, underscoring that responsibility lies both with users and the company hosting the technology.

Regulatory Pressure and AI Safeguards

The move by X comes against the backdrop of mounting regulatory pressure worldwide. In the UK, communications regulator Ofcom has reportedly begun examining whether X’s safeguards around AI-generated imagery comply with existing and forthcoming online safety laws, particularly as new legislation criminalising non-consensual intimate deepfakes is expected to come into force.

In the United States, California’s Attorney General has sought information from platforms deploying generative AI tools, amid concerns that such technologies could facilitate sexual exploitation, harassment, and the creation of abusive content involving minors.

Within the last week, xAI restricted Grok’s image generation features on X to paying X Premium subscribers, citing the need for greater accountability. However, reports, including coverage by CNN, noted that even for premium users, Grok’s responses to image generation requests had changed noticeably in recent days, suggesting that internal moderation rules were being tightened in real time.

Critics, however, argue that partial restrictions and paywalls are insufficient if loopholes remain through standalone apps or alternative prompts. Some countries in Southeast Asia have reportedly gone as far as blocking access to Grok altogether, reflecting the uneven global response to AI governance and the growing demand for stronger, enforceable safeguards.

The Logical Indian’s Perspective

The Grok episode is a stark reminder that technological progress without ethical foresight can quickly spiral into social harm. While X’s decision to restrict sexualised image editing is a necessary corrective step, it also exposes a recurring pattern in the tech world: safeguards are often implemented only after public outrage and regulatory threats.

At The Logical Indian, we believe that innovation must be guided by empathy, responsibility, and respect for human dignity, especially when tools have the power to violate consent and traumatise vulnerable individuals, including children. AI developers and platform owners must move beyond reactive damage control and embed safety, transparency, and accountability into the very design of their systems.
