Microblogging site X has accepted its mistake and assured that it will comply with Indian laws after the IT Ministry warned the Elon Musk-led social media platform over the Grok AI obscene content issue, government sources said on Sunday.
Around 3,500 pieces of content have been blocked, and over 600 accounts deleted, according to sources.
X has acknowledged the lapse and said it will comply with Indian laws, sources said, adding that the platform will not allow obscene imagery in the future.
This comes after governments and regulators from Europe to Asia condemned, and in some cases opened inquiries into, sexually explicit content generated by Elon Musk's xAI chatbot Grok on X, putting pressure on the platform to show what it is doing to prevent and remove illegal content.
Earlier, the government had asked X for details, including the specific action taken on obscene content linked to Grok AI and measures to prevent a repeat in the future, after finding the platform's initial response inadequate.
In its response to the first notice, X had outlined the strict content takedown policies it abides by for misleading posts and non-consensual sexualised images.
While the reply was long and detailed, it had "missed" key information, including takedown details, the specific action taken on the Grok AI obscene content issue, and measures to prevent it in the future.
On January 2, the IT Ministry issued a stern warning to X over indecent and sexually-explicit content being generated through the misuse of AI-based services like 'Grok' and other tools.
Last Sunday, X's 'Safety' handle said the platform takes action against illegal content, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.
"Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," X had said, echoing Musk, who had earlier posted the same warning on X.
On Thursday, the European Commission extended a retention order sent to X last year, requiring the platform to preserve all internal documents and data related to Grok until the end of 2026, amid concern over Grok-generated sexualised "undressed" images.
Britain's communications regulator Ofcom said on Monday it had made "urgent contact" with X and xAI and would make a swift assessment of whether the service was meeting its legal duties to protect users under the UK's Online Safety Act framework.
In France, government ministers said on January 2 they had referred sexually explicit Grok-generated content circulating on X to prosecutors and also alerted French media regulator Arcom to check the platform's compliance with the European Union's Digital Services Act.
Germany's media minister Wolfram Weimer called on the European Commission on Tuesday to take legal steps, saying EU rules provided tools to tackle illegal content and alleging the problem risked turning into the "industrialisation of sexual harassment".
Italy's data protection authority warned on Thursday that using AI tools to create "undressed" deepfake imagery of real people without consent could amount to serious privacy violations and, in some cases, criminal offences.
Swedish political leaders on Thursday condemned Grok-generated sexualised "undressing" content after reports that imagery involving Sweden's deputy prime minister had been produced from a user prompt.
Malaysia's communications regulator MCMC said on January 3 it would summon X and open an investigation into alleged misuse of Grok to generate obscene or sexualised "undressing" content, warning it may involve offences under Section 233 of Malaysia's Communications and Multimedia Act 1998.
Australia's online-safety regulator eSafety said on Wednesday it was investigating Grok-generated "digitally undressed" sexualised deepfake images, assessing adult material under its image-based abuse scheme. It noted that the child-related examples it had reviewed so far did not meet the legal threshold for child sexual abuse material under Australian law.