Elon Musk is under pressure to overhaul X’s AI tool Grok after users exploited it to create dehumanising content, including images of women undressing and sexualised images of children.
Women who have come across sexualised images of themselves made by Grok have described it as “dehumanising”.
Ofcom, the UK’s independent regulator for the media and communications industries, has made “urgent contact” with Elon Musk’s company following reports of how its AI tool Grok has been misused.
A spokesperson for the regulator said it was investigating concerns that Grok has been producing “undressed images” of people.
B&T has seen multiple examples of Grok users prompting the chatbot to alter real images to make women appear in bikinis without their consent, as well as putting them in sexual situations.
Images of Catherine, Princess of Wales, were among many to have been digitally de-clothed by Grok users on X.
There are also several posts from accounts asking users to undress women on Grok.
On Sunday, X issued a warning to users not to use Grok to create illegal content including child sexual abuse material.
Elon Musk posted a warning that anyone asking Grok to generate illegal content would “suffer the same consequences” as if they uploaded it themselves. X’s policy prohibits “depicting likenesses of persons in a pornographic manner”.
The UK’s Internet Watch Foundation told the BBC it had received reports from the public relating to images generated by Grok on X. However, it said publicly that it had so far not seen images that would cross the UK’s legal threshold to be considered child sexual abuse imagery.
The European Commission, the EU’s executive arm, said on Monday it was “seriously looking into this matter”, and authorities in France, Malaysia and India were reportedly assessing the situation.
The issue illustrates the concerns of online safety advocates about the misuse of AI on social media platforms, and the lack of guardrails in place to protect women and children online.