Britain’s media regulator, Ofcom, has launched a formal investigation into Elon Musk’s social media platform X, formerly known as Twitter, amid serious concerns that its built-in AI chatbot, Grok, has been used to generate sexually explicit and potentially illegal deepfake images. Reports indicate that some of the content involves minors, prompting widespread outrage among politicians, child protection advocates, and global regulators.
Ofcom confirmed on Monday that the inquiry will assess whether X has failed to comply with its obligations under the UK's Online Safety Act, which requires digital platforms to protect users from harmful and illegal content. Investigators say the chatbot was used to produce sexualized images depicting real people undressed, potentially constituting intimate image abuse and child sexual abuse material (CSAM). Authorities emphasized that the material may breach both platform rules and UK law, under which generating and distributing non-consensual intimate AI images is now a criminal offense.
Government officials and child protection organizations have expressed deep concern over the findings. Technology Secretary Liz Kendall said the investigation must proceed swiftly, highlighting the urgent need to protect victims and prevent further abuse. Ofcom has warned that serious non-compliance could result in fines of up to £18 million or 10% of qualifying worldwide revenue, whichever is greater, as well as restrictions on advertising or payment services, or even a ban on access to the platform in the UK.
X and its AI developer, xAI, have defended themselves, stating that illegal content is removed and accounts violating the rules are suspended. Elon Musk dismissed the regulatory pressure as an "excuse for censorship" but insisted that individuals using Grok to create illegal content would face consequences. Earlier this week, X limited Grok's image-generation and editing features to paid subscribers, though critics say the move is insufficient because the functionality reportedly remains accessible via Grok's standalone app and website.
The controversy has drawn international attention. Indonesia and Malaysia temporarily blocked access to Grok due to concerns over sexualized AI-generated images, becoming the first countries to take regulatory action against the tool. Other governments, including France and India, have demanded explanations or considered measures to curb harmful AI content.
The case highlights the legal and ethical challenges of AI-generated imagery. Under UK law, producing non-consensual intimate images, even when digitally generated, is a criminal offense, and platforms must proactively reduce risks and swiftly remove illegal content. Ofcom's investigation is ongoing, with potential enforcement actions expected to be announced in the coming days.