The European Commission has launched a formal investigation into Grok, X's built-in AI chatbot, after its image-editing function was widely used to virtually undress pictures of real women and underage girls without their consent.
As first reported by Germany's Handelsblatt, the probe will examine whether the social media platform did enough to mitigate the risk of such images being created and disseminated.
If X is found to have breached EU online platform rules under the bloc's Digital Services Act (DSA), the Commission could fine the company up to 6% of its global annual turnover.
Last December, the European Commission fined Elon Musk’s social network €120 million over its account verification tick marks and advertising practices.
The concerns emerged last summer after Grok was updated with a paid feature known as "Spicy Mode", which allowed users to prompt it to create explicit content.
Elon Musk mocked the ensuing outcry in a post on his X account.
Earlier this month, as worldwide outrage at the feature grew, a Commission spokesperson condemned this functionality in the strongest terms.
“This is not 'spicy'," they said. "This is illegal. This is appalling. This is disgusting. This has no place in Europe.”
In response to the public anger and alarm, X eventually implemented measures to prevent Grok from editing images of real people to place them in revealing clothing or sexual situations, with the restrictions applying to all users, including paid subscribers.
X also said that sexualised Grok-altered images of children had been removed from the platform and that the users involved in creating them had been banned.
“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” the X Safety account posted.
Musk's history of European legislation breaches
This is not the first time Grok has been under scrutiny for suspected breaches of European law. Last November, the AI chatbot generated Holocaust denial content.
Investigations into the platform’s chatbot are currently ongoing in France, the United Kingdom and Germany, as well as in Australia. Grok has been banned altogether in Indonesia and Malaysia.
The Commission said it had sent a request for information under the DSA, and that it is still analysing the response.
Separately, the Commission has extended the formal proceedings it opened against X in December 2023. The extended proceedings will establish whether X has properly assessed and mitigated all systemic risks, as defined in the DSA, associated with its recommender systems, including the impact of its recently announced switch to a Grok-based recommender system.
What this means
If proven, these failures would constitute infringements of Articles 34(1) and (2), 35(1) and 42(2) of the DSA. The Commission will now carry out an in-depth investigation as a matter of priority. The opening of formal proceedings does not prejudge its outcome.
In preparing for this investigation, the Commission has closely collaborated with Coimisiún na Meán, the Irish Digital Services Coordinator. Further, Coimisiún na Meán will be associated with this investigation, pursuant to Article 66(3), as the national Digital Services Coordinator in X's country of establishment in the EU.
Next steps
The Commission will continue to gather evidence, for example by sending additional requests for information, conducting interviews or inspections, and may impose interim measures in the absence of meaningful adjustments to the X service.
The opening of formal proceedings empowers the Commission to take further enforcement steps, such as adopting a non-compliance decision. The Commission is also empowered to accept commitments made by X to remedy the matters subject to the proceedings.
The opening of formal proceedings relieves Digital Services Coordinators, or any other competent authority of EU Member States, of their powers to supervise and enforce the DSA in relation to the suspected infringements.
Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy, condemned the practice in stark terms:
"Sexual deepfakes of women and children are a violent, unacceptable form of degradation. With this investigation, we will determine whether X has met its legal obligations under the DSA, or whether it treated rights of European citizens - including those of women and children - as collateral damage of its service."
The Commission is expected to make further announcements on the ongoing investigation next month.
Background
Grok is an artificial intelligence (AI) tool developed by the provider of X. Since 2024, X has deployed Grok in its platform in various ways. These deployments, for example, enable users to generate text and images and to provide contextual information to users' posts.
As a designated very large online platform (VLOP) under the DSA, X has the obligation to assess and mitigate any potential systemic risks related to its services in the EU.
These risks include the spread of illegal content and potential threats to fundamental rights, including of minors, posed by its platform and features.

This investigation complements and extends the investigation launched on 18 December 2023, which focuses on the functioning of X's notice and action mechanism, its mitigation measures against illegal content, such as terrorist material, in the EU, and risks associated with its recommender systems.
These proceedings also covered the use of deceptive design, the lack of advertising transparency and insufficient data access for researchers, for which the Commission adopted a non-compliance decision on 5 December 2025, fining X €120 million. On 19 September, the Commission sent X a request for information related to Grok, including questions about the antisemitic content generated by @grok in mid-2025.
Help and support is available at national level for individuals who have been negatively affected by AI-generated images, including child sexual abuse material or non-consensual intimate images.