The European Union may soon ban nudify apps after Elon Musk’s chatbot Grok emerged as a prime example of the dangers of an AI platform failing to block sexualized images of real people, including children.
In a joint press release, the European Parliament’s Internal Market and Civil Liberties committees confirmed that lawmakers voted 101–9 (with 8 abstentions) to simplify the Artificial Intelligence Act and “propose bans on AI ‘nudifier’ systems.”
The vote came after the European Commission concluded earlier this year that the AI Act does not prohibit “AI systems that generate child sexual abuse material (CSAM) or sexually explicit deepfake nudes.” At that time, the Commission signaled that Parliament members were already proposing ways to amend the law to strengthen protections against such harmful content.
If the amendment passes, which seems likely, it would foil Elon Musk’s plan to blame users for harmful outputs. Earlier this year, xAI declined to introduce safeguards to block such outputs, instead vowing to suspend users and hold them legally accountable for any CSAM or non-consensual intimate imagery they generate. Rather than blocking the feature, xAI paywalled it, limiting it to subscribers who could reportedly continue generating explicit content without the consent of the real people whose images were fed into Grok.
In the US, xAI has seemingly faced few consequences for Grok’s outputs, but had the Take It Down Act been in play—it takes effect in May—the company could have risked billions in fines. It’s possible that Musk’s tactic of paywalling the feature and blocking Grok from spouting harmful outputs in response to prompts on X was intended to mitigate some of that risk ahead of that law’s enforcement.
But if the EU bans nudify apps, perhaps as early as August, Musk would finally be forced to intervene, fine-tuning Grok to be less “spicy” than he likely wants or else risk violating the AI Act. With possible fines of up to 7 percent of total worldwide annual turnover, noncompliance could cost xAI dearly at a time when competing with its biggest rivals in the AI race demands substantial investments.
Why officials want to go after platforms, not users
Officials “want to introduce a new ban on so-called ‘nudifier’ systems that use AI to create or manipulate images that are sexually explicit or intimate and resemble an identifiable real person without that person’s consent,” the press release explained. However, “the ban would not apply to AI systems with effective safety measures preventing users from creating such images,” officials said.
As Bloomberg noted, the ban would radically shift the EU’s approach to regulating explicit deepfakes, moving beyond just prosecuting users to also punishing platforms. The Grok scandal “epitomized” why such a regulatory shift was needed, Bloomberg reported, noting that “this amendment is the first” EU policy “to specifically target AI platforms” that produce and allow sharing of “sexual material without the subject’s consent.”
While EU officials did not directly mention Grok in the press release, regulators had already been probing the AI system while pondering the implications of xAI’s controversy for other, less visible nudify apps. Submitting questions to the European Commission earlier this year, lawmakers warned:
Recent shocking reports of AI-powered nudity applications, such as Grok on X, but also other tools that are freely available online, highlight an increase in AI-driven tools that allow users to generate manipulated intimate images of individuals without their consent, facilitating gender-based cyberviolence and the creation of child sexual abuse material.
“These systems should be banned from the EU market,” lawmakers urged, particularly since “individual perpetrators”—who “can often be punished under national criminal law”—“are often hard to find.” A more proactive plan, lawmakers suggested, would be to “prevent widespread image-based sexual violence from the outset.”
With apparent backing from Parliament members, the amendment’s likely passage is sure to frustrate Musk, who is also facing legal challenges in the US seeking injunctions against Grok’s nudify outputs. In January, a mother of one of Musk’s children, Ashley St. Clair, became one of the first victims to file a lawsuit. And more recently, three young girls in Tennessee filed a proposed class action representing all children harmed by Grok’s alleged CSAM outputs.
In the EU, similar public pressure is mounting for regulators to intervene, as xAI seems unwilling to prevent Grok from undressing real people. A civil liberties committee member, Michael McNamara, said in the press release that he believes the proposal to ban nudify apps “is something that our citizens expect.”