This week, Minnesota became the first state to pass a law banning nudification apps that make it easy to “undress” or sexualize images of real people.
Under the law, developers of websites, apps, software, or other services designed to “nudify” images risk extensive damages, including punitive damages, if a victim decides to sue. Their offending products could also be blocked in the state. Additionally, Minnesota’s attorney general could impose fines of up to $500,000 per fake AI nude flagged. Any fines collected would fund services for victims of “sexual assault, general crime, domestic violence, and child abuse,” the law stipulates.
On Wednesday, the Minnesota Senate voted unanimously, 65–0, to pass the bill. That vote came after the bill just as quickly passed the House last week, 19th News reported. Gov. Tim Walz is expected to sign the bill when it reaches his desk, and if he does, the state will start enforcing the ban this August.
Ars could not immediately reach Gov. Walz’s office for comment.
Minnesota man used one app to undress 80+ friends
Democratic Senator Erin Maye Quade introduced the bill in Minnesota after residents discovered that one man had nudified images of more than 80 women from his social circles. In a statement, she said that she looked forward to Walz signing the bill, which finally offers legal recourse to those victims, as well as others impacted by the mainstreaming of nudifying apps.
RAINN, the national nonprofit that runs the National Sexual Assault Hotline, also helped get Minnesota’s bill passed. To head off industry lobbying against it, RAINN consulted with tech companies when drafting the law, 19th News reported. That helped ensure there weren’t unintended impacts on popular commercial products, like Photoshop, that could be used to nudify an image. Because the state’s concern is how alarmingly easy undressing apps make it to harm a growing number of victims, mostly women and children, around the world, the law exempts products or services that require “the technical skill of a user to nudify an image or video.”
“Today, we led the nation protecting women, children, and everyone in public life from the harm caused by AI nudification technology,” Maye Quade said. “Companies that make this technology available for free online and in app stores will no longer be allowed to enable predators who abuse and victimize adults and children with the click of a button.”
Celebrating the law’s passage, Maye Quade thanked “the victim-survivors who made this bill a reality.”
“They have shared their story in committee, with reporters, and with law enforcement with dignity and courage,” she said. “Their power, brilliance, and advocacy is why we passed this bill today. They have had a singular focus on passing this legislation so that what happened to them does not happen to any Minnesotan, ever again.”
A lengthy CNBC report last September exposed how a group of Minnesota friends first learned that a mutual friend was creating fake nudes of dozens of women. The man apologized, but he seemingly did not help identify all the victims. There was no evidence he ever shared the images, so laws like the Take It Down Act did not apply, and the difficulty of proving the man’s ill intent made pursuing penalties under revenge porn laws unlikely, 19th News reported. Horrified that there was no way to ensure the images hadn’t left his computer and no path to stop the man from continuing to generate fake nudes, the women joined Maye Quade in advancing the law to shut down the problem at its source.
One of the Minnesota women targeted, Molly Kelley, told 19th News that she dedicated two years of her life to “finding a solution to mitigate the harm when it’s actually caused, which is at creation.”
“These images don’t exist without a third-party involvement and some sort of machine learning model,” Kelley said.
However, even if Walz signs the law, tensions remain that could frustrate enforcement.
Kelley told 19th News that she’s confident the law can overcome legal challenges, should any US firms sue to block it, but enforcing the law against app makers in other countries will likely be difficult, if not impossible, for a single state. Notably, the service used to attack the Minnesota women, DeepSwap, is operated overseas, at times claiming bases in Hong Kong and Dublin, CNBC reported. Those anticipated struggles to regulate foreign apps are why a federal ban would be preferable, 19th News reported.
Additionally, if Donald Trump revives an effort to deregulate the AI industry by blocking state laws like Minnesota’s from requiring safeguards, the law could become toothless, advocates fear.
Unchecked US tools like Grok risk penalties
If Walz puts the law on the books, some US firms could be forced to make changes or face penalties.
Even Elon Musk’s xAI could face fines if Minnesotans can prove Grok was used to undress images without consent.
Grok’s lack of safeguards to prevent outputs with non-consensual intimate imagery or alleged child sex abuse materials has drawn government probes and proposed class actions from women and children. In January, X Safety claimed that Grok was updated to stop undressing images, but NBC News reported last month that its review found “dozens of AI-generated sexual images and videos depicting real people posted publicly on Musk’s social media app, X, over the past month.”
Musk has denied that he has seen a single instance of Grok-generated CSAM. But researchers’ estimates that Grok was generating thousands of harmful images an hour appear to be increasingly backed by lawsuits from victims surfacing non-consensual images.
At the same time, authorities are getting closer to closing cases with arrests linked to Grok. A week after NBC News’ report, Nashville cops charged a man with “sexual exploitation of a minor after he was identified as the suspect who utilized Grok AI to generate images of child sex abuse.”
According to the press release, cops were tipped off after “multiple CyberTips to the National Center for Missing and Exploited Children regarding possession of child sex abuse material in an online account” that was linked to Grok. Importantly, the cops noted that Grok generated the harmful images from September 2025 through March 2026, continuing well after X claimed that the functionality had been removed.
Beyond Grok, researchers have flagged thousands of nudifying apps advertised on Meta platforms, prompting at least one lawsuit in which Meta claimed a Hong Kong-based app maker violated its advertiser terms, CNBC reported. Any US-based services openly advertising on Facebook or Instagram could become targets of Minnesota-based lawsuits if the law takes effect.
Similarly, nudifying apps that manage to skirt reviews and appear in Google and Apple app stores despite violating terms could draw legal attention.
xAI did not respond to Ars’ request for comment.