OpenAI now faces a criminal probe after ChatGPT advised a gunman ahead of a mass shooting at a university in Florida, where two people were killed and six were wounded last year.
In a press release, Florida Attorney General James Uthmeier confirmed that the investigation into OpenAI’s potential criminal liability was launched after reviewing shocking chat logs between ChatGPT and an account linked to the suspected gunman, Phoenix Ikner.
The 20-year-old Florida State University student is currently awaiting trial “on multiple charges of murder and attempted murder,” Politico reported. At a press conference, Uthmeier revealed that the logs showed that ChatGPT provided “significant advice” before Ikner allegedly “committed such heinous crimes.” The attorney general emphasized that under Florida’s aiding and abetting laws, “if ChatGPT were a person,” it too “would be facing charges for murder.”
For OpenAI, the probe will test whether the company can be held criminally liable for ChatGPT’s outputs. In a statement provided to Ars, OpenAI’s spokesperson, Kate Waters, said that the company expects the answer to that question will be no.
“Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” Waters said.
But Uthmeier is not so sure, and that, he says, is why Florida must urgently investigate. At the press conference, he noted that law enforcement is “venturing into uncharted territory” in attempting to monitor criminal activity connected to AI tools. Uthmeier said that mounting chatbot-linked public safety risks—including suicide, child sexual abuse materials, fraud, and murder—must be thoroughly probed so that the public definitively knows whether firms like OpenAI are liable for harms their products allegedly cause.
“Florida is leading the way in cracking down on AI’s use in criminal behavior,” Uthmeier said in the press release. “This criminal investigation will determine whether OpenAI bears criminal responsibility for ChatGPT’s actions in the shooting at Florida State University last year.”
ChatGPT accused of aiding and abetting
Uthmeier told press that ChatGPT advised the suspected shooter on what type of gun to use, what ammunition he should get, and whether a gun would be useful at short range. These facts would likely be easy to find online if a person were so motivated, but Uthmeier suggested that ChatGPT played a role that went deeper than an average web search might go.
Troublingly, the chatbot also advised him on what time of day the most people would be on campus and where exactly on campus he might find larger groups of students gathered. Those insights show how AI can almost instantly combine public data in fresh ways that could have harmful, wide-sweeping impacts—impacts that firms like OpenAI should be detecting and mitigating, Florida officials seem to think.
To protect the public, Uthmeier issued subpoenas requesting more information, including a wide range of OpenAI’s policies and internal training materials. Demanding transparency, he’s intent on figuring out how ChatGPT is designed to navigate harmful use cases. Specifically, he wants to know when OpenAI decides to report “possible past, present and future crimes” planned using ChatGPT, the press release said.
Uthmeier stressed that he understood that ChatGPT is not a person and cannot be charged with aiding and abetting. But he said that OpenAI could be liable if the company was aware that such “dangerous behavior might take place” and failed to intervene. That’s why he has asked for organization charts outlining key leadership. He’s determined to find out “who knew what, designed what, or should have known what” was happening when bad actors attempted to plan crimes like the FSU shooting using ChatGPT.
If Florida officials discover that OpenAI leadership knew of criminal activity and prioritized profits over public safety, “then people need to be held accountable,” Uthmeier said.
“I’m a big believer in limited government,” Uthmeier said. “I believe government should only interfere in business activities when you have significant harm to our people. This is that.”
OpenAI cooperating with officials
Waters told Ars that OpenAI continues to cooperate with the authorities who are investigating the mass shooting and early on “identified a ChatGPT account believed to be associated with the suspect and proactively shared this information with law enforcement.”
The company maintains that ChatGPT did nothing more than surface information already accessible online and, therefore, it cannot be blamed for assisting the suspected gunman. As OpenAI tells it, unlike in lawsuits accusing ChatGPT of encouraging suicide and murder, ChatGPT did not urge the gunman to take any illegal or harmful actions.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the Internet, and it did not encourage or promote illegal or harmful activity,” Waters said.
However, Uthmeier said at the press conference that OpenAI had committed to taking additional steps that could limit ChatGPT’s potential to be used to help plan a mass shooting.
“Now OpenAI has indicated that they believe improvements and changes need to be made,” Uthmeier said. “I hope they’re right. I hope they’re right. We cannot have AI bots that are advising people on how to kill others.”
Waters did not comment on any updates to ChatGPT since the shooting, instead seeming to emphasize that the gunman’s use of ChatGPT was not typical.
“ChatGPT is a general-purpose tool used by hundreds of millions of people every day for legitimate purposes,” Waters said. “We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”