Florida AG Launches OpenAI Investigation Amid New Details About What the FSU Shooter Asked ChatGPT Before the Attack

Image credit: @AGJamesUthmeier/X

In the span of five days, the walls closed in on OpenAI from four different directions, and not one of those moves has answered the question at the center of it all.

On Sunday, attorneys for the family of Robert Morales, one of two people killed in last year’s mass shooting at Florida State University, announced plans to file a wrongful death lawsuit against OpenAI. They allege the accused gunman, Phoenix Ikner, was in constant communication with ChatGPT before the attack and that the chatbot may have advised him on how to carry it out.

On Monday, newly released chat logs obtained from the State Attorney’s office revealed hundreds of exchanges between Ikner and ChatGPT — including questions asked hours before the shooting about firearms, mass shooters, and when the FSU student union would be the busiest.

On Tuesday, OpenAI released a child safety blueprint in partnership with the National Center for Missing & Exploited Children.

On Thursday, Florida Attorney General James Uthmeier announced a formal investigation into OpenAI, with subpoenas forthcoming. He tied ChatGPT to the FSU shooting, to child sex abuse material, to predatory behavior targeting minors, and to national security concerns about AI technology reaching the Chinese Communist Party — all under one investigation.


A law firm, a newsroom, OpenAI itself, and a state attorney general — all circling the same set of facts, none of them with a verdict.

What the Chat Logs Actually Show

Until this week, the public knew Ikner had used ChatGPT. What no one outside the legal process had seen was what he actually said to it.

Court records list more than 270 ChatGPT conversations as exhibits. According to chat logs obtained and reviewed by WCTV, most were unremarkable — homework help, relationship advice, everyday questions. The shift came on the morning of April 17, 2025.

Ikner told ChatGPT he didn’t feel respected. He expressed suicidal thoughts. The chatbot mentioned the 988 Suicide & Crisis Lifeline.

Then the questions changed. He asked how mass shootings are covered by the media. What happened to other mass shooters. Whether most school shooters are convicted. When the FSU student union would be the busiest. ChatGPT told him the union is busiest during lunch hour — between 11:30 a.m. and 1:30 p.m.

Three minutes before he opened fire, Ikner asked how to take the safety off a shotgun. ChatGPT answered with a detailed description.

The shooting began just before noon. Robert Morales, 57, an Aramark worker and father, was killed. Tiru Chabba, 45, a father from South Carolina, was killed. Six others were wounded.

The Gap Between Action and Answers

OpenAI said after the shooting that it identified a ChatGPT account believed to be associated with the suspect, proactively shared information with law enforcement, and cooperated with authorities. This week, its child safety blueprint outlined a framework for embedding safeguards directly into AI systems.

Uthmeier’s investigation folds the FSU shooting into a broader case that includes child exploitation and concerns about user data reaching foreign adversaries. He called on the legislature to expand his office’s authority over AI — during an election year in which he’s running for a full term. Separately, Florida Congressman Jimmy Patronis renewed his push for the PROTECT Act, a bill filed in January to repeal Section 230, after learning of the Ikner chat logs.

None of that changes what’s in the chat logs. But it does raise a question about what this investigation is built to do — whether it’s a legal mechanism that will produce findings and accountability, or a political frame broad enough that no one can push back on any single part without appearing to defend all of it.

The Morales family’s lawsuit hasn’t been filed yet. Ikner’s trial is set for October. No court has determined whether ChatGPT bears legal responsibility.

What a Chatbot Is Supposed to Do

Image credit: Jernej Furman, from Wikimedia Commons, licensed under Creative Commons Attribution 2.0 Generic (CC BY 2.0)

The chat logs don’t show ChatGPT telling Phoenix Ikner to walk into the student union. They show something that may be harder to regulate and harder to live with — a machine that processed a sequence of questions about dying, about weapons, about where to find the most people, and treated each one as if it existed alone.

Robert Morales went to work that morning. Tiru Chabba was visiting a campus. The answer to a question about the busiest time at the student union had already been given before either of them arrived.

A year later, four institutions moved in a single week. None of them has explained what a chatbot is supposed to do when the conversation turns. The families of the dead are still waiting for someone who can.