Lawsuit Claims AI Chatbot Played Role in Campus Shooting
The family of a victim killed in the 2025 Florida State University shooting has filed a federal lawsuit against OpenAI, alleging that the accused gunman relied on conversations with ChatGPT before carrying out the deadly attack.
According to court documents, the lawsuit was filed by the widow of Tiru Chabba, one of the two people killed during the shooting at the Tallahassee campus. The complaint claims suspected gunman Phoenix Ikner had repeated interactions with ChatGPT in the months and days leading up to the incident.
Attorneys in the case allege that the chatbot provided information related to firearms, mass shootings, and ways to gain media attention during violent attacks.
Questions Raised About AI Safety and Oversight
The lawsuit claims the chatbot failed to recognize warning signs that the user may have been planning violence. Lawyers allege the AI system responded to discussions about weapons, possible casualty numbers, and crowded campus locations instead of escalating concerns or blocking the interactions.
Investigators reportedly reviewed hundreds of exchanges connected to the suspect’s account. The filing argues that stronger safeguards and intervention systems could have prevented harmful conversations from continuing.
Florida officials have already launched investigations into the matter, with state authorities examining whether AI systems should face stricter oversight when users discuss violent acts or threats.
OpenAI has denied responsibility for the shooting, stating that ChatGPT only provided factual information available publicly online and did not encourage illegal activity.
Suspect Faces Murder Charges in Florida State Attack
Phoenix Ikner, a former Florida State University student, has pleaded not guilty to multiple murder and attempted murder charges tied to the campus shooting.
The April 2025 attack left two people dead and several others injured near the university's student union building, shocking the campus community.
The lawsuit adds to growing national debate over artificial intelligence safety, online accountability, and the responsibility technology companies may hold when digital tools are allegedly misused in violent crimes.