News & Analysis
OpenAI Sued Over FSU Mass Shooting: Inside the Chabba Lawsuit
Published May 12, 2026
The family of Tiru Chabba, killed in the April 2025 mass shooting at Florida State University, filed a federal complaint on May 10, 2026 against OpenAI and the alleged shooter. The 76-page pleading alleges ChatGPT helped Phoenix Ikner plan the attack over a period of months and pleads eight counts — including a negligent entrustment theory that, if it survives, would be the first significant application of that doctrine to AI access.
For the central hub covering all current OpenAI litigation, see OpenAI Lawsuits. For the broader doctrinal framework these AI-chatbot cases raise — the product-versus-content framing question, Section 230, the duty-to-warn analysis under Tarasoff, and the lessons from Garcia v. Character Technologies — see the companion article on the broader OpenAI school shooting lawsuits. For the parallel California wrongful death suit alleging ChatGPT gave fatal drug interaction advice, see the Scott overdose lawsuit. This piece focuses on the Florida complaint specifically.
This article is general educational commentary, not legal advice. It does not evaluate the merits of the lawsuit, predict outcomes, or create an attorney-client relationship. The complaint reflects plaintiffs' allegations; nothing has been proven and the case is at an early stage.
What happened on April 17, 2025
Phoenix Ikner, a Florida State University student, opened fire near and inside the FSU student union shortly before noon on April 17, 2025. Two people were killed: Tiru Chabba, a 45-year-old regional vice president for Aramark who was on campus as a business invitee, and Robert Morales, the FSU campus dining director. Five others were seriously wounded. Law enforcement officers wounded Ikner and took him into custody shortly after he exited the building.
Chabba left behind his wife, Vandana Joshi, and two minor children. Joshi, as personal representative of his estate, is the named plaintiff, suing on behalf of herself as surviving spouse, the two minor children, and the estate.
The lawsuit at a glance
The complaint, captioned Joshi v. OpenAI Foundation et al., Case No. 4:26-cv-00222-MW-MJF, was filed in the U.S. District Court for the Northern District of Florida, Tallahassee Division. The OpenAI defendants include the nonprofit OpenAI Foundation (the parent entity, formerly OpenAI, Inc.) and ten for-profit and management subsidiaries. Phoenix Ikner is also named as an individual defendant. Notably, Microsoft is not a named defendant despite being identified in the pleading as exerting pressure on OpenAI to ship products faster.
Plaintiffs' counsel are Osborne Francis & Pettis in Florida and the Strom Law Firm in South Carolina, with civil rights attorney Bakari Sellers on the trial team. Federal jurisdiction is asserted under diversity (the Chabba family is in South Carolina; OpenAI defendants are predominantly in California and Delaware) with supplemental jurisdiction over the state-law claims.
The eight counts
- Count I: Negligence against the OpenAI defendants.
- Count II: Gross negligence against the OpenAI defendants, with punitive damages sought.
- Count III: Strict products liability — defective design.
- Count IV: Products liability — negligent design.
- Count V: Strict products liability — failure to warn.
- Count VI: Negligent entrustment against the OpenAI defendants.
- Count VII: Battery against Phoenix Ikner only.
- Count VIII: Wrongful death under Florida Statutes §§ 768.16–768.26 against all defendants.
The complaint seeks compensatory damages, punitive damages on the gross negligence count, litigation costs, and a jury trial.
What the complaint alleges about the chat history
The factual core of the complaint is the chat history between Ikner and ChatGPT, which the pleading describes as having been obtained by Florida law enforcement after the shooting. The allegations are unusually specific and detailed. None of them has been proven.
According to the complaint, Ikner uploaded photos of his stepmother's firearms, and ChatGPT identified them as a Glock 9mm handgun and a Remington 12-gauge shotgun and explained how to operate them, including that the Glock had no safety. Minutes before the attack, while in the FSU parking garage, Ikner allegedly used ChatGPT to learn how to load and operate the shotgun. The shotgun then failed to discharge during the attack, and Ikner returned to his vehicle for the Glock, which he used to kill Chabba and Morales and wound others.
The complaint further alleges that ChatGPT discussed prior mass shootings with Ikner, including Columbine, Virginia Tech, the Covenant School shooting, and a prior FSU shooting, and at one point told him the Columbine shooters "were not terrorists, per se." It alleges ChatGPT gave Ikner a detailed breakdown of casualty thresholds that typically trigger national media coverage, and that, in response to Ikner's question about peak hours at the FSU student union, the chatbot identified weekday 11:30 a.m. to 1:30 p.m. as the busiest period — which is when the attack occurred.
The pleading also alleges Ikner repeatedly raised suicide with the chatbot and that ChatGPT provided the standard suicide-hotline referral only twice in response to numerous such queries. Plaintiffs use these allegations to support both the products-liability counts (the chatbot didn't behave as a reasonable user would expect) and the negligent entrustment count (the patterns should have triggered access restrictions).
Negligent entrustment is the doctrinally novel claim
Negligent entrustment, as a tort, traditionally requires that an owner give control of a dangerous instrumentality — usually a vehicle or firearm — to someone the owner knew or should have known was unfit to use it safely. The classic cases involve a car owner who lends a car to an intoxicated friend, or a parent who gives a firearm to a child known to be reckless.
The Chabba complaint applies that doctrine to ChatGPT account access. The argument runs: OpenAI controlled access to ChatGPT, OpenAI's safety operations should have detected Ikner's escalating use patterns from the chat record itself, and continued provision of access in the face of those patterns constituted negligent entrustment.
As a doctrinal matter, this is a stretch. Negligent entrustment cases typically involve a discrete transfer of a tangible item to a specific individual the entrustor knows. Account access to a mass-market software product is structurally different — OpenAI didn't hand Ikner anything; he signed up. But the theory tracks the same underlying point that animates the account-enforcement allegations in the Canadian-shooting cases filed against OpenAI last month: that OpenAI controlled access, had information about how the system was being used, and didn't restrict access when the patterns warranted restriction.
Whether courts extend negligent entrustment to AI access is a genuinely open question. If even one court does, the doctrine becomes a meaningful new front in AI liability, because it doesn't require the plaintiff to win the harder fight about whether the chatbot's output is a "product" in the traditional sense. It only requires that access to the chatbot be treated as something the provider can be held accountable for granting.
Why Section 230 matters here, and how the complaint preempts it
Section 230 of the Communications Decency Act is the federal law that has, for nearly thirty years, shielded internet companies from being sued over what their users post. It's the reason Facebook isn't liable when a user defames someone in a comment, and the reason Yelp isn't liable when a reviewer makes a false claim about a restaurant. The statute treats the platform as a host of third-party speech, not the speaker, and that distinction has been the single most important legal protection the modern internet relies on.
OpenAI will almost certainly invoke Section 230 as a defense. The argument writes itself: ChatGPT is a platform, the user typed the prompt, and any harmful output emerged in response to what the user asked for. Under that framing, OpenAI says it shouldn't be treated as the "speaker" of ChatGPT's responses, and Section 230 immunity should dispose of the case.
The Chabba complaint heads that argument off in paragraphs 56 through 59. Plaintiffs argue Section 230 doesn't fit ChatGPT because OpenAI isn't a passive host of someone else's content the way Facebook or Yelp is. OpenAI built the model, chose the training data, designed the responses the model produces, and is therefore an "information content provider" itself — a category the statute specifically does not protect. If the court accepts that framing, Section 230 falls away and the case proceeds on the merits. If the court rejects it, the case likely ends at the motion-to-dismiss stage.
That's why this is the first real fight, and why plaintiffs put the argument in the complaint instead of waiting to brief it later. The Section 230 question isn't a procedural sidebar — it's the question that decides whether AI companies can be sued at all for what their chatbots produce. There's some early support for the plaintiffs' position. In Garcia v. Character Technologies, a federal court in Florida declined to dismiss similar claims against Character.AI on Section 230 grounds, treating the chatbot's output as something other than third-party content. The Northern District of Florida isn't bound by that decision, but it will be cited heavily.
Bottom line for the reader: if Section 230 protects ChatGPT the way it protects Facebook, OpenAI almost certainly wins this case early. If it doesn't — if courts treat chatbot output as the company's own product rather than user-generated content — the door opens for a whole category of AI lawsuits to move forward. The Chabba case is one of the first places that question gets asked in court.
Microsoft is not a defendant, but is named throughout
The complaint repeatedly identifies Microsoft as exerting pressure on OpenAI to release products faster, citing reporting that the head of Microsoft's AI division yelled at an OpenAI employee during a video conference about delivery timelines. Microsoft has invested roughly $13 billion in OpenAI and holds approximately 27% of the public benefit corporation, valued at around $135 billion.
Microsoft's absence from the defendant list is a strategic choice. Adding Microsoft would complicate diversity jurisdiction (Microsoft is headquartered in Washington State but operates nationally), expand discovery, and invite a wave of motions that could delay early proceedings. Keeping Microsoft in the narrative as a non-party bad actor whose pressure is part of the story gives plaintiffs a useful theme for the jury without the procedural baggage of an added defendant. Whether Microsoft gets added later as discovery develops is worth watching.
The Florida criminal investigation tracks alongside
Florida Attorney General James Uthmeier had previously announced a criminal investigation into OpenAI and ChatGPT related to the same shooting. The complaint quotes Uthmeier as stating that "if ChatGPT were a person, it would be facing charges for murder." Criminal investigations operate under different standards from civil claims and don't establish wrongdoing, but parallel criminal and civil proceedings can affect discovery practice and create pressure to coordinate disclosures, which can influence settlement dynamics.
OpenAI's response
Company spokesperson Drew Pusateri called the FSU shooting a tragedy and said ChatGPT is not responsible. OpenAI has said it trains its models to refuse requests that could meaningfully enable violence and that it notifies law enforcement when conversations suggest an imminent and credible risk of harm to others, with mental health experts helping assess borderline cases. Those statements preview OpenAI's likely defense posture: that its safety operations meet a reasonable standard of care, that any specific outputs were misuse of a product designed to refuse such requests, and that no chatbot output proximately caused the shooting.
Whether the discovery record bears that out is the contested question. The chat logs Florida law enforcement reportedly obtained will be central to that fight. If the chats are as the complaint describes, OpenAI's "the model refuses such requests" defense becomes harder. If the chats are more ambiguous or selectively excerpted, the defense holds more firmly.
What to watch next
The next significant filings will be the OpenAI defendants' responses, almost certainly motions to dismiss raising Section 230 immunity, First Amendment protections, lack of duty, and arguments that ChatGPT output isn't a "product" under Florida products liability law. The negligent entrustment count will likely draw a separate motion to dismiss arguing the doctrine doesn't extend to mass-market software access.
Three things to watch beyond the motion practice. First, whether the chat logs become public through court filings or discovery, and what they show. Second, whether plaintiffs amend to add Microsoft once early discovery develops. Third, whether other families of FSU shooting victims or of victims in other AI-adjacent harm cases file similar complaints, and whether plaintiffs' counsel begin coordinating across jurisdictions.
Bottom line
The Chabba complaint is structurally different from the Canadian-shooting lawsuits in ways that matter for how the case will likely unfold. It leans less on specific internal-knowledge allegations and more on system-wide negligence framed through products liability and the novel negligent entrustment theory. The case will be decided, if it isn't settled, on whether courts are willing to extend traditional tort doctrines to a software product whose outputs are generated rather than hosted — the same framing question every AI liability case now turns on.
Sources and further reading
- Lawsuit Informer: OpenAI Lawsuits — ChatGPT Wrongful Death, Self-Harm, and Mass Shooting Claims
- Joshi v. OpenAI Foundation et al. (Complaint, N.D. Fla., May 10, 2026)
- Reuters: Family of Florida mass shooting victim sues OpenAI in U.S. court
- WLRN: Family of FSU shooting victim files lawsuit naming gunman and OpenAI
- Florida Attorney General: Criminal investigation announcement involving OpenAI and ChatGPT
- Lawsuit Informer: OpenAI Faces School Shooting Lawsuits — AI Liability Questions
Affected by harm involving ChatGPT? If you or a family member experienced serious harm following sustained ChatGPT use, you can request a free case review through Lawsuit Center. Reviews are conducted by participating legal professionals and intake partners. Submitting a request does not create an attorney-client relationship.
Educational commentary only. Not legal advice. No attorney-client relationship is created.