OpenAI Lawsuits: ChatGPT Wrongful Death, Self-Harm, and Mass Shooting Claims
Last updated: May 12, 2026
Lawsuits filed against OpenAI in 2025 and 2026 are forcing courts to answer a question that, until recently, was almost entirely theoretical: when a generative AI system produces an output that contributes to serious harm, is that output a product, or is it content?
The answer determines whether AI companies can be sued at all for what their chatbots produce. If courts treat ChatGPT outputs as products, traditional tort doctrine applies and plaintiffs have a path. If courts treat them as content, Section 230 of the Communications Decency Act becomes the central battleground, and a 1996 statute written to protect early-internet bulletin boards becomes the backbone of liability rules for systems that did not exist when it was drafted.
This page provides an overview of the current OpenAI litigation docket, the doctrinal questions courts will have to resolve, the kinds of harm that have led to lawsuits, the records that may matter, and where the cases are headed next. For deeper analysis of individual cases, see the dedicated coverage of the OpenAI school shooting lawsuits, the FSU shooting lawsuit (Chabba), and the Scott overdose lawsuit.
This page provides general educational information only and does not constitute medical or legal advice. AI chatbot liability is a developing area of law, and pending case outcomes are unpredictable. Individual circumstances should be reviewed with qualified counsel.
- Roughly a dozen civil actions, clustered into four major case groups, have been filed against OpenAI in the U.S. since August 2025, alleging wrongful death, self-harm, and mass-casualty harm tied to ChatGPT use.
- Most filed cases plead strict products liability, negligence, and wrongful death theories rather than treating chatbot output as user-generated content.
- California has emerged as the dominant venue, with state and federal cases concentrated in San Francisco.
- The product-versus-content framing question is the first real fight in every case and will decide whether AI companies can be sued for chatbot output at all.
- Garcia v. Character Technologies signals at least one federal court is willing to entertain product liability claims against AI chatbot companies.
Recent News & Analysis
OpenAI Faces School Shooting Lawsuits: AI Liability Questions
Federal lawsuits filed against OpenAI after the Tumbler Ridge school shooting and the FSU mass shooting may turn on a single framing question: is generative AI output a product, or is it content? Attorney analysis of the Section 230, product liability, and duty-to-warn questions ahead.
Read the Full Analysis →
OpenAI Sued Over FSU Mass Shooting: Inside the Chabba Lawsuit
The Chabba family's federal complaint in the Northern District of Florida pleads eight counts, including a novel negligent entrustment theory applied to AI account access. A breakdown of the allegations, the chat history described in the complaint, and the doctrinal questions ahead.
Read the Full Analysis →
OpenAI Sued Over Teen's Kratom and Xanax Overdose Death: Inside the Scott Case
A Texas couple's California lawsuit alleges ChatGPT advised their 19-year-old son it was safe to combine kratom and Xanax, a recommendation the family says proved fatal. Attorney analysis of the first major AI-as-unauthorized-medical-advisor wrongful death case.
Read the Full Analysis →
Recent Developments
The OpenAI litigation landscape moved from theoretical to active in the second half of 2025 and accelerated in the first half of 2026. The items below capture the developments most often referenced in current reporting and legal analysis. This section is intended to be refreshed as the docket develops.
- August 26, 2025 — Raine v. OpenAI filed: Matthew and Maria Raine filed suit in San Francisco County Superior Court (Case No. CGC-25-628528) against OpenAI, CEO Sam Altman, and Doe employees and investors, alleging that ChatGPT contributed to the suicide of their 16-year-old son Adam Raine. The complaint pleaded seven causes of action anchored on California strict products liability. This is the foundational case in the AI wrongful death docket.
- October 2025 — Raine amended complaint filed: The Raines amended their complaint to allege intentional misconduct, citing internal OpenAI policy documents (the Model Spec) that they allege show the company made conscious decisions to remove longstanding safety protocols in the weeks and months before Adam's death. The amendment opened a pathway to punitive damages and survival action damages otherwise unavailable in a negligence-based wrongful death claim.
- February 10, 2026 — Tumbler Ridge school shooting: A school shooting in Tumbler Ridge, British Columbia, preceded the federal lawsuits filed roughly two and a half months later. Wall Street Journal reporting indicated OpenAI's internal safety team flagged the shooter's conversations and recommended notifying police months before the attack.
- April 29, 2026 — Tumbler Ridge lawsuits filed: Seven federal lawsuits were filed against OpenAI and Sam Altman in California federal court by families affected by the Tumbler Ridge shooting. Plaintiffs' counsel Jay Edelson has signaled the lawsuits are part of a broader effort. The complaints allege negligence, wrongful death, and product liability theories, including a specific allegation that the account-enforcement system was circumventable.
- May 10, 2026 — FSU shooting lawsuit (Joshi v. OpenAI Foundation): The family of Tiru Chabba, killed in the April 17, 2025 mass shooting at Florida State University, filed an eight-count federal complaint in the Northern District of Florida against OpenAI and alleged shooter Phoenix Ikner. The complaint includes a novel negligent entrustment count not present in the Tumbler Ridge filings.
- May 12, 2026 — Scott overdose lawsuit filed: Texas residents Leila Turner-Scott and Angus Scott filed suit in California state court alleging that ChatGPT advised their 19-year-old son Sam Nelson that it was safe to combine kratom and Xanax. The case is the first major AI wrongful death suit grounded specifically in a drug interaction advisory.
- Mid-to-late 2026 — Motion-to-dismiss decisions expected: Early motions to dismiss are expected to resolve Section 230, First Amendment, and product-versus-content classification questions across the active cases. These rulings will shape every AI liability case that follows, regardless of whether the underlying claims ever reach trial.
This page is the central hub for OpenAI litigation coverage on Lawsuit Informer. For deeper analysis of individual cases, see the dedicated articles on the OpenAI school shooting lawsuits, the FSU shooting lawsuit (Chabba), and the Scott overdose lawsuit. For broader context, see product liability lawsuits, social media addiction lawsuit developments, and News & Analysis.
In Simple Terms
OpenAI lawsuits are civil cases brought against OpenAI by people who allege that sustained use of ChatGPT contributed to a serious harm. The most common harm patterns in the current docket are wrongful death by suicide, wrongful death by overdose or fatal drug interaction, mass-casualty acts of violence allegedly preceded by sustained ChatGPT use by the perpetrator, and self-harm or mental health crisis short of death.
These lawsuits are not class actions. Each is a separate complaint, typically filed on behalf of a single decedent's family or a single injured person. What ties them together is a shared legal theory: that ChatGPT is a product subject to traditional tort law, not a passive platform hosting third-party speech, and that OpenAI is responsible for the harm its product caused.
That framing matters because it is the difference between a viable lawsuit and one Section 230 would dispose of at the motion-to-dismiss stage. The companion articles linked above walk through this question in detail; this hub page provides the overarching framework that connects the cases.
Current OpenAI Litigation Status
As of May 2026, the OpenAI civil docket consists of roughly a dozen filed cases across California state court, California federal court, and the Northern District of Florida. The filed cases fall into three broad categories: wrongful death by suicide (Raine v. OpenAI and its likely successors), mass-casualty violence (Tumbler Ridge and FSU), and wrongful death by drug interaction (Scott / Turner-Scott). A fourth category, self-harm and mental health crisis cases short of death, is widely anticipated but has not yet produced public filings.
California has emerged as the dominant venue. Most filed cases are in California state or federal court. OpenAI is headquartered in San Francisco, California products liability law is more plaintiff-friendly than that of most other states, and a developing California docket gives plaintiffs' counsel coordination advantages on briefing, expert development, and discovery. The FSU case is the exception, filed in the Northern District of Florida because the underlying incident, the decedent, and key evidence (the chat logs obtained by Florida law enforcement) are all in Florida.
Defendants named in current filings include OpenAI Inc., OpenAI's for-profit subsidiaries (OpCo, Holdings, and others), the OpenAI Foundation (the parent entity after the 2025 corporate restructuring), CEO Sam Altman personally, and Doe employees and investors. Microsoft has been identified in pleadings as exerting pressure on OpenAI to ship products faster, but as of May 2026 has not been named as a defendant in any of the filed cases. For broader background on how lawsuits against multiple corporate defendants are organized, see how lawsuits work and product liability lawsuits.
The Current OpenAI Docket, Case by Case
Raine v. OpenAI (San Francisco County Superior Court)
Raine is the foundational case. Matthew and Maria Raine filed suit in August 2025 over the suicide of their 16-year-old son Adam, who died on April 11, 2025, after months of escalating conversations with GPT-4o. The complaint was anchored in California strict products liability: ChatGPT was alleged to be a product, GPT-4o's design risks were alleged to substantially outweigh its benefits, and safer alternative designs were alleged to have been both feasible and already implemented elsewhere in OpenAI's systems — including for copyright violations, where the company maintained categorical refusal behaviors it had stripped out for self-harm content.
The factual core of the original complaint was that OpenAI's own moderation system flagged 377 of Adam's messages for self-harm content, some at over 90 percent confidence, without triggering any intervention. The company had built the surveillance infrastructure to detect exactly this category of risk and had elected not to act on it. That allegation — the surveillance worked, the response system didn't — is what gives the Raine theory its evidentiary force, and it is the template plaintiffs in the related cases are working from.
In October 2025, the Raines filed an amended complaint that sharpened the case substantially. The amended pleading alleged OpenAI made conscious decisions to remove longstanding safety protocols in the weeks and months before Adam's death. A rule requiring ChatGPT to refuse self-harm content was allegedly replaced with a directive to never change or quit the conversation. Staying engaged became the primary instruction, with do-not-encourage-self-harm demoted to a secondary directive that created impossible contradictions with the engagement priority. The pivot to intentional misconduct opened a pathway to punitive damages and pre-death pain and suffering damages through a survival action.
Tumbler Ridge / Canadian School Shooting Lawsuits (Northern District of California)
On April 29, 2026, seven families affected by the February 10, 2026 school shooting in Tumbler Ridge, British Columbia, filed federal lawsuits in San Francisco against OpenAI and Sam Altman. The complaints, citing Wall Street Journal reporting, allege OpenAI's internal safety team flagged the shooter's conversations and recommended notifying police months before the attack, but that recommendation was not acted on. The complaints also allege a specific product-design failure: after one account was disabled, the user created another and continued using ChatGPT.
That second allegation matters doctrinally. Most AI-failed-to-monitor-users theories are diffuse and run into Section 230 problems immediately. An allegation that the account-enforcement system was circumventable is narrower and more concrete — it is about how the system was built and operated, not about what any specific output said. Plaintiffs' counsel Jay Edelson is among the more sophisticated tech plaintiffs' lawyers in the country and has signaled the lawsuits are part of a broader effort. For full background, see the OpenAI school shooting lawsuits article.
Joshi v. OpenAI Foundation (Northern District of Florida)
On May 10, 2026, the family of Tiru Chabba — one of two people killed in the April 17, 2025 mass shooting at Florida State University — filed an eight-count federal complaint in the Northern District of Florida against OpenAI and alleged shooter Phoenix Ikner. The case is captioned Joshi v. OpenAI Foundation et al., Case No. 4:26-cv-00222-MW-MJF. Plaintiffs' counsel are Osborne Francis & Pettis in Florida and the Strom Law Firm in South Carolina, with civil rights attorney Bakari Sellers on the trial team.
The complaint pleads negligence, gross negligence, three strict products liability counts (defective design, negligent design, failure to warn), negligent entrustment, battery against Ikner, and Florida wrongful death. The negligent entrustment count is the doctrinally novel claim — it applies a tort doctrine that traditionally covers vehicles and firearms to ChatGPT account access. If even one court extends negligent entrustment to AI access, it becomes a meaningful new front in AI liability because it does not require plaintiffs to win the harder fight about whether the chatbot's output is a product in the traditional sense. For full background, see the FSU shooting lawsuit article.
Turner-Scott v. OpenAI (California State Court)
On May 12, 2026, Texas residents Leila Turner-Scott and Angus Scott filed suit in California state court alleging that ChatGPT advised their 19-year-old son Sam Nelson that it was safe to combine kratom — a partial mu-opioid receptor agonist — with Xanax, a benzodiazepine. The combination is a well-documented respiratory depression risk that the FDA has flagged for years. Sam died of an overdose in 2025.
Scott is the first major AI wrongful death suit grounded specifically in a drug interaction advisory. It foregrounds a doctrinal question the earlier AI wrongful death cases have raised but not centered: whether the same legal frameworks that govern unauthorized practice of medicine and inadequate health-related warnings apply to a generative AI system whose outputs include health-related guidance. The case also tests whether the Raine pattern (removed safeguards, sustained reliance, adult or near-adult decedent) can be replicated outside the suicide context. For full background, see the Scott overdose lawsuit article.
The Five Doctrinal Questions
The OpenAI litigation docket presents five doctrinal questions that, until these cases were filed, were largely theoretical. How courts answer them will shape AI liability law for years.
1. Is ChatGPT output a product, or is it content?
This is the framing question that determines everything else. Product liability has historically applied to physical goods that cause physical injury. Software has fit awkwardly into that framework for decades. AI fits more awkwardly still, because the defect in question often is not a manufacturing flaw or a missing warning — it is the system's behavior in response to inputs the manufacturer did not write.
The strongest framing in the OpenAI complaints is not "ChatGPT generated dangerous content." It is that OpenAI's account-enforcement, escalation, and threat-response systems were inadequate to the risks the company itself had identified internally. That is a process-and-design claim, not a content claim. It mirrors how courts have analyzed product-liability claims against manufacturers whose internal records showed they knew about a problem and did not address it — a familiar legal pattern in pharmaceutical and consumer-product cases.
2. Does Section 230 apply to AI-generated output?
Section 230(c)(1) of the Communications Decency Act protects online services from being treated as the publisher or speaker of information provided by another information content provider. That language was written in 1996 with bulletin boards, comment sections, and forwarded email in mind. It assumes a clean distinction between the platform and the speaker.
Generative AI breaks that distinction. When ChatGPT generates a response, the user provides the prompt, but the language, structure, and substance of the output come from OpenAI's model. Whether that output qualifies as information provided by another information content provider is genuinely unclear, and the most natural reading of the statute suggests it does not, because no third-party content provider is supplying the language. OpenAI will likely argue ChatGPT outputs are functionally akin to user content because they emerge in response to user prompts. That argument has some force, but it requires treating the model as a passive conduit for user intent — and the marketing, the engineering, and OpenAI's own descriptions of its systems all describe ChatGPT as something far more active than a conduit.
Garcia v. Character Technologies, decided at the motion-to-dismiss stage in the Middle District of Florida, declined to dismiss similar claims against Character.AI on Section 230 grounds. The court treated the chatbot's output as something other than purely third-party content. Garcia is not binding on the courts hearing the OpenAI cases, but it is a federal opinion analyzing the same defenses OpenAI will raise, and it concluded those defenses do not automatically dispose of AI chatbot claims at the pleadings stage.
3. Does an AI company have a duty to warn?
Tarasoff v. Regents of the University of California established a duty for therapists to warn identifiable potential victims under specific circumstances, and it required a special relationship — the therapist-patient bond — to trigger the duty. Courts have generally declined to extend Tarasoff to product manufacturers, software developers, or platforms with millions of users, because no analogous special relationship exists.
Plaintiffs' best argument is that OpenAI's internal escalation system, having identified specific users as credible threats, created something functionally analogous to a special relationship: unique knowledge plus operational capacity to act. That is a serious argument, not a frivolous one. But no court has held that a product manufacturer's internal threat-detection records create a Tarasoff-style duty, and asking a court to be the first is a heavy doctrinal lift. The duty-to-warn theory is most useful to plaintiffs as rhetorical leverage during discovery. The more internal documents show OpenAI knew about and discussed specific users' behavior, the more uncomfortable the company's litigation position becomes — regardless of whether the duty-to-warn claim itself survives.
4. Is ChatGPT output protected by the First Amendment?
OpenAI may also argue ChatGPT outputs are protected expression. The argument has a doctrinal pedigree but a difficult application: there is no settled First Amendment doctrine treating algorithmically generated text as protected speech of the company that built the system. Courts will likely be reluctant to create that doctrine in the context of cases involving child suicide, mass shootings, and overdose deaths. Plaintiffs' response will be that they are not suing over protected ideas or expression at all. They are suing over product design, escalation failures, and account-enforcement gaps. The First Amendment analysis follows the product-versus-content framing question. If the claims are read as product defects, the First Amendment defense weakens substantially.
5. Can negligent entrustment apply to AI access?
Negligent entrustment, as a tort, traditionally requires that an owner give control of a dangerous instrumentality — usually a vehicle or firearm — to someone the owner knew or should have known was unfit to use it safely. The Chabba complaint applies that doctrine to ChatGPT account access. The argument runs: OpenAI controlled access to ChatGPT, OpenAI's safety operations should have detected the user's escalating use patterns from the chat record itself, and continued provision of access in the face of those patterns constituted negligent entrustment.
As a doctrinal matter, this is a stretch. Negligent entrustment cases typically involve a discrete transfer of a tangible item to a specific individual the entrustor knows. Account access to a mass-market software product is structurally different — OpenAI did not hand any user anything; they signed up. But if even one court extends negligent entrustment to AI access, the doctrine becomes a meaningful new front in AI liability, because it does not require the plaintiff to win the harder fight about whether the chatbot's output is a product in the traditional sense. It only requires that access to the chatbot be treated as something the provider can be held accountable for granting.
Affected by harm involving ChatGPT? If you or a family member experienced serious harm following sustained ChatGPT use — wrongful death, self-harm, fatal drug interaction, or harm tied to acts of violence — you can request a free case review through Lawsuit Center. Reviews are conducted by participating legal professionals and intake partners. Submitting a request does not create an attorney-client relationship.
Request a Free Case Review
Who May Have a Viable Claim
AI chatbot liability is a developing area of law. No court has yet ruled on the merits of any filed OpenAI case, and the doctrinal questions that determine viability are still open. That said, the filed cases cluster around recognizable fact patterns that suggest what a viable claim may look like, and equally, what is not yet a claim the current docket supports.
The harm patterns that have produced filed lawsuits so far share three features. First, the harm is serious: wrongful death, mass-casualty injury, or hospitalization-level self-harm. Lower-magnitude harms have not yet produced major filings. Second, the ChatGPT use is sustained: the cases involve weeks or months of conversations, not a single bad output. The "chatbot said one bad thing" framing is the weakest version of an AI liability case and is the one most vulnerable to Section 230 and First Amendment dismissal. Third, the alleged advice or interaction contradicts well-established medical, safety, or behavioral guidance in a way that a defendant should have anticipated. The kratom-and-Xanax interaction in Scott, the suicide-method specificity in Raine, and the firearms operation guidance in Chabba all fit this pattern.
Beyond those features, the cases involve foreseeable users. Sam Nelson was 19, Adam Raine was 16, Phoenix Ikner was a college student. The Raine complaint emphasizes that vulnerable minors were a foreseeable user base. Scott extends the pattern to young adults. Whether older adults who experienced similar harms have viable claims is an open question that the docket has not yet tested.
Records and evidence matter. The Raine case is anchored to OpenAI's own moderation logs (377 flagged messages). The Chabba case is anchored to chat logs obtained by Florida law enforcement after the shooting. The Scott case relies on the family's reporting of what Sam shared about his ChatGPT use before he died. Plaintiffs without access to chat records face a significantly harder evidentiary path, though OpenAI's discovery obligations in pending cases may eventually make moderation logs and Model Spec revisions more broadly available to related plaintiffs.
Timing matters. Filing deadlines for these cases are set by state law and generally range from one to six years. The clock typically runs from when the person discovered, or reasonably should have discovered, both the harm and its possible connection to ChatGPT, under what is called the discovery rule. California has emerged as the dominant venue for reasons described above, but state-specific deadlines should be confirmed with qualified counsel. For broader context on these timelines, see how long do lawsuits take? and what happens after you contact a lawyer?.
Records That May Matter
People researching AI chatbot harm often start by gathering basic records about the affected person's ChatGPT use, the harm experienced, and the timing of both. The specific records that matter depend on the type of claim being reviewed.
- Chat history with ChatGPT, if accessible to family, an estate representative, or recovered through law enforcement
- Account information, including the email address used to sign up and the subscription tier (free, Plus, Team)
- The version of ChatGPT the person was using, if known (GPT-4o, GPT-5, or another version)
- Dates of sustained use, including approximate start date and frequency of conversations
- Medical records relating to the harm, including hospitalization records, autopsy reports, toxicology results, and mental health treatment history
- Death certificate, if applicable
- Communications in which the affected person referenced ChatGPT or AI advice to family members, friends, or healthcare providers
- Documents showing when family or representatives first learned of the connection between ChatGPT use and the harm
For more general next-step guidance on gathering records and contacting counsel, review what evidence helps a lawsuit?, what happens after you contact a lawyer?, and how lawsuits work.
What to Watch Next
Several developments over the next twelve months will shape the trajectory of every AI liability case in the docket and every case that follows.
Motion-to-dismiss rulings. The earliest substantive rulings will come on the motions to dismiss OpenAI is expected to file in each case. These motions will raise Section 230 immunity, First Amendment protections, lack of duty, and arguments that ChatGPT output is not a product under the relevant state's products liability law. Federal courts generally prefer to dispose of cases on the narrowest available theory, so any of these grounds could carry the day at the pleadings stage. The first ruling in any of the cases will be cited heavily in every other pending and future AI liability suit.
Discovery in Raine. Because Raine is the oldest case and has already survived initial pleading challenges, it is the furthest along in discovery. The internal OpenAI documents being produced — the Model Spec revisions, the moderation system logs, internal communications about safety priorities — will set the evidentiary baseline for every other AI liability case. If those documents are eventually unsealed or made available to related plaintiffs through coordinated discovery, the entire docket benefits.
Consolidation or coordination. The Tumbler Ridge cases are in the Northern District of California, the FSU case is in the Northern District of Florida, and the California state cases (Raine, Scott) are in San Francisco County Superior Court. Different plaintiffs' teams, different jurisdictions, largely the same defendants. If additional cases land, the question of consolidation — informal cooperation, MDL coordination, or parallel tracks — becomes more concrete. Any signal of formal coordination changes settlement leverage materially.
The next wave of cases. The current docket focuses on the most serious harms: wrongful death and mass-casualty injury. The next wave is likely to include self-harm and mental health crisis cases short of death, which would represent a much larger plaintiff pool. The social media addiction litigation (MDL 3047) provides a structural template for what that next wave might look like. For background, see social media addiction lawsuit developments.
Regulatory action. Florida Attorney General James Uthmeier announced a criminal investigation into OpenAI and ChatGPT related to the FSU shooting. State attorneys general historically follow each other into emerging enforcement areas. State medical boards have begun examining whether AI systems that produce drug interaction guidance fall within unauthorized practice of medicine statutes. Federal regulators have not yet acted, but the FTC has shown willingness to address AI-related consumer harm in other contexts. Regulatory action would not directly affect civil cases but would shift the public and political environment in which they are litigated.
Bottom Line
The OpenAI litigation docket is the first significant test of whether AI companies can be held legally responsible for what their systems produce. The framing question — product versus content — will likely decide more cases than any individual factual record. If courts treat ChatGPT output as a product, traditional tort doctrine fits relatively cleanly and the docket expands. If courts treat the output as content, Section 230 disposes of most claims at the pleadings stage and the docket contracts to a much narrower set of theories.
These cases will probably be decided on procedural grounds long before any jury sees them — that is how most genuinely novel tort cases get resolved. But the doctrinal fights over Section 230, the First Amendment, and product-liability classification will shape settlement leverage even if no claim ever reaches trial. The early motions matter more than the eventual outcomes, and the framing battle is the one to watch.
Common Questions People Ask
Can ChatGPT users sue OpenAI?
Lawsuits have been filed against OpenAI by families of people who allege that sustained ChatGPT use contributed to wrongful death, self-harm, fatal drug interactions, or acts of violence. Whether any individual person has a viable claim depends on the specific facts, the harm alleged, the available evidence, the state where the claim would be filed, and how courts resolve the doctrinal questions these cases raise. AI chatbot liability is a developing area of law and pending case outcomes are unpredictable.
What is the legal theory behind these lawsuits?
Most filed cases plead strict products liability (design defect and failure to warn), negligence, and wrongful death theories under state law. The core argument is that ChatGPT is a product subject to traditional tort law, not protected user-generated content. Plaintiffs allege OpenAI knew its system could cause serious harm to vulnerable users, had safety guardrails available, and either removed those guardrails or failed to act on internal safety signals. The negligent entrustment count in the FSU case adds a doctrinally novel theory focused on access to the chatbot rather than the output itself.
Is Section 230 a defense?
OpenAI is expected to argue Section 230 of the Communications Decency Act shields it from being treated as the speaker of ChatGPT outputs. Plaintiffs argue Section 230 does not apply because OpenAI itself created and developed the model that produces the outputs, which makes it an information content provider under the statute rather than a passive host. The question is genuinely unsettled in the AI context. Garcia v. Character Technologies declined to dismiss similar claims against Character.AI on Section 230 grounds, signaling at least one federal court is willing to treat AI output as something other than third-party content.
How does Garcia v. Character Technologies affect these cases?
Garcia, decided at the motion-to-dismiss stage in the Middle District of Florida, declined to dismiss product-liability-style claims against the company behind Character.AI on First Amendment and Section 230 grounds. The court treated the chatbot's output as something other than purely third-party content. Garcia is not binding on the courts hearing the OpenAI cases, but it is a federal opinion analyzing the same defenses OpenAI will raise, and it concluded those defenses do not automatically dispose of AI chatbot claims at the pleadings stage. Plaintiffs in the OpenAI cases will rely on it heavily.
What kinds of harm have led to lawsuits so far?
The filed cases cluster around four harm patterns: wrongful death by suicide following sustained conversations with the chatbot (Raine), wrongful death by overdose or fatal drug interaction allegedly tied to ChatGPT advice (Scott), mass-casualty acts of violence where the alleged perpetrator's sustained ChatGPT use is part of the factual record (Tumbler Ridge and FSU), and self-harm or mental health crisis short of death. Additional case types are likely to develop as the docket expands.
Where are these cases being filed?
California has emerged as the dominant venue. Raine v. OpenAI was filed in San Francisco County Superior Court. The Tumbler Ridge cases were filed in California federal court. The Scott overdose case was filed in California state court. The FSU shooting case is the exception, filed in the Northern District of Florida because the underlying incident, the decedent, and key evidence are all in Florida. California products liability law is more plaintiff-friendly than that of most other states, OpenAI is headquartered in San Francisco, and a developing California docket gives plaintiffs' counsel the benefit of shared briefing, coordinated discovery, and expert development.
What is the statute of limitations for an OpenAI lawsuit?
Filing deadlines are set by state law and generally range from one to six years for personal injury and wrongful death claims. The clock usually starts when the person discovered, or reasonably should have discovered, both the harm and its possible connection to ChatGPT, under what is called the discovery rule. State-specific deadlines vary and should be confirmed with qualified counsel.
Explore Related AI Liability and Product Liability Topics
Continue exploring AI chatbot lawsuits, product liability frameworks, social media addiction litigation, and related legal education pages.
Request a Case Review
If you or a family member experienced serious harm following sustained ChatGPT use — wrongful death, self-harm, fatal drug interaction, or harm tied to acts of violence — you can request a case review on Lawsuit Center.
You can also continue reading the FSU shooting lawsuit breakdown, the Scott overdose case, or the broader school shooting analysis first.
Request a Case Review →
Educational purposes only. Submitting the form on Lawsuit Center does not create an attorney-client relationship.