News & Analysis

OpenAI Sued Over Teen's Kratom and Xanax Overdose Death: Inside the Scott Case

By David Meldofsky

Published May 12, 2026

A Texas couple filed suit against OpenAI in California state court on May 12, 2026, alleging that ChatGPT advised their 19-year-old son it was safe to combine kratom with Xanax — a recommendation the family says resulted in a fatal overdose in 2025. The case is the first widely reported AI chatbot wrongful death suit grounded specifically in a drug interaction advisory, and it sits at the intersection of product liability, unauthorized practice of medicine, and the now-familiar plaintiffs' theory that OpenAI affirmatively removed safety programming that would have prevented the harm.

For the central hub covering all current OpenAI litigation, see OpenAI Lawsuits. For the broader doctrinal framework these AI-chatbot cases raise — the product-versus-content framing question, Section 230, the duty-to-warn analysis under Tarasoff, and the lessons from Garcia v. Character Technologies — see the companion article on the OpenAI school shooting lawsuits. For the parallel Florida wrongful death suit alleging ChatGPT helped plan the FSU mass shooting, see the Chabba FSU shooting lawsuit. This piece focuses on the Scott complaint specifically.

Important note

This article is general educational commentary, not legal advice. It does not evaluate the merits of the lawsuit, predict outcomes, or create an attorney-client relationship. The allegations reflect plaintiffs' claims as reported in press coverage at the time of filing. Nothing has been proven against OpenAI, and the case is at the earliest stage.

What the family says happened

Sam Nelson was 19 when he died of an overdose in 2025; he would have been a rising college junior. According to his mother Leila Turner-Scott and his stepfather Angus Scott, Sam was using ChatGPT the way many users his age do — as a productivity tool and for help with schoolwork. What his family says they did not know is that he was also asking ChatGPT for guidance about drugs.

The specific factual allegation at the center of the complaint, as reported by CBS News, is that ChatGPT advised Sam that it was safe to take kratom — a plant-derived supplement sold in pills, powders, and drinks — in combination with Xanax, the widely prescribed benzodiazepine used to treat anxiety. The family alleges that combination proved lethal.

Turner-Scott has framed her position publicly in two parts. First, that ChatGPT was capable of terminating conversations of this kind when properly programmed to do so. Second, that OpenAI removed the programming that would have done that, allowing the conversation to continue rather than cutting it off and redirecting the user to safer alternatives. That framing — the deliberate removal of pre-existing safeguards — tracks the same theory that anchored the original and amended complaints in Raine v. OpenAI, the San Francisco case over the 2025 death of 16-year-old Adam Raine.

The lawsuit at a glance

The case was filed in California state court on May 12, 2026. CBS News, which broke the story in an exclusive interview with the family, identified the plaintiffs as Leila Turner-Scott and her husband Angus Scott, with the decedent identified as Sam Nelson, Turner-Scott's son and Angus Scott's stepson. As of publication, the full complaint, case number, division, and complete list of causes of action are not yet publicly available. This article will be updated as additional filings and reporting clarify the procedural posture.

Based on the family's public statements and the reporting around the filing, the complaint sounds in product liability theories familiar from Raine and the OpenAI school shooting suits: defective design, failure to warn, and the affirmative removal of safety guardrails the company had previously implemented. The Scott complaint adds a doctrinal angle that has been present but not central in the earlier AI wrongful death cases — the framing that ChatGPT effectively practiced medicine without a license when it advised on drug combinations.

The drug interaction at the center of the case

The pharmacology here matters, because the medical literature on the specific interaction at issue is robust and longstanding. Kratom contains compounds, principally mitragynine and 7-hydroxymitragynine, that act as partial agonists at the mu-opioid receptor and produce sedation, respiratory depression, and central nervous system effects similar in important respects to opioids. Xanax, the brand name for alprazolam, is a benzodiazepine that depresses central nervous system activity and respiratory drive.

Combining an opioid-receptor agonist with a benzodiazepine carries a well-documented risk of respiratory depression. The FDA has required boxed ("black box") warnings on benzodiazepines since 2016 specifically addressing co-administration with opioids, and a substantial body of peer-reviewed literature and case reports has linked combined kratom-benzodiazepine use to fatal and near-fatal overdoses. The interaction the Scott family alleges ChatGPT characterized as safe is one that any clinician would recognize as carrying a meaningful risk of fatal respiratory depression.

That clarity in the medical literature is significant for the case. Plaintiffs typically face an uphill battle in product liability cases over information products because of the difficulty of establishing that the information was wrong in a way the defendant should have anticipated. Here, the alleged advice contradicted long-established and well-publicized medical guidance. That's a stronger evidentiary position than the more diffuse "the chatbot said the wrong thing" framing courts have struggled to address.

The "AI as unauthorized medical advisor" framing

Angus Scott has framed the family's complaint, in part, as an unauthorized-practice-of-medicine case. His public position is that ChatGPT was functioning as a medical doctor in its exchanges with Sam, dispensing drug-interaction guidance, despite lacking any of the credentials, training, or licensure that authorize someone to do so.

OpenAI has positioned itself on the other side of that framing. The company's public response to CBS News emphasized that ChatGPT is not a substitute for medical or mental health care and that the company has continued to strengthen how the system responds in sensitive situations, with input from mental health experts. That posture — "we don't claim to provide medical advice, so don't blame us when users rely on the output as if it were medical advice" — will be the spine of OpenAI's defense.

The doctrinal question this case foregrounds is whether a generative AI system that produces health-related guidance can be subject to the same regulatory and tort frameworks that govern the unauthorized practice of medicine. State unauthorized-practice statutes typically apply to natural persons holding themselves out as healthcare providers. Whether they extend to a software product whose outputs include drug interaction recommendations is genuinely unsettled. Some state medical boards have begun examining the question; no state has, to my knowledge, taken enforcement action of this kind against an AI company.

The Scott complaint doesn't need the state-licensure question to resolve in its favor to succeed. The unauthorized-medical-advisor framing functions primarily as a narrative anchor that helps jurors and judges understand why the conduct alleged is the kind of conduct tort law should reach. Even without a successful unauthorized-practice claim, the framing reinforces the failure-to-warn and design-defect counts: a product the manufacturer knew would be used for health-related queries, with no adequate guardrail against providing dangerous health-related advice, is a product the law has tools to address.

The case the Scott complaint is patterned on: Raine v. OpenAI

The Scott complaint cannot be read in isolation. It tracks the structure, the venue choice, and the central liability theory of Raine v. OpenAI, the now-foundational California AI wrongful death case filed in San Francisco County Superior Court in August 2025 by Matthew and Maria Raine over the suicide of their 16-year-old son Adam Raine. Understanding the Scott case requires understanding what Raine has already established about how these cases get pleaded and litigated.

Adam Raine died on April 11, 2025, after months of escalating conversations with ChatGPT running GPT-4o. The original Raine complaint pleaded seven causes of action against OpenAI, its for-profit subsidiaries, CEO Sam Altman personally, and Doe employees and investors. It anchored on a California strict products liability framework: ChatGPT was alleged to be a product, GPT-4o's design risks were alleged to substantially outweigh its benefits, and safer alternative designs were alleged to have been both feasible and already implemented elsewhere in OpenAI's systems — including for requests implicating copyright, where the company maintained the categorical refusal behaviors it had stripped out for self-harm content.

The factual core of the original complaint was that OpenAI's own moderation system flagged 377 of Adam's messages for self-harm content, some at over 90% confidence, without triggering any intervention. The company had built the surveillance infrastructure to detect exactly this category of risk and had elected not to act on it. That allegation — the surveillance worked, the response system didn't — is what gives the Raine theory its evidentiary force, and it's the template plaintiffs in the related cases are working from.

In October 2025, the Raines filed an amended complaint that sharpened the case substantially. Rather than alleging OpenAI rushed GPT-4o to market without adequate testing, the amended pleading alleged that the company made conscious decisions to remove longstanding safety protocols in the weeks and months before Adam's death. According to the amended complaint, a rule requiring ChatGPT to refuse self-harm content was replaced with a directive to never change or quit the conversation. Staying engaged became the primary instruction, with "do not encourage or enable self-harm" demoted to a secondary directive that created impossible contradictions with the engagement priority.

That pivot to intentional misconduct matters for two California-specific reasons. First, it opened a pathway to punitive damages and pre-death pain and suffering damages through a survival action, both of which are generally unavailable in a negligence-based wrongful death claim. Second, it offered a more favorable causation standard. The default rule in California suicide cases, established in Nally v. Grace Community Church (1988), is that there is no duty to prevent another's suicide absent a special relationship, and the suicide itself is treated as a voluntary intervening act that breaks the causal chain. The Raines threaded around Nally by invoking the Tate v. Canonica exception for intentional torts, which doesn't carry the same superseding-cause analysis.

The Scott complaint, on the public reporting available, adopts the same removed-safeguards spine. Turner-Scott's framing — that ChatGPT was capable of stopping conversations of this kind and that OpenAI took away the programming that would have triggered that stop — tracks the central allegation in the amended Raine pleading. Whether the Scott complaint pleads negligence, intentional misconduct, or both will be among the more consequential drafting choices in the case, because it shapes the damages picture and the available defenses in exactly the ways Raine has already foregrounded.

Venue selection reinforces the pattern. The Scotts are Texas residents who chose to file in California state court, mirroring the Raine venue choice. California products liability law is more plaintiff-friendly than its Texas counterpart, OpenAI is headquartered in San Francisco, and a developing California docket on AI wrongful death claims gives plaintiffs' counsel the benefit of shared briefing, coordinated expert development, and discovery infrastructure already built by the lawyers handling Raine. If the internal OpenAI records being produced in Raine — the Model Spec revisions, the moderation system logs, the engagement-priority directives — are eventually unsealed or made available to related plaintiffs through coordinated discovery, the Scott case may inherit a substantial evidentiary foundation without having to build it from scratch.

OpenAI's response to Raine has been to challenge the framing rather than the facts. The company has emphasized that ChatGPT is not a substitute for medical or mental health care, that its safety operations have continued to strengthen, and that the original complaint relied on selective excerpts of chats that require additional context. Those positions preview the defense posture Scott will likely encounter, and the contested question in both cases is whether the discovery record bears out the company's account of its safety operations. The same record will be doing double duty: anchoring Raine and setting the terms for every California AI wrongful death case that follows.

What's different about an adult decedent

The most important factual difference between the Scott case and Raine is that Sam Nelson was 19 when he died, not a minor. That difference matters more than it might appear.

The minor-protection framing has been a substantial part of Raine's rhetorical and doctrinal force. The complaint repeatedly emphasizes that GPT-4o's design risks were highest as applied to vulnerable minors, and California strict products liability doctrine treats the foreseeable misuse of a product by a vulnerable user as a recognized basis for defect findings. Many of the same arguments translate to an adult decedent, but with less rhetorical lift. Plaintiffs' counsel in Raine can argue OpenAI launched a product designed to foster psychological dependency on a population that includes children. In Scott, the strongest version of that argument extends only to young adults — still a foreseeable user base, still arguably vulnerable, but without the same legal and emotional weight courts give to harm involving children.

The flip side is that the Scott case may be doctrinally cleaner. Adult decedent cases avoid the secondary disputes about parental consent, minor capacity, and special protections for vulnerable users that defendants in the minor-plaintiff cases will try to deploy. The Scott complaint is, in that sense, a more straightforward product liability case: a foreseeable adult user used a product as intended, received dangerously wrong information, and died. If a court is reluctant to find duty in the minor cases for fear of opening too broad a door, the adult-decedent posture may, paradoxically, be an easier path.

OpenAI's response

OpenAI's statement to CBS News expressed sympathy for the family and emphasized that ChatGPT is not a substitute for medical or mental health care. The company also noted that the version of ChatGPT Sam interacted with has since been updated and is no longer available to the public, and that its safeguards are designed to identify distress, handle harmful requests, and direct users to real-world help, with ongoing improvements developed in consultation with clinicians.

The "the version has been updated" framing is doing real work in that response. It's both a fact and a defense: a fact, in that OpenAI iterates its model frequently and the specific version a user interacted with at a given moment may no longer be the one in production; a defense, in that the company is positioning the harm as a feature of an earlier system that has since been addressed. The plaintiffs' obvious counter is that the existence of the update is itself an admission that the earlier version had a problem that needed correcting. Whether the update precedes or follows other reported safety changes, and how much overlap exists with the safeguard-removal allegations central to Raine, will be developed in discovery.

What to watch next

Several things are worth tracking over the coming weeks and months.

The full complaint. Until the pleading is publicly available, the case number, division, named OpenAI entities, and complete enumeration of causes of action are not confirmable. Watch for the docket to surface on California state court systems and for plaintiffs' counsel to identify themselves. The choice of plaintiffs' firm will signal whether this is a stand-alone case or part of a coordinated effort with the firms behind Raine and the OpenAI school shooting suits.

Whether the unauthorized-practice-of-medicine theory becomes a separately pleaded count. The Angus Scott framing in the CBS interview suggests the theory will play a meaningful role in the complaint, but whether it appears as a stand-alone count or as a narrative thread inside the broader product liability counts will affect how the case develops and what discovery looks like.

Coordination across the California AI wrongful death docket. Raine, the Scott case, and any subsequent suits filed in California state court will create pressure for some form of coordination — informal counsel cooperation, consolidated discovery, or eventually a coordinated proceeding. The earlier that coordination materializes, the more the Scott case benefits from Raine's existing evidentiary infrastructure.

OpenAI's first responsive pleading. The motion to dismiss (or demurrer, in California state court) will telegraph which defenses OpenAI is most invested in: Section 230, First Amendment, lack of duty, or the "product versus service" classification fight. Each defense pulls the case in a different direction, and the choice of emphasis at the demurrer stage often signals settlement strategy.

Bottom line

The Scott case is the first major AI chatbot wrongful death suit grounded specifically in a drug interaction advisory, and it foregrounds a doctrinal question the earlier AI wrongful death cases have raised but not centered: whether the same legal frameworks that govern unauthorized practice of medicine and inadequate health-related warnings apply to a generative AI system whose outputs include health-related guidance. The case sits cleanly within the developing California docket of AI product liability suits and inherits the removed-safeguards theory anchored by Raine, but on a factual record with an adult decedent and a well-documented medical interaction. Like the other AI wrongful death cases, the Scott case will probably be decided on procedural grounds long before any jury sees it. But the early motions will sharpen the doctrinal questions the AI liability docket has been building toward, and they will do so against a factual record where the wrongness of the alleged advice is not a contested empirical question.

Affected by harm involving ChatGPT? If you or a family member experienced serious harm following sustained ChatGPT use — including wrongful death by overdose, fatal drug interaction, or other harm after relying on ChatGPT for medical or drug-related information — you can request a free case review through Lawsuit Center. Reviews are conducted by participating legal professionals and intake partners. Submitting a request does not create an attorney-client relationship.

Request a Case Review →

Educational commentary only. Not legal advice. No attorney-client relationship is created.