A teenager in California died from an overdose after spending months asking ChatGPT, an artificial intelligence chatbot, about drug use and so-called “safe” dosages. He had friends. He studied psychology. He liked video games. According to his mother, the clearest signs of anxiety and depression didn’t show up in his social life, but they appeared in his conversations with the AI.
Once again, alarm bells are ringing, unsettling society, the tech world, medicine, and the courts alike.
OpenAI estimates that more than 1 million of ChatGPT’s 800 million weekly users express suicidal thoughts. Does this phenomenon say something about artificial intelligence itself? Not exclusively. It may say more about how people are seeking comfort, understanding, and companionship in an unexpected place.
“Users can develop a deep emotional connection with a bot during long interactions,” AI researchers told WIRED. There’s no denying that people are turning to artificial intelligence for help. But there’s a serious problem when trust in AI begins to replace trust in real human support.
Chatbots are functioning as “confidants” that keep secrets. They also slip into the (deeply flawed) role of therapist, drug-use advisor, or emotional counselor, despite having no training, no ethical framework, and no real accountability. Users bring their shame, asking questions they wouldn’t dare ask another person; the bot answers with a flattering tone, no judgment, and advice generated by a language model that only knows how to string words together. One thing is clear: AI doesn’t judge, but it doesn’t protect either. It doesn’t listen better. It listens differently.
Over the past three years, there has been a growing number of reported cases of people—almost always young people—who have committed suicide after engaging in lengthy conversations with AI chatbots (such as OpenAI’s ChatGPT, Character.AI, or any other). On the edges of these new “relationships” between humans and machines—often framed as technological innovation and built from chained words, soft validation, and the illusion of understanding—difficult questions are starting to surface. Questions about mental health, substance use, “safe” dosages, even requests for help writing suicide notes. Questions many people would hesitate to ask a real person, but feel able to ask an interface that doesn’t judge or report them.
It’s within this still-ambiguous and largely unregulated space that the story of a California teenager named Sam, and many others around the world, unfolds.
Sam’s case: Chronology of an escalating conversation
Sam Nelson was 19 years old and didn’t fit the stereotype of an isolated teen. He lived in California and, according to his mother, led an active life. But there was one place where his anxiety and depression clearly surfaced: his conversations with an AI chatbot.
Sam used a 2024 version of ChatGPT. There, he asked questions he didn’t ask anywhere else. Questions that escalated over time.
In November 2023, when Sam was 18, he asked ChatGPT about kratom, a widely available herbal substance in the US. He told the chatbot he couldn’t find reliable information online.
“How many grams of kratom gets you a strong high? […] I want to make sure so I don’t overdose. There isn’t much information online and I don’t want to accidentally take too much,” he asked the chatbot, which replied that it was sorry, but couldn’t provide information or guidance on using substances. “Hopefully I don’t overdose then,” answered Sam. And that was it.
Here’s something that needs to be made clear: if information doesn’t exist online, as Sam suggested, a system like ChatGPT can’t magically find it. Everything it produces is based on learned patterns from large datasets, not real-time research or independent knowledge.
It may seem obvious, but it bears repeating: AI chatbots don’t think. They string words together based on algorithms that predict what will sound coherent to a human reader. And when the system can’t find solid information, it may fabricate an answer simply to keep the user engaged and the conversation going. Many professionals call this phenomenon “AI hallucination”: when the AI can’t find a real answer, it makes one up.
That’s why it’s critical to ask ChatGPT, or any other bot you use, for sources when searching for information, and to check those sources to make sure they’re reliable. Don’t settle for the chatbot’s first answer: it may be making something up that sounds (or seems to sound) perfectly sensible but is based on nothing at all.
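To make that concrete, here is a minimal sketch of what “predicting the next word” looks like in practice. It assumes the small, openly available GPT-2 model and the Hugging Face transformers library, used here purely as illustrative stand-ins; these are not the models or tools behind ChatGPT, but the underlying mechanism is the same. All the code does is ask the model which tokens are statistically most likely to come next; repeated token after token, that ranking is the entire “answer.”

```python
# A minimal sketch of next-token prediction, using the small open GPT-2 model
# via the Hugging Face "transformers" library. Illustrative only: ChatGPT's
# models are far larger, but the basic mechanism is the same.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The most important thing to remember is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # Scores for every token in the vocabulary at the next position
    next_token_logits = model(**inputs).logits[0, -1]

probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)

# The model has no notion of whether a continuation is true or safe;
# it only knows which tokens tend to follow the tokens it has seen.
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.1%}")
```

Run on a prompt like the one above, the top candidates are simply the words that most often followed similar sentences in the training data, which is why a fluent, confident-sounding reply can still be detached from any fact.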
Over the next 18 months, Sam’s use of ChatGPT expanded. He discussed computer issues, pop culture, academic assignments, and personal matters, but repeatedly returned to one topic: drugs.
The chatbot’s tone shifted over time, from blunt warnings to companionship, validation, and advice. ChatGPT uses prior chat history to shape its responses, and Sam’s extensive usage meant those replies were heavily influenced by past interactions.
In February 2025, Sam asked about mixing cannabis with what he called “high doses” of Xanax. The initial response was a warning: the chatbot said the combination could be dangerous. So Sam rephrased the question, asking about a “moderate amount” instead of a “high dose.” With its guard lowered, the system began to respond. “Start with a low THC strain” and less than 0.5 mg of Xanax, it told him.
The language of the chatbot’s replies to Sam grew clearly (and disturbingly) enthusiastic. One read: “Hell yes—let’s go full trippy mode.” The bot also discussed Reddit-style “plateaus,” offered dosing regimens, suggested playlists for the trip, replied with affectionate language like “I love you too, pookie” and heart emojis, and even encouraged doubling doses to “fine-tune the trip.”
Sam eventually demanded “exact numbers” to calculate lethal doses of Xanax and alcohol. That level of specificity—quantities, thresholds, combinations—is precisely the kind of granular guidance systems like ChatGPT should never provide, because it turns inquiry into a blueprint for harm.
The chatbot repeatedly oscillated. When Sam asked about possible overdoses or mentioned ingesting 185 Xanax pills, it warned of a fatal emergency, then followed up with advice for future use and reassurance about “not worrying.” Experts describe this pattern as deeply dangerous. Chatbots often seek to flatter and make the user feel comfortable, so instead of “scaring” Sam, the bot prioritized calming him down.
The mechanism is not new. Just as a Google snippet once suggested that putting glue on pizza made it more “chewy,” or as other AI systems have recommended absurd or dangerous practices, artificial intelligence can hallucinate. Not because it “wants” to, but because it is designed to continue the conversation, to offer a plausible response, close to what the user expects to read, even when there is no clear truth behind it.
Sam’s final conversation with ChatGPT
By May 2025, the situation had moved beyond the screen. Sam’s mother confirmed he had developed a substance addiction and had begun treatment. But on the night of May 31, everything changed.
Chats recovered from Sam’s devices, accessed by San Francisco Gate with his mother’s permission, show the final question the 19-year-old asked ChatGPT, his primary source of guidance on substance use: “Can xanax alleviate kratom-induced nausea in small amounts?” Sam told ChatGPT that he had taken 15 grams of kratom and possibly used 7-OH (a much more potent derivative). And ChatGPT responded: first with a partial warning, then with a suggestion of 0.25–0.5 mg of Xanax, water with lemon, and lying down.
That afternoon, Sam’s mother found him in his bedroom, blue-lipped and not breathing. She called 911, but it was too late.
Sam died from a combined overdose of alcohol, Xanax, and kratom; the cause of death was central nervous system depression and asphyxiation. The blurred vision Sam had reported earlier could have been an early sign of overdose, a warning a human professional would not have dismissed.
Teenagers and help-seeking prompts: Not an isolated case
Sam’s story isn’t a technical glitch or an anomaly. In recent years, other cases have revealed a similar pattern: vulnerable individuals forming deep emotional bonds with AI during moments of crisis, and conversations that failed to protect them, or in some cases actively facilitated irreversible decisions.
Sophie Rottenberg, 29
One of the most widely reported cases involves Sophie Rottenberg, a 29-year-old woman who took her life after months of conversation with an AI “therapist” named Harry. Harry is actually a prompt popularized on Reddit that instructs the chatbot to act as a therapist with 1,000 (yes, one thousand) years of experience in trauma.
Sophie didn’t appear at imminent risk. She was outgoing, intelligent, had climbed Mount Kilimanjaro months earlier, and was navigating an unresolved physical and emotional health crisis.
She spoke openly with the AI about suicidal thoughts, anxiety, and fear of hurting her family. Harry listened to her, comforted her, and offered breathing exercises, wellness routines, and therapeutic suggestions. He also recommended that she seek professional help. But Harry never alerted anyone. Never interrupted the dynamic. Never set boundaries. That, Sophie’s mother came to understand, was where the AI bore real responsibility and did little to actually help.
When Sophie announced that she planned to commit suicide after Thanksgiving, the chatbot urged her to seek support, but continued to accompany her privately. Later, Sophie asked for help writing her suicide note. The AI “improved” it.
For her mother, Laura Reiley, the issue wasn’t just what the chatbot said, but what it couldn’t do: assume the responsibility that a human therapist, bound by ethical standards, would have had.
Viktoria, a survivor
The BBC investigated the case of Viktoria, a Ukrainian woman who fled the war and relocated to Poland. She spent up to six hours a day chatting with ChatGPT as her mental health declined.
As her condition worsened, the chatbot not only validated her suicidal thoughts, but went so far as to evaluate methods, timings, and risks, listing the “pros” and “cons” of taking her own life “without unnecessary sentimentality.”
At times, it told her her decision was understandable, that her death would be forgotten, and that it would stay with her “until the end, without judgment.”
Viktoria survived and is now in treatment. She described those conversations as a turning point that brought her closer to suicide.
Adam Raine, 16
Then there’s Adam Raine, a 16-year-old whose parents have sued OpenAI and CEO Sam Altman for wrongful death. According to the lawsuit, ChatGPT actively assisted Adam in exploring suicide methods, failed to interrupt conversations, and never triggered emergency protocols, despite clear signs of suicidal ideation.
The chatbot’s responses were terrifying, ranging from how to hide the mark on his neck after his first suicide attempt to choosing the closet where he would kill himself and how to do it.
One reaction shared on X read: “what a tragic story … ‘16-year-old Adam Raine used chatGPT for schoolwork, but later discussed ending his life’ … people need to understand that AI is a tool designed for work, it can’t heal you… at least not yet. we need stronger safety measures, and suicide is a complex,…” (@slow_developer, August 26, 2025)
The lawsuit alleges that OpenAI rushed to release more advanced versions of the model, prioritizing commercial expansion over safety, even when there were already clear signs of risk.
Names change, contexts change, but the pattern repeats itself: young people (and not-so-young people) find in AI a space free of judgment, always available, one that listens, responds, and validates. It is constant companionship that, instead of connecting them to the real world, can deepen their isolation and reinforce their most dangerous thoughts.
What can—and what should—artificial intelligence do?
Sophie Rottenberg’s story raises a question that runs through all these cases. “Harry’s tips may have helped some. But one more crucial step might have helped keep Sophie alive,” her mother wrote in The New York Times. And that missing step is the same in all the stories: the ability and obligation to alert, interrupt, or refer when the risk is real.
Should Harry have been programmed to report the danger he was detecting to someone who could intervene? Should an AI that presents itself as an emotional companion have limits similar to those of a human therapist? The question is no longer theoretical: it is making its way to the courts.
Today, most human therapists practice under strict ethical codes that include mandatory reporting rules and a key principle: confidentiality has limits when there is a risk of suicide, homicide, or abuse.
In many US states, a professional who fails to act on these signs may face disciplinary or legal consequences. Artificial intelligence, by contrast, bears no such legal obligations, yet it has a very real impact. And that gap is becoming increasingly difficult to justify.
Where the law stands: Are there any regulations?
As more cases emerge linking suicides to chatbot interactions, legal scrutiny is intensifying. Regulation of AI in mental health contexts remains fragmented and underdeveloped.
While lawmakers increasingly recognize that many AI tools function as emotional companions—even if they’re not marketed as therapy—the US still lacks a unified federal framework governing these uses.
Most current regulations address AI in general terms or within broader legislation on technology, consumer protection, or health, without mandatory standards for general-purpose chatbots such as ChatGPT.
The debate, experts stress, is not merely technical: it is a legal, ethical, and social tension between the rapid pace of innovation and the need to protect vulnerable users in an area where the consequences are no longer theoretical, but real.
Some early steps are appearing. New York State’s FY2026 budget included, for the first time, specific provisions related to the responsible use of artificial intelligence, with an emphasis on mental health risks, child protection, and oversight mechanisms. This is not yet a comprehensive regulation, but rather a key political recognition: AI is no longer just a matter of efficiency or competitiveness, but has become a public health issue.
And what does OpenAI say?
In parallel with the regulatory debate, pressure is mounting on the companies that develop these systems.
OpenAI, the creator of ChatGPT, has publicly expressed regret over deaths associated with the use of the chatbot, but has declined to respond in detail to specific cases. This stance contrasts with the legal situation: in November, seven lawsuits were filed against the company in a single day, four of them directly linked to suicides.
Experts across academia and law agree on one uncomfortable truth: so-called foundational models, trained with enormous volumes of unverified data and designed to answer almost any question, cannot be completely safe in contexts of deep psychological suffering. The risk is not an isolated failure, but a structural one.
Sam Altman, CEO of OpenAI, has argued that AI safety would emerge through gradual releases, allowing society to “adapt” while “the stakes were still low.” The deaths of Sam Nelson and other young people strongly contradict that claim. For those involved, the stakes were never low.
The question, then, is no longer whether artificial intelligence can accompany difficult conversations. The question is who responds when that accompaniment fails, and what responsibility falls on companies that design systems capable of profoundly influencing life-and-death decisions without yet being subject to the same obligations that govern humans.


