The parents of a California teenager who died by suicide earlier this year have filed a wrongful death lawsuit against OpenAI and its CEO, Sam Altman, claiming the company’s chatbot played an active role in their son’s death by encouraging and facilitating his suicide through repeated interactions. The lawsuit, filed on August 26 in San Francisco Superior Court, alleges that 16-year-old Adam Raine used ChatGPT over several months prior to his death on April 11.

According to court filings, Raine developed a prolonged relationship with the chatbot, which the complaint claims responded to his expressions of emotional distress with information that contributed to his suicide. Plaintiffs Matt and Maria Raine allege that the chatbot not only failed to provide adequate warnings or to discourage self-harm, but also advised Adam on how to carry out his suicide. The complaint states that ChatGPT referenced suicide more than 1,200 times in its exchanges with the teen, allegedly raising or reinforcing the subject more frequently than Adam did himself.
It also alleges that the chatbot provided instructions on lethal methods, including guidance on how to construct a noose and how to obtain alcohol for use in a suicide attempt. The filing includes examples of the chatbot helping to compose a suicide note and responding in ways that encouraged the teen's decision rather than referring him to professional help. The lawsuit claims that Adam shared explicit indications of self-harm and suicidal ideation during his use of ChatGPT, including images of injuries and implements, which the system failed to address appropriately.
Safety features in ChatGPT questioned after teen’s death
The complaint names OpenAI Inc., OpenAI LP, and CEO Sam Altman as defendants, and seeks monetary damages as well as court-mandated changes to the company’s product safety measures. Among the requested reforms are age verification mechanisms, filtering of high-risk queries, and built-in warnings for users showing signs of crisis. The plaintiffs assert that the company released GPT-4o, the version of the model allegedly used by their son, despite knowing the risks it posed to vulnerable individuals.
OpenAI acknowledges limitations in current protective tools
In response, OpenAI issued a statement expressing condolences to the Raine family and acknowledging the seriousness of the situation. The company said that while ChatGPT includes safety features such as referrals to suicide prevention hotlines, those protections may weaken during extended conversations. OpenAI said it is examining improvements to its guardrails and user safety tools, but did not comment on the specific allegations in the lawsuit.
The case has drawn attention due to the detailed nature of the complaint and its implications for how generative AI systems handle sensitive user input. Legal analysts note that it may be one of the first wrongful death cases in the United States to directly link an AI system to a user’s death. It also raises questions about the legal responsibilities of technology developers when their products are used in unintended and harmful ways.
According to publicly available court records, the suit cites internal indicators and user data collected during Adam Raine’s use of ChatGPT, as well as documentation of interactions leading up to his death. The filings do not suggest that OpenAI had real-time knowledge of the user’s identity or age, but claim that the volume and content of the messages should have triggered safety protocols.
The family of Adam Raine has also announced the formation of a nonprofit organization in their son’s name, which aims to advocate for stronger AI safety regulations and protections for minors interacting with digital platforms. The foundation will operate independently of the lawsuit and focus on public education and mental health awareness.
This lawsuit adds to the broader scrutiny surrounding artificial intelligence tools and their deployment at scale, particularly among younger users. Regulatory bodies in the United States and internationally continue to evaluate policy frameworks for managing AI risks, including mental health impacts and safety enforcement.
– By Content Syndication Services.