Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.
But in a blog post, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

To my legal headcanon, this boils down to whether OpenAI flagged him and did nothing.
If they flagged him, then they knew about the ToS violations and did nothing, and they should be in trouble.
If they didn’t know, but can demonstrate that they would take action in this situation, then, in my opinion, they are legally in the clear…
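
For what it’s worth, “flagging” here isn’t exotic machinery: OpenAI publishes a public Moderation API with explicit self-harm categories, so screening each message is technically straightforward. Here’s a minimal sketch of what such a check could look like; the escalation step at the end is my own assumption about what a provider might do, not anything OpenAI has described doing:

```python
# Sketch of automated "flagging" using OpenAI's public Moderation API.
# The API, model name, and category fields are real; the escalation
# logic below is hypothetical -- my assumption, not OpenAI's practice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def trips_self_harm_flag(text: str) -> bool:
    """Return True if the Moderation API flags the text for self-harm."""
    resp = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    result = resp.results[0]
    # Per-category booleans returned by the API; these three cover
    # self-harm generally, stated intent, and requests for instructions.
    return (
        result.categories.self_harm
        or result.categories.self_harm_intent
        or result.categories.self_harm_instructions
    )

if trips_self_harm_flag("example user message"):
    # Hypothetical handling: end the session, show crisis resources,
    # and queue the conversation for human review.
    print("flagged: escalate to safety handling")
```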
depends on whether intent is a required element of the state’s wrongful death statute (my state says it’s not, as wrongful death is there for criminal homicides that don’t fit the murder statute). if openai acted intentionally, recklessly, or negligently in this, they’re at least partially liable. if they flagged him, it seems either intentional or reckless to me. if they didn’t, it’s negligent.
however, if the deceased used some kind of prompt injection (i don’t know the right term; this isn’t my field) to bypass gpt’s ethical restrictions, and if knowing how to do that is in fact esoteric, only then would i find that openai was not at least negligent.
as i myself have gotten gpt to do something it’s restricted from doing, and i haven’t worked in IT since the 90s, i’m led to a single conclusion.