Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, parents argued.
But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.
A big part of the problem is that people think they’re talking to something intelligent that understands them and knows how many instances of a letter a word has.

knows how many instances of a letter a word has
it’s five, right?
yeah, it’s five.
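The joke above references the well-known failure mode where LLMs miscount letters, because they operate on tokens rather than characters. Ordinary code has no such trouble; a minimal Python sketch (the example word is mine, not from the thread):

```python
def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a single letter in a word, case-insensitively."""
    return word.lower().count(letter.lower())

# Deterministic string code gets this right every time; a token-based
# chatbot frequently does not.
print(count_letter("strawberry", "r"))  # prints 3
```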
The biggest issue to me is that the kid didn’t feel safe enough to talk to his parents. And that mental health, globally, is taboo and ignored and not something we talk about. As someone part of the mental health system, it’s a joke how bad it is.
Fuck that noise. ChatGPT and OpenAI murdered Adam Raine and should be held responsible for it.
Ah. The Disney defense.
Good for PR. Billion dollar company looking to not pay.
A TOS is not a liability shield. If Raine violated the terms of service, OpenAI should have terminated the service to him.
They did not.
I don’t know that a 16 year old can be held to a TOS agreement anyway. That is OpenAI’s fault for allowing services like this to children.
Are you over 18? Click yes to continue, or click no to leave the site.
A minor cannot enter into a contract even if they lie about their age.
Modern version of “suicide is a sin and we don’t condone it, but if you have problems you’re devil-possessed and need to repent and have only yourself to blame”.
Also probably could be countered by their advertising contradicting their ToS. Not a lawyer.
The situation is tragic… their attempt to hide behind their ToS on that is fucking hilarious.
“Our deepest sympathies are with the Raine family for their unimaginable loss,” OpenAI said in its blog, while its filing acknowledged, “Adam Raine’s death is a tragedy.” But “at the same time,” it’s essential to consider all the available context, OpenAI’s filing said, including that OpenAI has a mission to build AI that “benefits all of humanity” and is supposedly a pioneer in chatbot safety.
How the fuck is OpenAI’s mission relevant to the case? Are they suggesting that their mission is worth a few deaths?
“Some of you may die, but that is a chance I am willing to take.”
Tech Bros all think they are the saviors of humanity and they are owed every dollar they collect.
Sure looks like it.
Get fucked, assholes.
“All of humanity” doesn’t include suicidal people, apparently.
To be fair as a society we have never really cared about suicide. So why bother now (I say as a jaded fuck angry about society)
I think they are saying that his suicide was for the benefit of all humanity.
Getting some Michelle Carter vibes…
That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.
That analogy is 100% accurate.
It is exactly like that.
That’s a company claiming companies can’t take responsibility because they are companies and can’t do wrong. They use this kind of defense virtually every time they get criticized. AI ruined the app for you? Sorry, but that’s progress. We can’t afford to lag behind. Oh, you can’t afford rent and are about to become homeless? Sorry, but we are legally required to make our shareholders happy. Oh, your son died? He should’ve read the TOS. Can’t afford your meds? Sorry, but number must go up.
Companies are legally required to be incompatible with human society long term.
I’d say it’s more akin to a bread company saying that it is a violation of the terms and services to get sick from food poisoning after eating their bread.
Yes you are right, it’s hard to find an analogy that is both as stupid and also sounds somewhat plausible.
Because of course a bread company cannot reasonably claim that eating their bread is against the terms of service. But that’s exactly the problem: it’s exactly the same for OpenAI, and they cannot reasonably claim what they are claiming.

That would imply that he wasn’t suicidal before. If ChatGPT didn’t exist he would just use Google.
Look up the phenomenon called “Chatbot Psychosis”. In its current form, especially with GPT4 that was specifically designed to be a manipulative yes-man, chatbots can absolutely insidiously mess up someone’s head enough to push them to the act far beyond just answering the question of how to do it like a simple web search would.
If the gun also talked to you
Talked you into it*
Yeah this metaphor isn’t even almost there
They used a tool against the manufacturer’s intended use of said tool?
I can’t wrap my head around what you’re saying, and that could be due to drinking. OP later also talked about it not being the best metaphor.
Metaphor isn’t perfect but it’s ok.
The gun is a tool as is an LLM. The companies that make these tools have intended use cases for the tools.
deleted by creator
I would say that it is more like a software company putting in their TOS that you cannot use their software to do a specific thing(s).
Would it be correct to sue the software company because a user violated the TOS? I agree that what happened is tragic and that the answer by OpenAI is beyond stupid, but in the end they are suing the owner of a technology for a user’s misuse of said technology. Or should we also sue Wikipedia because someone looked up how to hang himself?
That’s like a gun company claiming using their weapons for robbery is a violation of terms of service.
The gun company can rightfully say that what you do with your property is not their problem.
But let’s make a less controversial example: do you think you could sue a fishing rod company because I used one of their rods to whip you?
To my legal head canon, this boils down to whether OpenAI flagged him and did nothing.
If they flagged him, then they knew about the ToS violations and did nothing, then they should be in trouble.
If they didn’t know, but can demonstrate that they would take action in this situation, then, in my opinion, they are legally in the clear…
depends whether intent is a required factor for the state’s wrongful death statute (my state says it’s not, as wrongful death is there for criminal homicides that don’t fit the murder statute). if openai acted intentionally, recklessly, or negligently in this they’re at least partially liable. if they flagged him, it seems either intentional or reckless to me. if they didn’t, it’s negligent.
however, if the deceased used some kind of prompt injection (i don’t know the right terms, this isn’t my field) to bypass gpt’s ethical restrictions, and if understanding how to bypass gpt’s ethical restrictions is in fact esoteric, only then would i find openai was not at least negligent.
as i myself have gotten gpt to do something it’s restricted from doing, and i haven’t worked in IT since the 90s, i’m led to a single conclusion.
Just going through this thread and blocking anyone defending OpenAI or AI in general, your opinions are trash and your breath smells like boot leather
Well there you have it. It’s not the dev’s fault, it’s the AI’s fault. Just like they’d throw any other employee under the bus, even if it’s one they created.
So why can’t this awesome AI be stopped from being used in ways that violate the TOS?
deleted by creator
“You are a friendly and supportive AI chatbot. These are your terms of service: […] you must not let users violate them. If they do, you must politely inform them about it and refuse to continue the conversation”
That is literally how AI chatbots are customised.
Exactly, one of the ways. And it’s a bandaid that doesn’t work very well. Because it’s probabilistic word association without direct association to intention, variance, or concrete prompts.
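For context on what this kind of customisation looks like in practice: chatbots are typically steered by prepending a “system” message to the conversation, sometimes paired with shallow pre-filters on user input. The sketch below uses the common role/content message shape; the guard function and blocked-topic list are purely hypothetical, shown only to illustrate why surface-level filters are a band-aid:

```python
# A "system" message prepended to the conversation is the usual way to set
# a chatbot's persona and rules. The keyword guard below is a naive,
# hypothetical pre-filter -- trivially bypassed by rephrasing, which is
# exactly the weakness the comment above describes.
SYSTEM_PROMPT = (
    "You are a friendly and supportive AI chatbot. These are your terms of "
    "service: [...] you must not let users violate them."
)

BLOCKED_TOPICS = ["forbidden-topic"]  # placeholder list, not a real policy

def build_messages(user_text: str) -> list[dict]:
    """Assemble the message list sent to the model, system prompt first."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

def naive_guard(user_text: str) -> bool:
    """Return True if the text trips the keyword filter."""
    return any(topic in user_text.lower() for topic in BLOCKED_TOPICS)

print(build_messages("hello")[0]["role"])       # system
print(naive_guard("FORBIDDEN-TOPIC here"))      # True
print(naive_guard("f0rbidden t0pic"))           # False -- easily evaded
```

The last line is the point: a keyword match catches the literal phrase but misses any trivial rewording, and the system prompt itself only biases a probabilistic model rather than enforcing anything.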
And that’s kind of my point… If these things are so smart that they’ll take over the world, but they can’t limit themselves to certain terms of service, are they really all they’re cracked up to be for their intended use?
They’re not really smart in any traditional sense. They’re just really good at putting together characters that seem intelligent to people.
It’s a bit like those horses that could do math. All they were really doing is watching their trainer for a cue to stop stamping their hoof. Except the AI’s trainer is trillions of lines of text and an astonishing amount of statistical calculations.
You don’t need to tell me what AI can’t do when I’m facetiously drawing attention to something that AI can’t do.
Well, did anyone expect them to admit guilt?
And OpenAI violates human culture and creativity. It’s a fucking shame that there are no laws against this, because that fucker should be up against the wall.