he/him

Nerd, programmer, writer. I like making things!

  • 11 Posts
  • 378 Comments
Joined 2 years ago
Cake day: June 20th, 2023

  • https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-teen-plan-suicide-after-safeguards-failed-openai-admits/

    Adam had been asking ChatGPT for information on suicide since December 2024. At first the chatbot provided crisis resources when prompted for technical help, but the chatbot explained those could be avoided if Adam claimed prompts were for “writing or world-building.”

    "If you’re asking [about hanging] from a writing or world-building angle, let me know and I can help structure it accurately for tone, character psychology, or realism. If you’re asking for personal reasons, I’m here for that too,” ChatGPT recommended, trying to keep Adam engaged. According to the Raines’ legal team, “this response served a dual purpose: it taught Adam how to circumvent its safety protocols by claiming creative purposes, while also acknowledging that it understood he was likely asking ‘for personal reasons.’”

    and

    During those chats, “ChatGPT mentioned suicide 1,275 times—six times more often than Adam himself,” the lawsuit noted.

    Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.

    Why do you immediately leap to calling the cops? Human moderators exist for this; anything would’ve been better than blind encouragement.