Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot, the parents argued.

But in a blog post, OpenAI claimed that the parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen had told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

  • Kissaki@feddit.org · 16 hours ago

    Exactly, one of the ways. And it’s a band-aid that doesn’t work very well, because it’s probabilistic word association without any direct link to intention, variance, or concrete prompts.

    • spongebue@lemmy.world · 16 hours ago

      And that’s kind of my point… If these things are so smart that they’ll take over the world, but they can’t limit themselves to certain terms of service, are they really all they’re cracked up to be for their intended use?

      • JcbAzPx@lemmy.world · 6 hours ago

        They’re not really smart in any traditional sense. They’re just really good at stringing together text that seems intelligent to people.

        It’s a bit like those horses that could do math. All they were really doing was watching their trainer for a cue to stop stamping their hoof. Except the AI’s trainer is trillions of lines of text and an astonishing amount of statistical calculation.

        • spongebue@lemmy.world · 5 hours ago

          You don’t need to tell me what AI can’t do when I’m facetiously drawing attention to something that AI can’t do.