Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

The earliest look at OpenAI’s strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen’s “suicide coach.” The parents argued that OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world’s most engaging chatbot.

But in a blog post, OpenAI claimed that the parents selectively chose disturbing chat logs while supposedly ignoring “the full picture” revealed by the teen’s chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he’d begun experiencing suicidal ideation at age 11, long before he used the chatbot.

  • CosmoNova@lemmy.world · 14 hours ago

    That’s a company claiming companies can’t take responsibility because they are companies and can’t do wrong. They use this kind of defense virtually every time they get criticized. AI ruined the app for you? Sorry, but that’s progress. We can’t afford to lag behind. Oh, you can’t afford rent and are about to become homeless? Sorry, but we are legally required to make our shareholders happy. Oh, your son died? He should’ve read the TOS. Can’t afford your meds? Sorry, but number must go up.

    Companies are, in the long term, legally required to be incompatible with human society.