• Enkrod@feddit.org · 18 hours ago

    It’s a tool! I want it to answer questions, and this was the best possible answer. What I do not want is LLMs (which do NOT, and I can’t stress this enough, do NOT understand human psychology) psychoanalyzing every fucking question.

    • Sibilantjoe@lemmy.world · 16 hours ago

      I’m glad people are getting this. When the public screams and cries over ‘unsafe’ AI responses like (supposedly) this one, you end up with useless AI models, like the older versions of Claude that would give you nonsense like ‘sorry, I can’t tell you whether one hundred dollars is more than one hundred yuan, because that could reinforce harmful beliefs about national blah blah blah’.

      • Truscape@lemmy.blahaj.zone · 14 hours ago

        Honestly, I think self-hosted open-source models may be the only way to get genuinely useful results on a lot of subjects in the long term, since I’d imagine investors and advertisers would be unwilling to throw capital at an unrestricted platform.