• merc@sh.itjust.works · 2 hours ago

    Just don’t think that turning off the sycophancy improves the quality of the responses. It’s still just responding to your questions with essentially “what would a plausible answer to this question look like?”

    • WanderingThoughts@europe.pub · 1 hour ago

      You can set default instructions to always be factual, always provide a link to back up its answer, and to give an overall reliability score along with why it arrived at that score. That stops it from making stuff up, and it lets you verify quickly. It’s not perfect, but it’s much better than just trusting what it puts on the screen. A rough sketch of what that could look like is below.
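      For illustration only, here is roughly what baking those default instructions into an API call might look like, assuming the OpenAI Python SDK; the instruction wording and the model name are placeholders, not a recommendation:

      ```python
      # Rough sketch (assumes the OpenAI Python SDK and an OPENAI_API_KEY in the
      # environment). The instruction text and model name are placeholders.
      from openai import OpenAI

      DEFAULT_INSTRUCTIONS = (
          "Always be factual. Provide a source link for every claim. "
          "End with an overall reliability score and explain how you arrived at it."
      )

      client = OpenAI()

      def ask(question: str) -> str:
          # The same "default instructions" idea, expressed as a system message.
          resp = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {"role": "system", "content": DEFAULT_INSTRUCTIONS},
                  {"role": "user", "content": question},
              ],
          )
          return resp.choices[0].message.content

      print(ask("When did the first transatlantic telegraph cable start working?"))
      ```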

      • merc@sh.itjust.works · 1 hour ago

        “That stops it from making stuff up”

        No, it doesn’t. That’s simply not how LLMs work. They’re “making stuff up” 100% of the time. If the training data is good, the stuff they’re making up more or less matches the training data. If the training data isn’t good, they’ll make up stuff that sounds plausible.

        • WanderingThoughts@europe.pub · 47 minutes ago (edited)

          These days, if you ask it for sources/links, it’ll search the web and pull information from the pages instead of relying only on its training data. That doesn’t work for everything, of course. And the biggest risk is that sites get polluted with slop, so the sources become worthless over time. You can at least sanity-check the links it hands back, as in the sketch below.
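          If you want to sanity-check those links quickly, something like this rough sketch could work (it assumes the `requests` library); it only confirms the URLs resolve, it can’t tell whether a page actually supports the claim:

          ```python
          # Rough sketch: pull URLs out of an LLM answer and check that they resolve.
          # This only catches dead or invented links; you still have to read the pages.
          import re
          import requests

          def check_cited_links(answer_text: str) -> None:
              urls = re.findall(r"https?://\S+", answer_text)
              for url in urls:
                  url = url.rstrip(".,)")  # drop trailing punctuation caught by the regex
                  try:
                      status = requests.head(url, allow_redirects=True, timeout=5).status_code
                  except requests.RequestException as exc:
                      status = f"error: {exc}"
                  print(f"{url} -> {status}")

          check_cited_links("See https://example.com/history for details.")
          ```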

          • merc@sh.itjust.works · 46 minutes ago

            Sounds infallible; you should use it to submit cases to courts. I hear they love it when people cite things that an AI told them were real cases.