• ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 3 days ago

    It’s worth noting that humans aren’t immune to this problem either. The real solution will be a system that can reason and apply a heuristic for judging whether something is likely a hallucination. We’re able to do that because we interact with the outside world and get feedback whenever our internal model diverges from it, which lets us bring the two back in sync.
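
    A minimal sketch of that feedback loop, purely as an illustration: `generate_claim` and `external_lookup` are hypothetical stand-ins (not real APIs), and the similarity check is just one crude way to detect when a model's output diverges from outside feedback.

    ```python
    # Hypothetical sketch: flag a likely hallucination by comparing a model's
    # claim against feedback from an external source. generate_claim and
    # external_lookup are stand-ins, not real APIs.
    from difflib import SequenceMatcher

    def generate_claim(question: str) -> str:
        # Stand-in for a model call; would normally query an LLM.
        return "The Eiffel Tower is in Berlin."

    def external_lookup(question: str) -> str:
        # Stand-in for outside-world feedback (search, database, sensor, ...).
        return "The Eiffel Tower is in Paris."

    def likely_hallucination(question: str, threshold: float = 0.8) -> bool:
        claim = generate_claim(question)
        reference = external_lookup(question)
        # Crude divergence heuristic: low text similarity means the internal
        # model disagrees with external feedback, so treat the claim as suspect.
        similarity = SequenceMatcher(None, claim.lower(), reference.lower()).ratio()
        return similarity < threshold

    if __name__ == "__main__":
        print(likely_hallucination("Where is the Eiffel Tower?"))  # True: claim diverges
    ```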

    • msage@programming.dev · 20 hours ago

      LLMentalist is a mandatory read.

      Stop making LLMs happen; we don’t need energy-hungry bullshit generators for anything.

      There are so many more important AIs that need attention and funding to help us with real problems.

      LLMs won’t solve anything.

      • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 19 hours ago

        There is a lot of hype around LLMs, and other forms of AI certainly should be getting more attention, but arguing that this tech has no value is simply disingenuous. People really need to stop perseverating over the fact that this tech exists, because it’s not going anywhere.

        • msage@programming.dev · 14 hours ago

          Any benefits are far outweighed by the costs and dangers.

          Tell me more about the value when every LLM company is hemorrhaging money.

          • ☆ Yσɠƚԋσʂ ☆@lemmy.ml · 14 hours ago

            You seem to have a very US-centric perspective on this tech; the situation in China looks to be quite different. Meanwhile, whether you personally think the benefits are outweighed by whatever dangers you envision, the reality is that you can’t put the toothpaste back in the tube at this point. LLMs will continue to be developed. The only question is how that’s going to be done and who will control this tech. I’d much rather see it developed in the open.

            • msage@programming.dev · 2 hours ago

              You dense motherfucker.

              No LLMs are being developed in the open.

              Even provided weights mean nothing.

              It’s not knowledge that LLMs retain, just the ingested text.

              LLMs should be skipped after confirming that they are indeed the dead end they always were. And the entire world should focus on anything else.

              • DigitalStefan@fosstodon.org · 1 hour ago

                @msage @yogthos I don’t know if I agree 100% with this, but I do like what you’re saying.

                It seems like all the AI companies are simply hoping AGI emerges from LLMs, and nobody is doing the actual research to make that happen.

                People were researching it when I was a child and I suspect they’ll still be researching it when I’m collecting my pension.