Evaluating 35 open-weight models across three context lengths (32K, 128K, 200K), four temperatures, and three hardware platforms, consuming 172 billion tokens across more than 4,000 runs, we find that the answer is “substantially, and unavoidably.” Even under optimal conditions (the best model, at the temperature chosen specifically to minimize fabrication) the floor is non-zero and rises steeply with context length. At 32K, the best model (GLM 4.5) fabricates 1.19% of answers, top-tier models fabricate 5–7%, and the median model fabricates roughly 25%.
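
As a rough illustration of the arithmetic behind those percentages (the field names and grading step here are hypothetical, not the actual evaluation harness), a per-model fabrication rate is simply fabricated answers divided by graded runs:

```python
from collections import defaultdict

def fabrication_rates(runs):
    """Aggregate graded runs into a per-model fabrication rate.

    Each run is a dict with hypothetical keys:
      'model'      - model name, e.g. 'GLM 4.5'
      'fabricated' - True if the grader judged the answer fabricated
    """
    counts = defaultdict(lambda: [0, 0])  # model -> [fabricated, total]
    for run in runs:
        counts[run["model"]][0] += int(run["fabricated"])
        counts[run["model"]][1] += 1
    return {model: fab / total for model, (fab, total) in counts.items()}

# 3 fabricated answers out of 252 graded runs is roughly 1.19%
runs = [{"model": "GLM 4.5", "fabricated": i < 3} for i in range(252)]
print(fabrication_rates(runs))  # {'GLM 4.5': 0.0119...}
```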

    • Zink@programming.dev · 1 hour ago

      I’m no expert and don’t care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.

      So I would expect the outputs of those models not to be some kind of magically correct description of the world, but rather roughly “this passes for something a person on the internet might write.” (A toy sketch of that idea follows this comment.)

      It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.

      What LLMs do is get you a passable elaborate forum post replying to your question, written by an extremely confident internet rando. But it’s done at computer speed and global scale!
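
      A toy, purely illustrative sketch of why “plausible, not true” is the expected outcome: the training objective only rewards predicting what text tends to follow other text, and factual correctness never appears in it. The bigram counter below is a deliberately crude stand-in for real next-token training:

      ```python
      import random
      from collections import Counter, defaultdict

      def train_bigram(tokens):
          # Toy stand-in for "train on the public internet":
          # count which token tends to follow which.
          model = defaultdict(Counter)
          for prev, nxt in zip(tokens, tokens[1:]):
              model[prev][nxt] += 1
          return model

      def generate(model, start, length=8):
          # Emit whatever plausibly follows; truth never enters the objective.
          out = [start]
          for _ in range(length):
              followers = model.get(out[-1])
              if not followers:
                  break
              words, weights = zip(*followers.items())
              out.append(random.choices(words, weights=weights, k=1)[0])
          return " ".join(out)

      corpus = "cats chase dogs and dogs chase cats and cats sleep all day".split()
      print(generate(train_bigram(corpus), "dogs"))  # fluent-looking, not fact-checked
      ```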

      • [deleted]@piefed.world · 5 hours ago

        Aka being wrong, but with a fancy name!

        When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.

        • Scipitie@lemmy.dbzer0.com · 4 hours ago

          Accepting concepts like “right” and “wrong” gives those tools way too much credit and basically follows the AI narrative of the corporations behind them. Those terms can only be applied to the output, not to the tool itself.

          To be precise:

          LLMs can’t be right or wrong because the way they work has no link to any reality; it’s stochastics, not evaluation. I also don’t like the term hallucination, for the same reason. It’s simply a too-high temperature setting making the sampling jump into a nearby but unrelated vector set.
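
          For concreteness, a minimal sketch of what the temperature knob does (illustrative only, not any particular model’s implementation): the raw scores are divided by the temperature before the softmax, so a high temperature flattens the distribution and makes those nearby-but-unrelated choices far more likely to be sampled:

          ```python
          import math
          import random

          def sample_with_temperature(logits, temperature=1.0):
              # Divide raw scores by the temperature, softmax, then sample.
              # High temperature flattens the distribution; low temperature
              # approaches plain argmax.
              scaled = [x / temperature for x in logits]
              peak = max(scaled)                           # numerical stability
              exps = [math.exp(x - peak) for x in scaled]
              total = sum(exps)
              weights = [x / total for x in exps]
              return random.choices(range(len(weights)), weights=weights, k=1)[0]

          logits = [4.0, 2.0, 0.5]   # toy scores for three candidate tokens
          print(sample_with_temperature(logits, 0.2))  # almost always token 0
          print(sample_with_temperature(logits, 2.0))  # tokens 1 and 2 appear often
          ```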

          Why this is an important distinction: arguing that an LLM is wrong is arguing on the ground of ChatGPT and the like. It then becomes “oh, but we’ll make them better!”, and their marketing departments rejoice.

          To take your calculator analogy: just as floating-point errors are inherent to those tools, wrong outputs are a core part of LLMs.

          We can minimize that, but then they automatically lose part of their function. This trade-off is far stronger for LLMs than limiting a calculator to 16 digits after the decimal point, though…
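
          The calculator side of that analogy, made concrete (the error below is inherent to binary floating point, not a defect of any one device):

          ```python
          # 0.1 and 0.2 have no exact binary floating-point representation,
          # so the sum is off by a tiny amount that is inherent to the format.
          total = 0.1 + 0.2
          print(total)          # 0.30000000000000004
          print(total == 0.3)   # False

          # Rounding for display hides the error; it does not remove it.
          print(round(total, 10) == 0.3)  # True
          ```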

            • Scipitie@lemmy.dbzer0.com · 4 hours ago

              That’s my problem: any single word humanizes the tool, in my opinion. Perhaps something like “stochastic debris” comes close, but there’s no chance to counter the combined force of pop culture, corp speak, and humanity’s talent for seeing humanoid behavior everywhere but in each other. :(

        • bad1080@piefed.social · 4 hours ago

          if you have a lobby you get special names; look at the pharma industry, which coined the term “discontinuation syndrome” for simple “withdrawal”