A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • m0darn@lemmy.ca · 13 hours ago

    Isn’t it a probabilistic extrapolation? Isn’t that what a guess is?

    • Iconoclast@feddit.uk · edited · 10 hours ago

      It’s a Large Language Model. It doesn’t “know” anything, doesn’t think, and has zero metacognition. It generates language based on patterns and probabilities. Its only goal is to produce linguistically coherent output - not a factually correct one.

      It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

      So no, it doesn’t “guess.” It doesn’t even know it’s answering a question. It just talks.

      • SuspciousCarrot78@lemmy.world · edited · 1 hour ago

        Fair point. Counterpoint -

        Language itself encodes meaning. If you can statistically predict the next word, then you are implicitly modeling the structure of ideas, relationships, and concepts carried by that language.

        You don’t get coherence, useful reasoning, or consistently relevant answers from pure noise. The patterns reflect real regularities in the world, distilled through human communication.
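        To make “statistically predict the next word” concrete, here’s a toy bigram counter - deliberately nothing like a real transformer, just the bare idea that the prediction is a guess weighted by regularities in the training data:

```python
# Toy bigram model: "probabilistic extrapolation" in a dozen lines.
# This is a sketch of the concept only, not how an actual LLM works.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows which: an unnormalized P(next | current).
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict("the"))  # one of: cat, mat, rat - a weighted guess, not a lookup
```

        Scale the same principle up by many orders of magnitude and swap the counts for a learned model, and you get something that has to capture structure in the data to keep its predictions coherent.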

        Granted, that doesn’t mean an LLM “understands” in the human sense, or that it’s infallible.

        But reducing it to “just autocomplete” misses the fact that sufficiently rich pattern modeling can approximate aspects of reasoning, abstraction, and knowledge use in ways that are practically meaningful, even if the underlying mechanism is different from human thought.

        TL;DR: it’s a bit more than just a fancy spell check. ICBW and YMMV but I believe I can argue this claim (with evidence if so needed).

        • Iconoclast@feddit.uk · 1 hour ago

          No, I completely agree. My personal view is that these systems are more intelligent than the haters give them credit for, but I think this simplistic “it’s just autocomplete” take is a solid heuristic for most people - keeps them from losing sight of what they’re actually dealing with.

          I’d say LLMs are more intelligent than they have any right to be, but not nearly as intelligent as they can sometimes appear.

          The comparison I keep coming back to: an LLM is like cruise control that’s turned out to be a surprisingly decent driver too. Steering and following traffic rules was never the goal of its developers, yet here we are. There’s nothing inherently wrong with letting it take the wheel for a bit, but it needs constant supervision - and people have to remember it’s still just cruise control, not autopilot.

          The second we forget that is when we end up in the ditch. You can’t then climb out shaking your fist at the sky, yelling that the autopilot failed, when you never had autopilot to begin with.

          • SuspciousCarrot78@lemmy.world · edited · 43 minutes ago

            I think we’re probably on the same page, tbh. OTOH, I think the “fancy autocomplete” meme is a disingenuous thought-stopper, so I speak against it when I see it.

            I like your cruise control+ analogy. It’s not quite self-driving, but it’s not just cruise control either - something halfway.

            LLMs don’t have human understanding or metacognition, I’m almost certain.

            But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That’s weird to think about. It’s something halfway.

            With external scaffolding - memory, retrieval, provenance, and fail-closed policies - I think you can turn that into even more reliable behavior.
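            A minimal sketch of what “fail-closed” could mean in practice (every name here is hypothetical; `retrieve` and `_terms` are toy stand-ins, not any real retrieval API):

```python
# Hypothetical fail-closed wrapper: refuse rather than guess when no source supports the answer.
STOPWORDS = {"the", "of", "is", "a", "where"}

def _terms(text):
    """Crude tokenizer: lowercase words, strip punctuation, drop stopwords."""
    return {w.strip("?.!,").lower() for w in text.split()} - STOPWORDS

def retrieve(question, corpus):
    """Toy retriever: return documents sharing at least one content word with the question."""
    q = _terms(question)
    return [doc for doc in corpus if q & _terms(doc)]

def answer(question, corpus):
    sources = retrieve(question, corpus)
    if not sources:
        # Fail closed: no provenance means no answer, instead of a fluent hallucination.
        return "No supporting source found; refusing to answer."
    return f"Based on {len(sources)} source(s): {sources[0]}"

docs = ["The capital of France is Paris."]
print(answer("Where is Paris?", docs))     # grounded in the corpus
print(answer("Where is Atlantis?", docs))  # fails closed instead of guessing
```

            The design point is only the `if not sources` branch: the system’s default is silence, and fluent output is only allowed when something outside the model backs it up.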

            And then… I don’t know what happens after that. There’s going to come a time where we cross that point and we just can’t tell any more. Then what? No idea. May we live in interesting times, as the old curse goes.

            • Iconoclast@feddit.uk · edited · 28 minutes ago

              I think the “fancy autocomplete” meme is a disingenuous thought-stopper, so I speak against it when I see it.

              I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.

              These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.

              I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.

      • vii@lemmy.ml · 9 hours ago

        It gets things right sometimes purely because it was trained on a massive pile of correct information - not because it understands anything it’s saying.

        I know some humans that applies to.

      • KeenFlame@feddit.nu · 9 hours ago

        Yes, it guesstimates. What is wrong with you, arguing about semantics like that?

    • vii@lemmy.ml · 9 hours ago

      This gets very murky very fast when you start to think about how humans learn and process; we’re just meaty pattern-matching machines.