A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes on the ‘reasoning’ models.

  • SuspciousCarrot78@lemmy.world

    I think we’re probably on the same page, tbh. OTOH, I think the “fancy autocomplete” meme is a disingenuous thought-stopper, so I speak against it when I see it.

    I like your cruise control+ analogy. It’s not quite self-driving… but it’s not quite just cruise control, either. Something halfway.

    LLMs don’t have human understanding or metacognition, I’m almost certain.

    But next-token prediction suggests a rich semantic model that can functionally approximate reasoning. That’s weird to think about. It’s something halfway.

    With external scaffolding (memory, retrieval, provenance, and fail-closed policies), I think you can turn that into even more reliable behavior.
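
    Roughly what I mean, as a toy sketch. Everything here (`ask_llm`, `SOURCES`) is a placeholder, not any real API; the point is just the fail-closed shape:

    ```python
    # Fail-closed wrapper: an answer only passes through if it cites
    # a known retrieved source; otherwise we refuse instead of guessing.

    SOURCES = {
        "doc-1": "The Eiffel Tower is 330 m tall.",
    }

    def ask_llm(prompt: str) -> str:
        # Stand-in for a real model call (e.g. a local inference server).
        return "The Eiffel Tower is 330 m tall. [doc-1]"

    def answer(question: str) -> str:
        context = "\n".join(f"[{k}] {v}" for k, v in SOURCES.items())
        reply = ask_llm(f"Context:\n{context}\n\nQ: {question}\nCite sources like [doc-1].")
        # Provenance check: the reply must reference at least one known source.
        if any(f"[{k}]" in reply for k in SOURCES):
            return reply
        return "No grounded answer found; refusing rather than guessing."

    print(answer("How tall is the Eiffel Tower?"))
    ```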

    And then… I don’t know what happens after that. There’s going to come a time when we cross that point and we just can’t tell anymore. Then what? No idea. May we live in interesting times, as the old curse goes.

    • Iconoclast@feddit.uk

      > I think the “fancy autocomplete” meme is a disingenuous thought-stopper, so I speak against it when I see it.

      I can respect that. I’ve criticized it plenty myself too. I think this is just me knowing my audience and tweaking my language so at least the important part of my message gets through. Too much nuance around here usually means I spend the rest of my day responding to accusations about views I don’t even hold. Saying anything even mildly non-critical about AI is basically a third rail in these parts of the internet.

      These systems do seem to have some kind of internal world model. I just have no clue how far that scales. Feels like it’s been plateauing pretty hard over the past year or so.

      I’d be really curious to try the raw versions of these models before all the safety restrictions get slapped on top for public release. I don’t think anyone’s secretly sitting on actual AGI, but I also don’t buy that what we have access to is the absolute best versions in existence.

      • SuspciousCarrot78@lemmy.world

        I hear you. Agreed.

        Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer abliteration methods seem to increase reasoning ability, because the LLM no longer has one foot on the brake and the other on the accelerator.
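
        If you want to try it, a few lines of Hugging Face `transformers` is enough. The checkpoint name below is just a stand-in, swap in whichever abliterated build you actually download:

        ```python
        # Sketch: run a local model with transformers.
        # "some-user/Qwen3-4B-abliterated" is a hypothetical repo name.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "some-user/Qwen3-4B-abliterated"  # placeholder

        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

        prompt = "Approximate the integral of exp(-x^2) from 0 to 1."
        inputs = tok(prompt, return_tensors="pt").to(model.device)
        out = model.generate(**inputs, max_new_tokens=256)
        print(tok.decode(out[0], skip_special_tokens=True))
        ```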

        I noticed that a fair bit in maths reasoning using Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An abliterated one will give you the workable answer and say, “I know what you were after, but here’s the best IRL approximation.”

        Bijan did an entertaining review of Qwen 3-8 Josefied that explains the basic idea:

        https://www.youtube.com/watch?v=gr5nl3P4nyM&t=0