A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes on the ‘reasoning’ models.

  • SuspciousCarrot78@lemmy.world · 2 hours ago

    I hear you. Agreed.

    Have you tried running your own local LLM? Abliterated ones (safety theatre removed) can produce some startling results. As a bonus, newer ablit methods seem to increase reasoning ability, because the LLM doesn’t have one foot on the brake and the other on the accelerator.

    I noticed that a fair bit in maths reasoning using Qwen 3-4B HIVEMIND. A normal LLM will tie itself in knots trying to give you the perfect answer. An ablit one will give you the workable answer and say “I know what you were after, but here’s the best IRL approximation”.

    Bijan did a fun review of Qwen 3-8 Josefied that’s entertaining and explains the basic idea:

    https://www.youtube.com/watch?v=gr5nl3P4nyM&t=0