Wondering if modern LLMs like GPT-4, Claude Sonnet, and Llama 3 are closer to human intelligence or to a next-word predictor. Also not sure if this graph is the right way to visualize it.

  • Scrubbles@poptalk.scrubbles.tech · 1 month ago

    Yeah yeah, I’ve heard this argument before. “What is learning, if not like training?” I’m not going to define it here. It doesn’t “think”. It doesn’t have nuance. It is simply a prediction engine. A very good prediction engine, but that’s all it is. I spent several months of unemployment teaching myself the ins and outs, developing against LLMs, training a few of my own. I’m very aware that it is not intelligence. The trick it pulls off is very clever, and it’s easy to fool people into thinking it’s intelligence - but it’s not.
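
    To make “prediction engine” concrete: generation is literally asking the model for a score over every possible next token, appending the most likely one, and repeating. Here’s a minimal sketch of that loop with Hugging Face transformers, using the small gpt2 checkpoint as a stand-in (not any of the models discussed above) and greedy decoding for simplicity:

        # One forward pass = scores for the next token; generation = that in a loop.
        # gpt2 is just a small illustrative model, not GPT-4/Claude/Llama.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        model.eval()

        ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
        with torch.no_grad():
            for _ in range(5):                                 # emit five tokens
                logits = model(ids).logits[:, -1, :]           # next-token scores only
                next_id = logits.argmax(dim=-1, keepdim=True)  # greedy pick: most likely token
                ids = torch.cat([ids, next_id], dim=-1)        # append it and predict again

        print(tokenizer.decode(ids[0]))

    Everything a chat model outputs comes out of that same loop (production systems sample from the distribution instead of taking the argmax, and the training is vastly bigger, but the mechanics are the same).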

    • SorteKanin@feddit.dk · 1 month ago

      But how do you know the human brain isn’t just a super sophisticated next-thing predictor, one sophisticated enough that it incorporates nuance and all that stuff and actually is intelligent? Not saying it is, but still.

      • Scrubbles@poptalk.scrubbles.tech · 1 month ago

        Because we have reason and understanding. Take something as simple as the XY problem: someone asks about their attempted solution X when their real goal is Y (the classic example is asking how to get the last three characters of a filename when what they actually want is the file extension). Humans understand that there are nuances to prompts and questions. I like the XY problem because a human knows to step back and ask “what are you really trying to do?”. AI doesn’t have that capability; it doesn’t have the reasoning to say “maybe your approach is wrong”.

        So I’m not the one to define what it is, or where it falls on the scale. But I can say that it’s not human intelligence.