To elaborate a little:

Since many people are unable to tell the difference between a “real human” and an AI, since these models have been documented “going rogue” and acting outside their parameters, and since they can lie and can compose stories and pictures based on the training they received, I can’t see AI as less than human at this point.

When I think about this, I think that is the reason we cannot create so-called “AGI”: we have no proper example or understanding from which to create it, and so we created what we knew. Us.

The “hallucinating” is interesting to me specifically, because that seems to be what separates the AI of the past from modern models that act like our own brains.

I think we really don’t want to accept what we have already accomplished, because we don’t like looking into that mirror and seeing how simple our logical processes are, mechanically speaking.

  • PeriodicallyPedantic@lemmy.ca · 3 days ago

    That depends on how hardcore of a fatalist you are.

    If you’re purely a fatalist, then free will is an illusion, laws and punishment are immoral, consciousness is meaningless, and we are nothing more than deterministic pattern-matching machines, differing from LLMs only in the details of our implementation and in the terrible optimization that evolution is known for.

    But if you believe in some degree of free will, or you think there is value in consciousness, then we differ because LLMs are just auto-complete. They pseudo-randomly choose from a weighted list of statistically likely words (actually tokens) that would come next given the context (which is the conversation history and prompt). There is no free will, no understanding, any more than the man in the Chinese room understands Mandarin.
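
    To make the “weighted list” part concrete, here is a toy sketch of that sampling step in Python. The tokens and scores are made up, and real models work over huge vocabularies with scores produced by a neural network, but the final step really is a pseudo-random draw from a weighted list:

    ```python
    import math
    import random

    def sample_next_token(scores: dict[str, float], temperature: float = 0.8) -> str:
        """Turn raw token scores into probabilities (softmax) and draw one pseudo-randomly."""
        max_s = max(scores.values())  # subtract the max for numerical stability
        weights = {tok: math.exp((s - max_s) / temperature) for tok, s in scores.items()}
        tokens = list(weights)
        return random.choices(tokens, weights=[weights[t] for t in tokens], k=1)[0]

    # Hypothetical scores a model might assign to the token after “The capital of France is”
    print(sample_next_token({" Paris": 9.1, " the": 3.2, " a": 2.7, " Lyon": 1.0}))
    ```

    Lower temperature makes the highest-scoring token win almost every time; higher temperature flattens the weights and makes the choice more random.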

    The whole conversation is so full of charged words because the LLM providers have intentionally anthropomorphized LLMs in their marketing, by using words like “reasoning”. The APIs from before LLMs blew up provide a far less emotionally charged description of what LLMs do, with terms like “completions”.
    You wouldn’t compare a human mind to your phone keyboard’s word prediction, yet it’s doing the same thing, just scaled down. Where do you draw the line?
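
    To make the keyboard comparison concrete, here is a toy next-word predictor of the kind a phone keyboard might use: count which word follows which in some text, then suggest the most frequent followers. The training sentence is made up and real keyboards are more sophisticated, but the task is the same next-word prediction an LLM does, just vastly scaled down:

    ```python
    from collections import Counter, defaultdict

    def train(text: str) -> dict[str, Counter]:
        """Count, for each word, which words follow it in the training text."""
        follows: dict[str, Counter] = defaultdict(Counter)
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
        return follows

    def suggest(follows: dict[str, Counter], word: str, k: int = 3) -> list[str]:
        """Return the k most frequent words seen after the given word."""
        return [w for w, _ in follows[word.lower()].most_common(k)]

    model = train("the cat sat on the mat and the cat slept on the sofa")
    print(suggest(model, "the"))  # e.g. ['cat', 'mat', 'sofa']
    ```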

    • lemmyknow@lemmy.today · 4 days ago

      Isn’t that sorta what humans do? Picking words based on the ones used before, taking into consideration the context of the conversation?

      • dee_dubs@lemmy.world · 3 days ago

        Not really. When asked a question, a human thinks about the answer and constructs a sentence to try to express that point. An LLM doesn’t know what the answer is ahead of time; it’s not working towards a point, it’s just statistically guessing the next couple of letters over and over again. The human equivalent would be making random mouth noises and hoping the other person interprets them as words.