• ulterno@programming.dev
    17 hours ago

    That doesn’t seem like a solvable thingy.
    People tend to make stuff up, too; the difference is that a person’s bluff gets revealed in non-verbal communication.

    • magnetosphere@fedia.io
      17 hours ago

      Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.

      If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.

      • ulterno@programming.dev
        16 hours ago

        AI is pretty much possible; we are just thinking about it the wrong way.

        We are expecting AI to combine the three best traits of both worlds:

        • High I/O ability: we get that from computers.
        • Determinism and correctness: computers have always had a high level of determinism, but never correctness, because a computer does not know what is correct.[1]
        • Intelligence and thought: intelligence is a perception. AI will always have a lower depth of thought than us as long as it is dependent upon us.

        So we only get one best from the other (person) world. And in exchange for it, we have to deal with one worst of the computer world: we lose determinism, because we rely upon the model being a higher level of fuzzy.

        Of course, I don’t mean “determinism” in the exact and full meaning. The LLM is still made on top of a computer, so for the same internal saved state and the same external input (including any randomising functions that might be used), the output will still be the same. But you can’t get the kind of logical determinism that you expect from normal computer operations.
        A dumbed-down example to get my thoughts across: you can use any of a + b, ADD(A,B), or SUM(A:B) and still get the same result.
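        The same contrast can be sketched in Python. Here toy_model is a made-up stand-in for an LLM, not any real API: seeding a PRNG from the literal prompt string makes it bit-level deterministic (the same exact input always gives the same output), while still offering no guarantee that logically equivalent phrasings agree.

```python
import random

# Hypothetical stand-in for an LLM (illustration only): bit-level
# deterministic because the exact prompt string seeds the PRNG state,
# but sensitive to the surface wording of the prompt.
def toy_model(prompt: str) -> int:
    rng = random.Random(prompt)  # same exact input -> same internal state
    return rng.randint(0, 9)

# Logical determinism in ordinary computing: equivalent expressions
# always agree, like a + b vs ADD(A,B) vs SUM(A:B) in a spreadsheet.
a, b = 2, 3
assert a + b == sum([a, b]) == (lambda x, y: x + y)(a, b)

# Bit-level determinism still holds for the fuzzy model...
assert toy_model("add 2 and 3") == toy_model("add 2 and 3")

# ...but two logically equivalent phrasings carry no such guarantee:
print(toy_model("add 2 and 3"), toy_model("what is 2 plus 3?"))
```

        The point of the sketch: repeating the identical call is reproducible, yet nothing ties the two rephrased prompts to the same answer, which is the “logical determinism” that ordinary computing gives and the model does not.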


        1. This boils down to the same thing that one person once said to some computer guy: ‘If I enter the wrong numbers, will I still get the correct answer?’ ↩︎