• AnarchoEngineer@lemmy.dbzer0.com · 1 day ago

    I think you’re getting hung up on the words rather than the content. While our definitions of terms may be rather vague, the properties I described are not circularly defined.

    To be aware of the difference between self and non-self means to be able to sense stimuli originating from the self, sense stimuli not originating from the self, and learn relationships between the two.

    As long as aspects of the self (like current and past thoughts) can be sensed, i.e. encoded into a representation the mind can work with directly (in our case, neural spike trains), and there are senses that compare those sensations with other or past sensations, and the mind can learn patterns in those encodings (like spiking neural nets can), then it should be possible for conscious awareness to arise. (If you’re curious about the kind of learning that needs to happen, look into Tolman-Eichenbaum Machines, though non-spiking ones aren’t really capable of self-learning.)
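    To make that a bit more concrete, here’s a rough toy sketch in Python (my own illustration, nothing like an actual model): a loop that encodes external stimuli, encodes its own previous activity as a second stimulus stream, and learns associations between the two.

    ```python
    import numpy as np

    class ToySelfModel:
        """Toy loop: sense external stimuli, sense the system's own internal state,
        and learn correlations between the two streams (Hebbian-style)."""

        def __init__(self, n_external, n_internal, lr=0.01):
            self.state = np.zeros(n_internal)                # "self" signals (its own past activity)
            self.assoc = np.zeros((n_internal, n_external))  # learned self<->world associations
            self.lr = lr

        def step(self, external):
            # Sense stimuli NOT from the self (the environment)...
            world = np.asarray(external, dtype=float)
            # ...and stimuli FROM the self (its own previous activity).
            self_sense = self.state.copy()
            # Compare/associate the two streams and learn the relationship.
            self.assoc += self.lr * np.outer(self_sense, world)
            # New internal state depends on both the world and the learned self-model.
            self.state = np.tanh(self.assoc @ world + self_sense)
            return self.state
    ```

    It’s obviously nowhere near consciousness, but it has the three ingredients I mean: sensing self, sensing non-self, and learning the relationship between them.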

    I hope that’s a clear enough “empirical” explanation for you.

    As for qualia, you are entirely wrong. What you describe would not prove that my raw experience of green is the same as your green, only that we both have qualia which can arise from the color green. You can say that it’s not pragmatic to think about that which cannot be known, and I’ll agree that qualia must be represented in a physical way and thus be recreatable in that person’s brain, but the complexity of human brains actually precludes the ability to define what is the qualia and what are other thoughts. The differences between individuals likely preclude the ability to say “oh, when these neurons are active it means this,” because other people have different neural structures. Similar? Absolutely. Similar enough that for any experience you could find exactly the same neurons firing in exactly the same way as in someone else? Absolutely not.

    Your last statements make it seem like you don’t understand the difference between learning and knowledge. LLMs don’t learn when you use them. Neither do most modern chess models. They don’t learn at all unless they are being trained by an outside source that gives them an input, expects an output, and then computes the weight changes needed to get closer to the answer via gradient descent.

    A typical ANN trained this way does not learn from new experiences. Furthermore, it is not capable of referencing its own thoughts, because it doesn’t have any.
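    If it helps, here’s what that outside-supervision loop looks like in schematic form (plain numpy, not any particular framework): all of the “learning” happens inside this loop at training time, and nothing changes when you later query the frozen weights.

    ```python
    import numpy as np

    # Schematic supervised training: an outside source supplies (input, expected output)
    # pairs and nudges the weights via gradient descent. None of this runs at inference.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(2, 1)) * 0.1          # the model's only "knowledge"
    X = rng.normal(size=(100, 2))              # inputs chosen by the trainer
    y = X @ np.array([[1.5], [-0.7]])          # outputs the trainer expects

    lr = 0.1
    for _ in range(500):
        pred = X @ W                           # the model's answer
        grad = X.T @ (pred - y) / len(X)       # how wrong, and in which direction
        W -= lr * grad                         # weight change computed for it, not by it

    # After training the weights are frozen; using the model changes nothing.
    print(np.round(W.ravel(), 3))              # ~[ 1.5 -0.7 ]
    ```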

    The self is that which acts. Did you know LLMs aren’t capable of being aware they took any action? Are you aware chess engines can’t do that either? There is no comparison mechanism between what was, what is, and what made that change. They cannot be self-aware, in the same way a program hardcoded to kill every process other than itself is unaware. They literally lack any direct sense of their own actions. Once again, you not only need to be able to sense that information, the program then needs a sense which compares that sensation to other sensations and learns the differences, changing the way it responds to those stimuli. You need learning.
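    A minimal sketch of that missing mechanism (again just an illustration, with made-up names): the agent senses which action it just took, predicts what that action should change, compares the prediction with what actually happened, and updates from the mismatch. Stock LLMs and chess engines have nothing playing this role at inference time.

    ```python
    import numpy as np

    class SelfSensingAgent:
        """Toy sketch: the agent senses its OWN action, predicts the action's effect,
        compares prediction with the observed outcome, and learns from the mismatch."""

        def __init__(self, n_actions, n_obs, lr=0.05):
            self.effect = np.zeros((n_actions, n_obs))  # learned "what my actions do"
            self.lr = lr

        def act_and_learn(self, action_id, observe):
            action = np.zeros(self.effect.shape[0])
            action[action_id] = 1.0                     # the sense of "I did this"
            predicted = action @ self.effect            # what I expect to change
            outcome = observe(action_id)                # what actually changed
            error = outcome - predicted                 # what was vs. what is
            self.effect += self.lr * np.outer(action, error)
            return error                                # surprise drives adaptation
    ```

    The `observe` callback here just stands in for whatever feedback channel the system has about the world it acted on.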

    I don’t reject the idea of machines being conscious; in fact, I’m literally trying to make a conscious machine just to see if I can (which, yeah, to most people sounds insane). But I do not think we agree on much else, because learning is absolutely essential for anything to be capable of a conscious action.

    • m_‮f@discuss.online · 1 day ago

      I think pointing out the circular definition is important, because even in this comment, you’ve said “To be aware of the difference between self means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, …”. Sure, but that doesn’t provide a useful framework IMO.

      For qualia, I’m not concerned about the complexity of the human brain, or different neural structures. It might be hard with our current knowledge and technology, but that’s just a skill issue. I think it’s likely that at some point, humankind will be able to compare two brains with different neural structures, or even wildly different substrates like human brain vs animal, alien, AI, whatever. We’ll have a coherent way of comparing representations across those and deciding if they’re equivalent, and that’s good enough for me.

      I think we agree on LLMs and chess engines: they don’t learn as you use them. I’ve worked with both under the hood, and my point is exactly that: they’re a good demonstration that awareness (i.e. to me, having a world model) and learning are related but different.

      Anyways, I’m interested in hearing more about your project if it’s publicly available somewhere.
