• General_Effort@lemmy.world
    2 days ago

    You did it wrong; you provided the “answer” to the logic proposition, and got a parroted proof for it.

    Well, that’s the same situation I was in and just what I did. For that matter, Peano was also in that situation.

    This is fixed now, and had to do with tokenizing info incorrectly.

    Not quite. It’s a fundamental consequence of tokenization. The LLM does not “see” the individual letters. Adding spaces between the letters, for example, forces a different tokenization and yields a correct count (I tried this back then). It’s interesting that the LLM counted 2 "r"s, as that is phonetically correct. One wonders how it picks up on these things. It’s not really clear why it should be able to count at all.
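    A toy sketch of the mechanism (the subword splits below are invented, not any real model’s vocabulary): a tokenizer that knows a chunk like “straw” emits it whole, hiding the letters inside it, while spaced-out input falls through to single characters.

```python
# Toy illustration: an LLM sees token IDs, not letters.
# The vocabulary below is made up for this example.
def toy_tokenize(text):
    vocab = ["straw", "berry", "s", "t", "r", "a", "w", "b", "e", "y", " "]
    tokens = []
    while text:
        for piece in vocab:
            if text.startswith(piece):
                tokens.append(piece)
                text = text[len(piece):]
                break
        else:
            # Unknown character: fall back to emitting it alone.
            tokens.append(text[0])
            text = text[1:]
    return tokens

print(toy_tokenize("strawberry"))           # ['straw', 'berry'] -- letters hidden
print(toy_tokenize("s t r a w b e r r y"))  # one token per character -- letters visible
```

From the first tokenization the model cannot count letters at all; from the second, every “r” is its own token.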

    It’s possible to make an LLM work on individual letters, but that is computationally inefficient. A few months ago, researchers at Meta proposed a possible solution called the Byte Latent Transformer (BLT). We’ll see if anything comes of it.

    In any case, I do not see the relation to consciousness. Certainly there are enough people who are not able to spell or count and one would not say that they lack consciousness, I assume.

    Yes, but if you instruct a parrot or LLM to say yes when asked if it is separate from its surroundings, it doesn’t mean it is just because it says so.

    That’s true. We need to observe the LLM in its natural habitat. What an LLM typically does is continue a text. (It could also be used to work backwards or fill in the middle, but never mind.) A base model is no good as a chatbot; it has to be instruct-tuned. In operation, the tuned model is given a chat log containing a system prompt, text from the user, and text that it has previously generated. It then adds a reply and terminates the output. This text, the chat log, could be said to be the sum of its “sensory perceptions” as well as its “short-term memory”. Within this, it is able to distinguish its own replies, those of the user, and possibly other texts.
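    As a sketch, such a chat log is often just a list of role-tagged messages flattened into one text with special separator tokens. The template below is invented for illustration; every real model defines its own special tokens and layout.

```python
# Hypothetical chat template -- the "<|...|>" markers are made up,
# not any real model's special tokens.
def render_chat(messages):
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|end|>")
    parts.append("<|assistant|>\n")  # cue the model to continue as itself
    return "\n".join(parts)

log = [
    {"role": "system",    "content": "You are a helpful assistant."},
    {"role": "user",      "content": "How many r's are in strawberry?"},
    {"role": "assistant", "content": "Two."},
    {"role": "user",      "content": "Count again, letter by letter."},
]
prompt = render_chat(log)
print(prompt)
```

The model only ever sees this one flat string; the role markers are what let it tell its own past replies apart from the user’s text.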

    My example shows this level of understanding clearly isn’t there.

    Can you lay out what abilities are connected to consciousness? What tasks are diagnostic of consciousness? Could we use an IQ test and diagnose people as having or lacking consciousness?

    I was a bit confused by that question, because consciousness is not a construct, the brain is, of which consciousness is an emerging property.

    The brain is a physical object. Consciousness is both an emergent property and a construct; like, say, temperature or IQ.

    You are saying that there are different levels of consciousness. So, it must be something that is measurable and quantifiable. I assume a consciousness test would be similar to IQ test in that it would contain selected “puzzles”.

    We have to figure out how consciousness is different from IQ. What puzzles are diagnostic of consciousness and not of academic ability?

    • Buffalox@lemmy.world
      2 days ago

      Can you lay out what abilities are connected to consciousness?

      I probably can’t say much new, but it’s a combination of memory, learning, abstract thinking, and self awareness.
      I can also say that the consciousness resides in a form of virtual reality in the brain, allowing us to manipulate reality in our minds to predict outcomes of our actions.
      At a more basic level it is memory, pattern recognition, prediction and manipulation.
      The fact that our consciousness is a virtual construct also acts as a shim, distancing the mind from direct dependency on the underlying physical layer, although it still depends on it to work, of course.
      So to make an artificial consciousness, you don’t need to create a brain, you can do it by recreating the functionality of the abstraction layer on other forms of hardware too. Which means a conscious AI is indeed possible.
      It is also this feature that allows us to have free will. Although that depends on definition, I believe we do have free will in an absolutely meaningful sense, something that took me decades to realize was actually possible.

      I don’t know if this makes any sense to you? But maybe you find it interesting?

      You are saying that there are different levels of consciousness. So, it must be something that is measurable and quantifiable.

      Yes, there are different levels, actually in 2 ways. There are different levels between the consciousness of a dolphin and a human. A dolphin is also self-aware and conscious, but it does not have the same level of consciousness we do, simply because it doesn’t possess the same level of intelligence.

      But even within the human brain there are different levels of consciousness. The term “subconscious” is common, and with good reason. Some things are hard to learn, and we need to concentrate hard and practice hard to learn them. But with enough practice we build routine, and at some point they become so routine that we can do them without thinking about them, and instead think about something else.
      At that point you have trained a subconscious routine that is able to work almost without guidance from your main consciousness. There are also functions that are “automatic”, like when you listen to sounds: you can distinguish many separate sounds without problem. We can somewhat mimic that in software today, separating different sounds. It’s extremely complex to do, and the mathematics involved is more than most can handle, yet in our hearing we do it effortlessly. So there is obviously an intelligence at work in the brain that isn’t directly tied to our consciousness.
      IDK if I’m explaining myself well here, but the subconscious is a very significant part of our consciousness.

      So, it must be something that is measurable and quantifiable.

      That is absolutely not a certainty. At least I don’t think we can measure it at this point in time, but in the future there may be better knowledge and better tools. As it is, we have been hampered by wrongful thinking in these areas for centuries, quite the opposite of physics and mathematics, which have helped computing every step of the way.
      The study of the mind has been hampered by prejudice: thinking that humans are not animals, thinking free will comes from God, with nonsense terms like “id”, and thinking we have a soul that is separate from the body. Psychology basically started out as pseudoscience, and despite that it was a huge step forward!

      I’ll stop here; these issues are very complex, and some of the above took me decades to figure out. There is much dogma and even superstition surrounding them, so it used to be rare to find someone to read or listen to who made sense based on reality. It seems to me that only in the past 15 years has the science of the mind begun to catch up to reality.

      • General_Effort@lemmy.world
        4 hours ago

        Thank you for the long reply. I took some time to digest it. I believe I know what you mean.

        I can also say that the consciousness resides in a form of virtual reality in the brain, allowing us to manipulate reality in our minds to predict outcomes of our actions.

        We imagine what happens. Physicists use their imagination to understand physical systems. Einstein was famous for his thought experiments, such as imagining riding on a beam of light.

        We also use our physical intuition for unrelated things. In math or engineering, everything is a point in some space; a data point. An RGB color is a point in 3D color space. An image can be a single point in some high dimensional space.
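        A minimal sketch of the “everything is a point” view (the 8x8 image size is an arbitrary choice):

```python
import math

# An RGB color is a point in 3-dimensional space:
color = (255, 128, 0)  # three coordinates

# A tiny 8x8 grayscale image is a point in 64-dimensional space
# once its rows are flattened into one vector:
image = [[(x + y) % 256 for x in range(8)] for y in range(8)]
point = [px for row in image for px in row]
print(len(point))  # 64 coordinates

# "Similarity" between two images then becomes ordinary Euclidean
# distance between two points in that 64-D space:
def distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

shifted = [px + 1 for px in point]
print(distance(point, shifted))  # sqrt(64) = 8.0
```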

        All our ancestors back to the beginning of life had to navigate an environment. Much of the evolution of our nervous system was occupied with navigating spaces and predicting physics. (This is why I believe language to be much easier than self-driving cars. See Moravec’s paradox.)

        One problem is, when I think abstract thoughts and concentrate, I tend to be much less aware of myself. I can’t spare the “CPU cycles”, so to say. I don’t think self-awareness is a necessary component of this “virtual environment”.

        There are people who are bad at visualizing; a condition known as aphantasia. There must be, at least, quite some diversity in the nature of this virtual environment.

        Some ideas about brain architecture seem to be implied. It should be possible to test some of these ideas by reference to neurological experiments or case studies, such as the work on split-brain patients. Perhaps the phenomenon of blindsight is directly relevant.

        I am reminded of the concept of latent representations in AI. Lately, as reasoning models have become the rage, there are attempts to let the reasoning happen in latent space.
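        A toy sketch of that idea, with a hand-made 4-dimensional state and a fixed linear map standing in for a real transformer: instead of decoding a token and re-embedding it, the last hidden state is fed straight back in as the next input.

```python
# Toy sketch of "reasoning in latent space". The 4-dim state and the
# fixed weights are stand-ins, not a real model.
def toy_step(state):
    # Pretend transformer step: a linear map followed by a ReLU.
    w = [[0.5, 0.1, 0.0, 0.0],
         [0.0, 0.5, 0.1, 0.0],
         [0.0, 0.0, 0.5, 0.1],
         [0.1, 0.0, 0.0, 0.5]]
    return [max(0.0, sum(w[i][j] * state[j] for j in range(4)))
            for i in range(4)]

state = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):           # three "continuous thoughts"
    state = toy_step(state)  # hidden state loops back as the next input
print(state)
```

No token is ever sampled during the loop; the intermediate “thoughts” stay as continuous vectors rather than words.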

        • Buffalox@lemmy.world
          3 hours ago

          That’s all very spot on. 👍 😀
          And our imagination isn’t limited by real physics; we can imagine alternatives, hence we have fantasy stories. This is, I think, a very good example of how our minds do not depend entirely on reality.

          I can’t spare the “CPU cycles”, so to say.

          Absolutely there are limitations, but when you have solved the abstract puzzle and learned it by heart, then you can! But we can only really focus on one thing at a time. I actually tried, way back in the 80s, to train myself to focus on 2 things at a time. But pressuring myself too hard was such a disturbing experience that I stopped. I think it may be possible, but there is also a risk of going insane.

          I don’t think self-awareness is a necessary component of this “virtual environment”.

          Well, that’s a tough one, I admit. Trying to understand the limits, I have also observed our cat, to determine how it thinks.
          It seems to me that cats are not capable of manipulating their environment mentally. For instance, if the cat is hunting a mouse that hides behind a small obstacle, the cat cannot figure out to move the obstacle to get at the mouse. This is also an example of different degrees of awareness: this thing we take for granted, most animals aren’t capable of. So I think this virtual environment is necessary, at least for the level of consciousness we have. But I agree that it may not be a necessity for more basic self-awareness, because I think our cat is self-aware. He can clearly distinguish between me and my wife; this is obvious because his behavior is very different towards us. And if he can distinguish between us, it seems logical that he can also tell that he is different from us. AFAIK that’s a pretty big part of what self-awareness is.
          But I also think that we don’t have to be aware of our consciousness all the time, only when it’s relevant.

          at least, quite some diversity in the nature of this virtual environment.

          Absolutely yes, HUGE differences. I’m personally a bit of a fan of Piet Hein, a multi-talent who was also a theoretical physicist. He could hold complex geometrical shapes in his head and see if they fit together in a way no one else at the university could, and he played mental ping-pong with Niels Bohr. And just as he was very good at it, there are people who are similarly bad at it. I find it hard to understand how their thought process works, because curiously this is also a thing among smart people, AFAIK.
          I admit I’m not really aware of any results from the study of it, but it is an interesting subject.

          I am reminded of the concept of latent representations in AI.

          From your link:

          we argue that language space may not always be optimal for reasoning.

          I absolutely agree. It’s like discerning between the abstract and the concrete, and if you can visualize it as a person, you can probably also understand it. So I wonder if people with aphantasia think in a way that is similar to abstract thinking for everything? Maybe each way has its own strengths?

          We utilize the last hidden state of the LLM as a representation of the reasoning state (termed “continuous thought”)

          So it’s not like a virtual reality, but wow that sounds awesome. 😎
          It sure is impressive how fast things are developing now.