• m_‮f@discuss.online
    1 day ago

    I think pointing out the circular definition is important, because even in this comment you’ve said “To be aware of the difference between self [and non-self] means to be able to [be aware of] stimuli originating from the self, [be aware of] stimuli not from the self, …”. Sure, but that doesn’t provide a useful framework, IMO.

    For qualia, I’m not concerned about the complexity of the human brain or differences in neural structure. It might be hard with our current knowledge and technology, but that’s just a skill issue. I think it’s likely that at some point humankind will be able to compare two brains with different neural structures, or even wildly different substrates: human brain vs. animal, alien, AI, whatever. We’ll have a coherent way of comparing representations across those and deciding whether they’re equivalent, and that’s good enough for me.
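    For what it’s worth, a limited version of this already exists: centered kernel alignment (CKA) scores the similarity of two sets of activations for the same stimuli, regardless of dimensionality or “wiring”. A toy sketch (the data and the two “brains” here are entirely made up for illustration):

    ```python
    import numpy as np

    def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
        """Linear CKA similarity in [0, 1] between reps X (n x d1) and Y (n x d2)."""
        X = X - X.mean(axis=0)  # center each feature
        Y = Y - Y.mean(axis=0)
        num = np.linalg.norm(Y.T @ X, "fro") ** 2
        den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
        return float(num / den)

    rng = np.random.default_rng(0)
    stimuli = rng.normal(size=(100, 16))           # same 100 stimuli shown to both "brains"
    brain_a = stimuli @ rng.normal(size=(16, 32))  # one 32-dim representation of them
    rotation, _ = np.linalg.qr(rng.normal(size=(32, 32)))
    brain_b = brain_a @ rotation                   # same information, different "wiring"
    unrelated = rng.normal(size=(100, 8))          # a rep carrying none of the same info

    assert linear_cka(brain_a, brain_b) > 0.99     # judged equivalent
    assert linear_cka(brain_a, unrelated) < 0.5    # judged different
    ```

    Whether metrics like this capture anything about qualia is exactly the open question, but they at least make “compare representations across substrates” a concrete operation rather than a hand-wave.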

    I think we agree on LLMs and chess engines: they don’t learn as you use them. I’ve worked with both under the hood, and my point is exactly that they’re a good demonstration that awareness (i.e., to me, having a world model) and learning are related but different.
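    That split is easy to show in a toy model (purely illustrative, not any real engine’s internals): at inference time the transient context state updates with every input, while the learned weights never change.

    ```python
    class FrozenModel:
        """Toy stand-in for an LLM/chess engine at inference time."""

        def __init__(self):
            self.weights = (0.5, -1.25)  # learned offline; never touched during use
            self.context = []            # transient state, rebuilt per conversation/game

        def step(self, token: float) -> float:
            self.context.append(token)       # tracks the ongoing input ("world model")...
            w, b = self.weights
            return w * sum(self.context) + b # ...but no weight update ever happens

    m = FrozenModel()
    before = m.weights
    outputs = [m.step(x) for x in [1.0, 2.0, 3.0]]
    assert m.weights == before           # using the model taught it nothing
    assert m.context == [1.0, 2.0, 3.0]  # yet it tracked the session so far
    ```

    The `context` list is the part that “knows what’s going on right now”; the `weights` tuple is the part that would have to change for the system to learn. They’re separate mechanisms.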

    Anyway, I’m interested in hearing more about your project if it’s publicly available somewhere.

    • AnarchoEngineer@lemmy.dbzer0.com
      edit-2 · 4 hours ago

      Edit: removed because I accidentally commented the exact same thing twice since the post button didn’t seem to work the first time lol