• minkymunkey_7_7@lemmy.world · 21 hours ago

      We all end up looking at pictures of cats in boxes on the internet: we start out trying to understand something, and then, oh wow, this cat is funny.

    • Naz@sh.itjust.works · 21 hours ago

      I like watching it in action. I don’t know what the hell is going on, but it gives me a strange kind of peace, and if you stare at it long enough, you trick yourself into thinking that it makes some kind of sense.

    • ranzispa@mander.xyz · 19 hours ago

      I can tell a piece of software to do the maths for me. Sometimes the results appear to agree with reality.

      People complain about LLMs hallucinating, but they have no idea how many assumptions, and how much plain “everybody does it this way, I guess it works”, there is in scientific research.

      • ptu@sopuli.xyz · 18 hours ago

        It’s called the heuristic method, and those doing it know the limitations. Whereas LLMs will just confidently put out garbage and claim it’s true.

        • ranzispa@mander.xyz · 18 hours ago

          Scientific calculations - and other approaches as well - put out garbage all the time; that is the main point of what I said above.

          Some limitations are known, just like it is known that LLMs have the limitation of hallucinating.

          • ptu@sopuli.xyz · 15 hours ago

            My critique wasn’t about the outcome of the results, but about how they were achieved. LLMs hallucinating makes computers commit ”human errors”, which makes them less deterministic; determinism is the key reason I prefer doing some things on a computer.

      • cub Gucci@lemmy.today · 18 hours ago

        That’s a different domain, though. LLM hallucinations may lead to catastrophe, while assuming infinite mass for an electron in the absence of an electromagnetic field is neat.

        • ranzispa@mander.xyz · 18 hours ago

          Calculations will happily tell you that an acutely toxic drug is the best way to cure cancer.

          The reason that does not lead to catastrophe is that there are many checks and safety nets in place so that no result is ever blindly trusted.

          The exact same approach can be applied to an LLM.