• ranzispa@mander.xyz · 19 hours ago

    I can tell a piece of software to do the maths for me. Sometimes the results appear to line up with reality.

    People complain about LLMs hallucinating, but they have no idea how many assumptions and plain “everybody does it this way, I guess it works” choices there are in scientific research.
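
    A toy example of the kind of silent assumption I mean (standard numerical-analysis material, nothing specific to any one field): the textbook quadratic formula quietly assumes you never subtract two nearly equal floats, and it breaks when you do:

    ```python
    import math

    def roots_naive(a, b, c):
        # Textbook formula: loses precision when b*b >> 4*a*c, because
        # -b + sqrt(b*b - 4*a*c) subtracts two nearly equal numbers.
        d = math.sqrt(b * b - 4 * a * c)
        return ((-b + d) / (2 * a), (-b - d) / (2 * a))

    def roots_stable(a, b, c):
        # Rearranged to avoid the cancellation: compute the large-magnitude
        # root first, then recover the small one from the product c/a = x1*x2.
        d = math.sqrt(b * b - 4 * a * c)
        q = -0.5 * (b + math.copysign(d, b))
        return (q / a, c / q)

    # x^2 + 1e8 x + 1 = 0 has a small root near -1e-8.
    print(roots_naive(1.0, 1e8, 1.0))   # small root off by ~25%
    print(roots_stable(1.0, 1e8, 1.0))  # small root accurate
    ```

    Both functions are “correct” on paper; only one of them survives float64 arithmetic. That assumption is baked into a lot of working code.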

    • ptu@sopuli.xyz · 18 hours ago

      It’s called the heuristic method, and the people using it know its limitations. LLMs, on the other hand, will just confidently put out garbage and claim it’s true.

      • ranzispa@mander.xyz · 18 hours ago

        Scientific calculations - and other approaches as well - put out garbage all the time; that is the main point of what I said above.

        Some of those limitations are known, just as it is known that LLMs hallucinate.

        • ptu@sopuli.xyz · 15 hours ago

          I didn’t read your critique as being about the outcome of the results, but about how they were achieved. LLMs hallucinating make computers commit “human errors”, which makes them less deterministic, and determinism is the key reason I prefer doing some things on a computer. A contrived sketch of the difference is below.
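
          (Simulated token sampling, not any particular model’s API: a plain function is bit-for-bit repeatable, while sampled output is not.)

          ```python
          import math
          import random

          def deterministic(x):
              return x * x  # same input, same output, every run

          def sampled_token(logits, temperature=1.0):
              # Softmax sampling as LLM decoders commonly do it: with
              # temperature > 0, identical input can yield different tokens.
              weights = [math.exp(l / temperature) for l in logits]
              return random.choices(range(len(logits)), weights=weights)[0]

          print(deterministic(3))                # always 9
          print(sampled_token([2.0, 1.9, 0.5]))  # varies run to run
          ```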

    • cub Gucci@lemmy.today · 18 hours ago

      The domain is different, though. LLM hallucinations may lead to catastrophe, while assuming an infinite mass for the electron in the absence of an electromagnetic field is harmless.

      • ranzispa@mander.xyz · 18 hours ago

        Calculations will happily tell you that an acutely toxic drug is the best way to cure cancer.

        The reason that doesn’t lead to catastrophe is that there are many checks and safety nets in place so that no single result is blindly trusted.

        The exact same approach can be applied to an LLM.
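
        A minimal sketch of what that could look like (ask_llm is a hypothetical stand-in for a real model call, and the tolerance is arbitrary): the model’s answer is treated as untrusted until an independent check passes, just like a convergence or conservation check on a numerical result:

        ```python
        def ask_llm(prompt: str) -> str:
            # Hypothetical stand-in for whatever model call you actually use.
            raise NotImplementedError("plug in a real model call here")

        def llm_sqrt(x: float) -> float:
            answer = float(ask_llm(f"Square root of {x}? Reply with only a number."))
            # Safety net: verify the claim independently instead of trusting it.
            if abs(answer * answer - x) > 1e-6 * max(1.0, abs(x)):
                raise ValueError(f"model answer {answer} failed verification")
            return answer
        ```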