• Treczoks@lemmy.world
    1 hour ago

    He combines LLMs with numbers and wonders why this does not work? Under which rock does he live?

    • festus@lemmy.ca
      55 minutes ago

      I think you missed the point of his post. His issue is that the numeric operations the phone executes to run the LLM are producing garbage. Arguably this could break all kinds of neural networks, such as voice transcription. He’s not complaining that the LLMs are themselves unable to properly perform math.
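
      A toy sketch of what “producing garbage” means at the numeric level, with NumPy and a float16 matmul standing in for the accelerated path (the real failure would be somewhere in Apple’s ANE/Core ML stack, which this doesn’t touch):

      ```python
      import numpy as np

      # Run the same matrix multiply on a "fast" low-precision path and on a
      # trusted float64 path, then compare. A healthy accelerator stays within
      # a small tolerance; NaNs or huge errors point at the hardware/driver,
      # not at the model weights or the prompt.
      rng = np.random.default_rng(0)
      a = rng.standard_normal((256, 256)).astype(np.float32)
      b = rng.standard_normal((256, 256)).astype(np.float32)

      fast = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float64)  # "NPU-like" path
      ref = a.astype(np.float64) @ b.astype(np.float64)                        # trusted reference

      rel_err = np.abs(fast - ref).max() / np.abs(ref).max()
      print(f"max relative error: {rel_err:.2e}")
      assert np.isfinite(fast).all() and rel_err < 1e-2, "numeric path is producing garbage"
      ```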

      • Morphit @feddit.uk
        31 minutes ago

        He also had it work on a Mac, an iPhone 15 and an iPhone 17. Only his iPhone 16 got the internal LLM state wrong. It’d be interesting to know how a failure like that happens. Presumably most iPhone 16s have a working NPU. Apple would surely want to get to the bottom of this, but I doubt they would be open about their findings. Maybe they do know, but the solution is ‘buy new iPhone’.
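
        One hypothetical way to localize a failure like that: dump each layer’s activations for the same fixed prompt on a known-good device and on the suspect iPhone 16, then find the first layer where they diverge. The file names and layer count here are made up for illustration:

        ```python
        import numpy as np

        def first_divergent_layer(good_dir, bad_dir, n_layers, tol=1e-2):
            """Return the index of the first layer whose activations disagree, else None."""
            for i in range(n_layers):
                good = np.load(f"{good_dir}/layer_{i:02d}.npy")  # dump from a known-good device
                bad = np.load(f"{bad_dir}/layer_{i:02d}.npy")    # dump from the suspect iPhone 16
                err = np.abs(good - bad).max() / (np.abs(good).max() + 1e-12)
                if not np.isfinite(bad).all() or err > tol:
                    return i  # everything before this matched, so the fault is localized here
            return None

        # e.g. first_divergent_layer("iphone15_dump", "iphone16_dump", n_layers=32)
        ```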

    • partial_accumen@lemmy.world
      30 minutes ago

      Under which rock does he live?

      Under the rock where reading comprehension exists, apparently.

      When he was prompting the LLMs with “What is 2+2?”, the accuracy of the answer was immaterial. At that step he was comparing two systems and simply needed a static question to give both of them, so he could compare the internal processes and determine why they arrived at different outputs (or, for one of them, what appeared to be a race condition/infinite loop) when the results should have been identical, irrespective of how right or wrong the answer to the prompt was. The answer from the LLM could have been “ham sandwich” and it still would have served his purposes.
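
      In code terms, the comparison looks something like this. Here run_model is a hypothetical hook into each backend, and the pass/fail criterion never looks at whether the answer is actually 4:

      ```python
      import numpy as np

      def backends_agree(run_model, prompt="What is 2+2?", tol=1e-3):
          """Diff the final logits from two pipelines given the same fixed prompt."""
          logits_a = run_model(prompt, backend="reference")  # e.g. a known-good device
          logits_b = run_model(prompt, backend="suspect")    # e.g. the misbehaving NPU path
          diff = np.abs(np.asarray(logits_a) - np.asarray(logits_b)).max()
          print(f"max logit difference: {diff:.2e}")
          return diff < tol  # False means the pipelines disagree, answer quality aside
      ```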