• panda_abyss@lemmy.ca · 5 hours ago

    Yeah, but those all rely on heaps of proprietary heuristics.

    The beauty of LLMs, and one of their most useful applications, is taking unstructured natural-language content and converting it into structured, machine-readable content.
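
    For instance, here's a minimal sketch of that extraction pattern. The `openai` client usage and the model name are assumptions for illustration, not anything from this thread; swap in whatever LLM interface you actually use.

    ```python
    # Sketch: unstructured text -> structured JSON via an LLM.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_invoice(text: str) -> dict:
        prompt = (
            "Extract the vendor, total, and due date from the text below. "
            'Respond with JSON only, e.g. '
            '{"vendor": "...", "total": 0.0, "due_date": "YYYY-MM-DD"}.\n\n'
            + text
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",                      # assumed model name
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},  # constrain output to valid JSON
        )
        return json.loads(resp.choices[0].message.content)

    print(extract_invoice("Pay Acme Corp $1,203.50 by March 3rd, 2025."))
    ```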

    The core transformer architecture was originally designed for translation, and this task is essentially a subset of translation.

    It's close to an optimal use case for LLMs.

    • MolochHorridus@lemmy.ml · 5 hours ago

      Quite obviously not the optimal use case. “The tensor outputs on the 16 show numerical values an order of magnitude wrong.”

      • JPAKx4@piefed.blahaj.zone · 1 hour ago

        That’s the hardware issue he was talking about; it has no bearing on how effective the LLM was. It sounded like it was mostly a project he was doing for fun rather than out of necessity.