• phutatorius@lemmy.zip · 17 points · 12 hours ago

    Yeah, it sounds more like ML. That’s a good thing; for one thing, it’s reproducible.

    LLMs are intrinsically unfit for use in any situation where human life or health is at stake.

    • EvilBit@lemmy.world · 6 points · 10 hours ago

      Exactly. People keep shoehorning Large Language Models into non-linguistic domains, and that’s dangerous. Human language, and thus any training set built from it, is inherently subjective and imperfect. Healthcare is very fault-intolerant.