• mushroommunk@lemmy.today · 6 hours ago

    The point isn’t that some models are better than others. The point is that this is yet another example that LLMs are not thinking machines, that you can’t trust anything they produce, and that people are burning the world to run a glorified autocomplete.

    • Demdaru@lemmy.world · 2 hours ago

      Counterpoint: people are not thinking machines either, you can’t trust anything from them, and people are burning the world to run glorified slave labor.

      Truly we are the AI of the natural world xD

      • mushroommunk@lemmy.today · 6 hours ago

        Sure, fine, some get this right. But what else are they getting wrong? Something more serious and harder to spot?

        • Hackworth@piefed.ca · 5 hours ago

          I agree that we should never treat these things as oracles. But how often they’re right/wrong does matter.

          • Gunrigger@lemmy.world · 2 hours ago

            > how often they’re right/wrong does matter.

            That’s the wildest take I’ve heard on the question-answering machine.

            • Grimy@lemmy.world · edited · 2 hours ago

              Most people get their info from forums and blog posts. Unless you limit yourself to nothing but peer-reviewed papers, you probably do some kind of calculation on the legitimacy of whatever source you’re perusing, and you verify it further if it’s something important.