• hoshikarakitaridia@lemmy.world · 19 hours ago

      “Figure it out” is not technically wrong, but it’s worse than “it depends”, so it’s definitely very unhelpful, which is kind of the opposite of what an LLM should be.

      • Switorik@lemmy.zip · 18 hours ago

        I’m only playing devil’s advocate. I’m not a fan of LLMs myself.

        You need to know how to use the tool to get the correct output. In this case, it’s giving you a literal answer. Craft your question so that it gives you what you’re actually looking for. Look up “prompt engineer” for a more thorough answer. It’s how we thought LLMs were going to work to begin with.

        • hoshikarakitaridia@lemmy.world · 11 hours ago

          Disagree. The short-term solution is for you to change your prompt, but it is definitely a shortcoming of the AI when the answer is strictly useless.

          It’s like crime: it should be safe everywhere, anytime, because of police and laws, but since it’s not, you can’t go everywhere anytime. That’s not on you, but you still have to deal with it.

        • wischi@programming.dev · 14 hours ago

          The phrase “prompt engineer” is so funny, though. It has literally nothing to do with engineering at all. Like having a PhD in “Google Search” 🤣