I’m only playing devil’s advocate. I’m not a fan of LLMs myself.
You need to know how to use the tool to get the correct output. In this case, it’s giving you a literal answer. Craft your question so that it gives you what you’re actually looking for. Look up “prompt engineer” for a more thorough answer. It’s how we thought LLMs were going to work to begin with.
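To make that concrete, here’s a minimal sketch of what “crafting the question” means in practice. The `ask()` helper is a hypothetical placeholder, not any real API; the point is just the difference between the two prompts:

```python
# Same underlying question, phrased two ways.
# ask() is a hypothetical stand-in for whatever LLM call you actually use.

def ask(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

# Literal phrasing: invites a literal (often useless) yes/no answer.
literal_prompt = "Does this function have a bug?"

# Intent-framed phrasing: spells out what a useful answer looks like.
reworded_prompt = (
    "Review this function for bugs. If you find one, quote the offending "
    "line, explain why it's wrong, and show a corrected version."
)
```

Same question either way; the second version just tells the model what form a useful answer takes.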
Disagree. The short-term solution is for you to change your prompt, but it is definitely a shortcoming of the AI when the answer is strictly useless.
It’s like crime: in theory, police and laws should make it safe to go anywhere at any time, but since they don’t, you can’t. That’s not on you, but you still have to deal with it.
Though the phrase “prompt engineer” is so funny. Has literally nothing to do with engineering at all. Like having a PhD in “Google Search” 🤣
I guess it’s like social engineering, but for LLMs