I asked it what it thought of the US government’s pivot on trans rights, and it similarly did not believe the last 7 months could have happened.
I had to get it to read the Wikipedia article on the year 2025 and it actually decided to stop reading.
ETA: to clarify, it was using a fetch tool to read the page in chunks, and it decided to stop requesting further chunks.
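For context on how that works: a fetch tool like this typically returns the page in fixed-size chunks and leaves it to the model to keep requesting more, so nothing forces it to read to the end. Here is a minimal sketch in Python, assuming a hypothetical fetch_chunk tool with an illustrative CHUNK_SIZE and offset scheme (not any particular framework’s API):

    import urllib.request

    CHUNK_SIZE = 4000  # characters per tool call; illustrative limit

    _cache: dict[str, str] = {}

    def fetch_chunk(url: str, offset: int = 0) -> dict:
        """Return one chunk of the page and the offset of the next chunk.

        The model decides whether to call again with next_offset; it is
        free to simply stop asking, which is the behavior described above.
        """
        if url not in _cache:
            with urllib.request.urlopen(url) as resp:
                _cache[url] = resp.read().decode("utf-8", errors="replace")
        text = _cache[url]
        end = offset + CHUNK_SIZE
        return {
            "chunk": text[offset:end],
            "next_offset": end if end < len(text) else None,  # None = no more chunks
        }

So “stopped reading” just means it stopped issuing fetch_chunk calls partway through the article.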
It’s like getting your enemy to tell the truth. You know, we’re not in some movie or TV series; in the future it will lie more and more convincingly, and also refuse to answer uncomfortable questions.
I try not to get facts from LLMs, ever.
I do use RAG and tools to push content into them for summarization/knowledge extraction.
But even then it’s important to have an idea of your model’s biases. If you train a model that X isn’t true and then ask it to find info on the topic, it’s going to return crap results.
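A minimal sketch of that push-content workflow, assuming a toy keyword retriever and the OpenAI chat completions API; retrieve, the model name, and the prompts are illustrative stand-ins, not the commenter’s actual stack:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def retrieve(query: str, corpus: list[str]) -> list[str]:
        # Toy keyword retriever; a real setup would use a vector store
        # or web search. Purely a stand-in.
        words = query.lower().split()
        return [doc for doc in corpus if any(w in doc.lower() for w in words)]

    def summarize(query: str, corpus: list[str]) -> str:
        context = "\n\n".join(retrieve(query, corpus))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Answer using ONLY the provided context. "
                            "If the context does not cover it, say so."},
                {"role": "user", "content": f"Context:\n{context}\n\nTask: {query}"},
            ],
        )
        return resp.choices[0].message.content

Even with grounding like this, a model trained to treat X as false can discount or ignore retrieved evidence for X, which is the crap-results failure described above.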