The point isn’t that some models are better than others. The point is that this is yet another example that LLMs are not thinking machines, that you can’t trust anything from them, and that people are burning the world to run a glorified autocomplete.
Counterpoint: People are not thinking machines and you can’t trust anything from them and people are burning the world to run glorified slave labor.
Truly, we are the AI of the natural world xD
My point was that some models are better than others.
Sure, fine, some get this right. But what else are they getting wrong? Something more serious and harder to spot?
I agree that we should never treat these things as oracles. But how often they’re right/wrong does matter.
That’s the wildest take I’ve heard on the question-answering machine.
Most people get their info from forums and blog posts. Unless you limit yourself to nothing but peer-reviewed papers, you probably make some kind of judgment about the legitimacy of whatever source you’re perusing, and verify it further if it’s something important.