The point is, the LLM is not ‘lying’ to you. It’s showing you information. It doesn’t ‘know’ whether the information is true or not. It also doesn’t ‘care’. Because it is a statistical model, it is incapable of those things. And if you scroll back to my initial point, I said “technically, it’s not lying, because lying requires intent to deceive, and LLMs don’t have intent”.

What’s the point of making this semantic distinction, though?

Because 1) it’s true, and the article is a bit misleading as to who is actually doing the lying, and 2) it’s important to remember that LLMs are not sentient, and to push back against the tide of language that subtly suggests they are.

OK, so I don’t blame the GPUs crunching out the LLM lies, or the HTML on the page; I blame Google, the company that programmed them.