I was watching this video and at the 8:00 mark, they say that popcorn doesn't have gluten. To prove the point, they cut to a screenshot of the first Google result for "does popcorn have gluten," which is the AI answer. I've seen the same thing in other videos and reels, and it feels forced. And to me, it doesn't prove their claim correct, because it's just the AI answer.
I don’t know, I’ve just noticed this more recently and wanted to make sure I wasn’t going crazy.
People are really stupid; almost no one has a working knowledge of LLMs unless they're actively building one.
And with iterative training, we're getting to the point where soon no human will understand how the black box handles context.
Considering what we have access to now, I have no doubt there are already private models whose tokenization process even the devs have no insight into.
You don't need to understand how an LLM works at a deep level to know that it doesn't in any way check whether what it's outputting corresponds to truth. It doesn't check the meaning of the output at all; it just emits whichever token is statistically likely to come next.
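To make that concrete, here's a toy sketch in plain Python (the tokens and scores are invented, nothing comes from a real model) of the only operation a decoder performs at each step: turn scores into probabilities and sample a token. Notice there's no point where the candidate output gets compared against facts.

```python
import math
import random

# Toy next-token step: convert raw scores (logits) into probabilities
# via softmax, then sample one token. No step consults reality.
def sample_next_token(logits: dict[str, float]) -> str:
    exps = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exps.values())
    r = random.random()
    cumulative = 0.0
    for tok, e in exps.items():
        cumulative += e / total  # running sum of softmax probabilities
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding at the boundary

# Invented scores for the continuation of "Popcorn contains ...":
print(sample_next_token({"no": 2.1, "gluten": 1.7, "fiber": 0.4}))
```

Whether "no" or "gluten" comes out depends only on those scores, which encode patterns in the training data, not the result of a fact check.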
That's not exactly true for the last and current generation: there are "coach" expert systems that verify certain outputs before they're ever presented to the consumer. They're still only about 75% effective, though that number is growing.
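In case it helps, here's a minimal sketch of that generate-then-verify pattern. Everything in it is a hypothetical stand-in: generate() plays the base model and verify() plays an independent checker (a second model, a retrieval lookup, etc.); neither is a real API, and the "facts" are hardcoded for the demo.

```python
import random

# Hypothetical base model: just picks one of two canned answers.
def generate(prompt: str) -> str:
    return random.choice(["Popcorn is gluten-free.",
                          "Popcorn contains gluten."])

# Hypothetical independent checker: accepts only drafts that match
# a trusted knowledge source (hardcoded here for the sketch).
def verify(prompt: str, draft: str) -> bool:
    known_facts = {"Popcorn is gluten-free."}
    return draft in known_facts

def answer_with_verifier(prompt: str, max_tries: int = 3) -> str:
    for _ in range(max_tries):
        draft = generate(prompt)   # unverified model output
        if verify(prompt, draft):  # only checked text reaches the user
            return draft
    return "I'm not sure."         # refuse rather than show a failed draft

print(answer_with_verifier("does popcorn have gluten"))
```

The key design point is that the checker is separate from the generator, so its reliability bounds what the user sees; a checker that's only ~75% effective still lets unverified-quality answers through a fair chunk of the time.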
Still less reliable than a human subject-matter expert, though.