

We should at least refer to inference LLMs as LLMs. The fact that if you asked one something like "who is the current top CS2 team," it would give you the top team at the time it was trained is proof enough that the models effectively know nothing.
The guy at work who managed git before me didn't quite have the knowledge I do and wasn't using LFS. In one of the main repos, a 200 MB binary was pushed 80+ times, and that's not the only file it happened to. Even if you do a shallow clone, you eventually need to deepen the history anyway. It's a nightmare.
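For anyone in the same boat: before reaching for LFS or a history rewrite, it helps to see exactly which blobs are bloating the history. This is a common plain-git sketch (no LFS required); the `head -20` cutoff is just an arbitrary choice here.

```shell
# List the 20 largest blobs anywhere in history: size in bytes, then path.
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '$1 == "blob" { print $3, $4 }' |
  sort -rn |
  head -20
```

Once you know the offenders, something like `git lfs migrate import` (or `git filter-repo`) can rewrite them out, but that rewrites hashes, so coordinate with everyone who has a clone first.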
I am not sure how correct you are but you are probably more correct than a lot of responses.
That is kind of funny: sure, it parses human speech, but when you use the method designed specifically for communicating letters and numbers clearly, it breaks.
An undocumented feature flag in a plug-in that drastically changes its behavior in any deployment mode.