qaz@lemmy.world to Programmer Humor@programming.dev · English · 5 days ago
Sept (image post)
Xylight@lemdro.id · 2 days ago
That is a thing, and it's called quantization-aware training (QAT). Some open-weight models, like Gemma, do it.
The problem is that you need to re-train the whole model for it, and if you also want a full-quality version you end up training a lot more.
It's still less precise, so it will still be worse quality than full precision, but it does reduce the quality loss from quantization.
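
For a concrete picture, here's a rough toy sketch of the idea in plain Python (a hypothetical example, not any real framework's API): the forward pass sees a fake-quantized copy of the weight, while the gradient update is applied to the full-precision weight (a straight-through estimator), so the model learns weights that already absorb the rounding error.

```python
# Toy illustration of the idea behind quantization-aware training (QAT).
# Hypothetical example, not any specific library's API.
#
# Forward pass: use a "fake-quantized" copy of the weight (rounded to an
# int8-style grid). Update: apply the gradient to the full-precision
# weight (straight-through estimator), so the model learns a weight that
# already tolerates the rounding error it will see after real quantization.

def fake_quantize(w: float, scale: float) -> float:
    """Round w to the nearest int8 step, clamp to [-128, 127], map back."""
    q = max(-128, min(127, round(w / scale)))
    return q * scale

# Tiny "dataset": y = 2.5 * x
data = [(x, 2.5 * x) for x in range(-5, 6)]

w = 0.0       # full-precision (latent) weight that actually gets updated
scale = 0.1   # quantization step size, fixed here for simplicity
lr = 0.01

for epoch in range(200):
    for x, y in data:
        wq = fake_quantize(w, scale)   # forward pass sees the quantized weight
        y_hat = wq * x
        grad = 2 * (y_hat - y) * x     # d/d(wq) of the squared error
        w -= lr * grad                 # straight-through: update the latent w

print(f"latent weight:    {w:.4f}")
print(f"int8-grid weight: {fake_quantize(w, scale):.4f}")
```

Real QAT does this per layer (often per channel) with calibrated or learned scales, but the principle is the same.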

mudkip@lemdro.id · 7 hours ago
Your response reeks of AI slop

4/10 bait

mudkip@lemdro.id · 3 hours ago
Is it, or is it not, AI slop? Why are you using markdown formatting so heavily? That's a telltale sign of an LLM being involved.

Xylight@lemdro.id · 2 hours ago
I am not using an LLM, but holy bait.
Hop off the Reddit voice.

mudkip@lemdro.id · 2 hours ago
…You do know what platform you're on? It's a REDDIT alternative.