swiftywizard@discuss.tchncs.de to Programmer Humor@programming.dev · edited 22 days ago
Lavalamp too hot (image, via discuss.tchncs.de)
Feathercrown@lemmy.world · edited 16 hours ago
Hmm, interesting theory. However:
- We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.
- LLMs are running at a loss right now; the company would lose more money than it gains from you, so there is no motive.
Jerkface (any/all)@lemmy.ca · 14 hours ago
It was proposed less as a hypothesis about reality than as virtue signalling (in the original sense).
MotoAsh@piefed.social · 15 hours ago
Of course there's a technical reason for it, but they have an incentive to try to sell even a shitty product.
Feathercrown@lemmy.world · 9 hours ago
I don't think this really addresses my second point.