return2ozma@lemmy.world to Technology@lemmy.world, English · 1 day ago
Testing suggests Google's AI Overviews tell millions of lies per hour (arstechnica.com)
25 comments
8oow3291d@feddit.dk · 23 hours ago
> LLMs don't have any intentions.
Eh. The output from LLMs is usually pretty goal-oriented, so arguably they do have intentions. The LLM is not designed to deceive, though, so in that sense it is correct that these are not lies.
supamanc@lemmy.world · 18 hours ago
An LLM is a statistical modeling tool. It doesn't have goals. It can't have intentions. It just outputs according to an algorithm.
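The "statistical modeling tool" point above can be sketched concretely: at its core, a language model repeatedly samples the next token from a learned probability distribution. A minimal toy illustration (the tokens and probabilities here are invented for the sketch, not taken from any real model):

```python
import random

# Toy next-token distribution; a real model computes these probabilities
# from its weights, but the sampling step is just weighted random choice.
next_token_probs = {
    "the": 0.4,
    "a": 0.3,
    "cat": 0.2,
    "quantum": 0.1,
}

def sample_next_token(probs):
    """Pick a token at random, weighted by its probability.
    No goals, no intentions -- just following the numbers."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

token = sample_next_token(next_token_probs)
```

Whether the output looks "goal-oriented" or "deceptive," the mechanism underneath is this kind of weighted sampling, which is what the comment means by "outputs according to an algorithm."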
deliriousdreams@fedia.io · 22 hours ago
The people who program, run, and maintain the LLM have intentions. The LLM is not a sapient or sentient entity.