Camp Rules (image) · Thales@sh.itjust.works to 196@lemmy.blahaj.zone · English · 2 days ago · sh.itjust.works · 81 comments
brucethemoose@lemmy.world · edited · 6 hours ago
Not a great metric either: models with simpler outputs (like text embedding models, which output a single number representing 'similarity', or machine vision models that recognize objects) are still extensively trained. Another example is NNEDI3, a very primitive edge-enhancement model. Or LanguageTool's tiny 'word confusion' model: https://forum.languagetool.org/t/neural-network-rules/2225
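To illustrate the "single number" output mentioned above: a text embedding model maps each text to a vector, and the similarity score is typically the cosine of the angle between two such vectors. This is a minimal sketch with made-up toy vectors; a real embedding model (e.g. a sentence-transformer with hundreds of dimensions) would produce the vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (illustrative values only).
emb_cat = [0.9, 0.1, 0.0, 0.2]
emb_kitten = [0.8, 0.2, 0.1, 0.2]
emb_car = [0.0, 0.9, 0.8, 0.1]

# Related texts should score higher than unrelated ones.
print(cosine_similarity(emb_cat, emb_kitten))
print(cosine_similarity(emb_cat, emb_car))
```

The point stands regardless of output size: even though the final output is a single scalar per pair, the model that produces the vectors is itself the product of extensive training.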