themachinestops@lemmy.dbzer0.com to Technology@lemmy.world · English · edited, 3 days ago
Dell admits consumers don’t care about AI PCs; Dell is now shifting its focus this year away from being ‘all about the AI PC.’ (www.theverge.com)
RobotToaster@mander.xyz · 3 days ago
Do NPUs/TPUs even work with ComfyUI? That’s the only “AI PC” I’m interested in.
L_Acacia@lemmy.ml · 2 days ago
Support for custom nodes is poor, and NPUs are fairly slow compared to GPUs (expect 5x to 10x longer generation times than 30xx-or-newer GPUs in best-case scenarios). NPUs are good at running small models efficiently, not large LLM/image models.
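If you want to check that kind of ratio on your own hardware, here is a minimal timing sketch using ONNX Runtime execution providers. It assumes a Qualcomm-style NPU exposed through the QNN provider (which needs the onnxruntime-qnn build); the model path and input shape are placeholders, not anything from this thread:

```python
# Time the same ONNX model on the NPU vs. the CPU by swapping
# ONNX Runtime execution providers. "model.onnx" and the input
# shape below are placeholders for whatever model you test.
import time
import numpy as np
import onnxruntime as ort

def avg_latency(model_path, providers, runs=10):
    session = ort.InferenceSession(model_path, providers=providers)
    name = session.get_inputs()[0].name
    x = np.random.rand(1, 3, 512, 512).astype(np.float32)  # placeholder shape
    session.run(None, {name: x})  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        session.run(None, {name: x})
    return (time.perf_counter() - start) / runs

print("available providers:", ort.get_available_providers())
cpu = avg_latency("model.onnx", ["CPUExecutionProvider"])
npu = avg_latency("model.onnx", ["QNNExecutionProvider", "CPUExecutionProvider"])
print(f"CPU {cpu:.3f}s/run, NPU {npu:.3f}s/run, speedup {cpu / npu:.1f}x")
```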
SuspciousCarrot78@lemmy.world · edited, 2 days ago
NPUs yes, TPUs no (or not yet). Rumour has it that Hailo is meant to be releasing a plug-in NPU “soon” that accelerates LLMs.
Fermiverse@gehirneimer.de · 3 days ago
https://github.com/patientx/ComfyUI-Zluda
Works with the 395+.
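As a quick sanity check after setting that up, you can confirm PyTorch sees the AMD card as a CUDA device, since ZLUDA presents the GPU through the CUDA API. A minimal sketch (the device name and memory printed will reflect your own card):

```python
# Verify PyTorch under ZLUDA exposes the AMD GPU as a CUDA device
# before pointing ComfyUI at it.
import torch

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul ok:", (x @ x).shape)  # runs on the GPU through ZLUDA
else:
    print("No CUDA device visible; check the ZLUDA installation.")
```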