themachinestops@lemmy.dbzer0.com to Technology@lemmy.world · English · 12 days ago
Dell admits consumers don’t care about AI PCs; Dell is now shifting its focus this year away from being ‘all about the AI PC.’ (www.theverge.com)
RobotToaster@mander.xyz · 12 days ago
Do NPUs/TPUs even work with ComfyUI? That’s the only “AI PC” I’m interested in.
SuspciousCarrot78@lemmy.world · edited · 12 days ago
NPUs yes, TPUs no (or not yet). Rumour has it that Hailo is meant to be releasing a plug-in NPU “soon” that accelerates LLMs.
L_Acacia@lemmy.ml · 12 days ago
Support for custom nodes is poor, and NPUs are fairly slow compared to GPUs (expect 5x to 10x longer generation times than 30xx-series or newer GPUs, even in best-case scenarios). NPUs are good at running small models efficiently, not large LLM or image models.