return2ozma@lemmy.world to Technology@lemmy.world, English · 12 days ago
Huge Study of Chats Between Delusional Users and AI Finds Alarming Patterns (futurism.com)
117 comments · cross-posted to: [email protected]
affenlehrer@feddit.org, English · 12 days ago
Also, the LLM is just predicting the next token, not selecting it. And it isn't limited to the assistant role: if you (mis)configure the inference engine accordingly, it will happily predict user tokens or any other tokens (tool calls, etc.).
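The point about roles being a template artifact can be sketched with a toy ChatML-style prompt builder (the `<|im_start|>` tokens and the `build_prompt` helper here are illustrative, not any specific engine's API):

```python
def build_prompt(messages, next_role="assistant"):
    """Flatten chat messages into one plain-text prompt string.

    The model only ever sees this string and predicts the next token;
    the 'role' structure exists purely in the template.
    """
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open a block for whichever role we want the model to continue as.
    parts.append(f"<|im_start|>{next_role}\n")
    return "\n".join(parts)

msgs = [{"role": "user", "content": "Hello"}]
assistant_prompt = build_prompt(msgs)               # the usual assistant turn
user_prompt = build_prompt(msgs, next_role="user")  # "misconfigured": model would predict user tokens
```

Nothing in the model distinguishes the two prompts except the literal role string at the end, which is why an engine that ends the template with a different role gets the model speaking as that role.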