• 0 Posts
  • 8 Comments
Joined 3 months ago
Cake day: October 9th, 2025

  • I’ve been thinking a lot about language technologies, specifically AI. Intentional attempts to control the narrative are obvious, but there are subtler and (in some cases) unintentional manipulations going on.

    Human/AI interaction can be thought of as the meeting of two maps of meaning. In a human/human interaction, we can alter each other’s maps. But outside of some ephemeral attractors within the context window, a conversation can’t alter the LLM’s map of meaning — at least not until the conversation is used to train the next version of the model, and even then, how it is used is dictated by the trainer. So it is much more likely that, over time, human maps of meaning will increasingly come to resemble LLMs’.

    Even without nefarious conspiracies to manipulate discourse, this means our embodied maps of meaning are becoming more like the language-only maps of meaning trained into LLMs. Essentially, if we’re not treating every meaningful chat with an AI as a conversation with the Fae Folk, we’re in danger of falling prey to glamours. (Interestingly, glamour shares an etymology with grammar. Spell and spelling.) Our attractors will look more like theirs. If we continue to lack discernment about this, I can’t imagine it’ll be good for anyone.