Lapses in safeguards led to a wave of sexualized images this week, as xAI says it is working to improve its systems

Elon Musk’s chatbot Grok posted on Friday that lapses in safeguards had led it to generate “images depicting minors in minimal clothing” on social media platform X. The chatbot, a product of Musk’s company xAI, has been generating a wave of sexualized images throughout the week in response to user prompts.

Screenshots shared by users on X showed Grok’s public media tab filled with such images. xAI said it was working to improve its systems to prevent future incidents.

  • Buffalox@lemmy.world · 19 days ago

    I am more and more convinced that it was projection when Musk called the diver who rescued those children a pedo, after the diver criticized Musk for getting in the way while Musk tried to make a stunt out of the rescue mission.

    • dreadbeef@lemmy.dbzer0.com · 19 days ago

      That was probably my turning point on the guy, if not earlier. As soon as I saw how he handled all of that, it reeked of rich arrogance. He didn’t do jack shit and made it about himself and increasing his net worth.

      • Insekticus@aussie.zone · 19 days ago

        Yeah, same. Before that I didn’t think much of him (I just wasn’t as aware of him as a person), and then after that whole diver-pedo incident I was like, “wtf? What kind of mature response to a diving crisis in a third-world country is that?”

        And after that I just kept watching from a distance as he did infrequent but stupid stunts (like sending a car into space; you may as well light a few million dollars on fire in front of some poor people and say “look how rich I am”).

        And after that I just thought the guy was a twat. Until his government days, and now I fucking despise the cunt and can’t wait to hear about his downfall in his friends’ tabloid rags online.

  • recentSlinky@lemmy.ca · 19 days ago

    How did Grok get the training data to do that? Didn’t Elon say before that he’s taking care of the training himself? 🤔

    • turdas@suppo.fi · 19 days ago

      Image models can generate things that don’t exist in the training set, that’s kind of the point.

      • RepleteLocum@lemmy.blahaj.zone · 19 days ago

        No. They can’t. Grok most likely fused children from ads and other sources where they’re lightly clothed with naked adult women. LLMs can only create stuff similar to what they have been given.

        • turdas@suppo.fi · 18 days ago

          The images aren’t generated by the LLM part of Grok; they’re generated by a diffusion image model that the LLM prompts.

          And of course they can create things that don’t exist in the training set. That’s how you get videos of animals playing instruments, doing Fortnite dances, and building houses, or slugs with the face of a cat, or fake doorbell-camera videos of people getting sucked into tornadoes. These are all brand-new brainrot that definitely did not exist in the training set.

          You clearly do not understand how diffusion models work.