• 0 Posts
  • 22 Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • This sounds real Nazi-adjacent. Not far from Russia’s more conservative swing, either.

    …I will acknowledge this is an issue. Since we’re apparently going to blow up immigration (which skews young), the US now has a real “aging population” problem like the one South Korea and Japan are facing, and that’s coming for China and Russia soon.

    In other words, it’s not all made up.

    I’m speaking as a guy, so my perspective isn’t the most relevant, but… to me, everything in that document sounds like a great way to turn off women who aren’t already steeped in this nuclear-family culture anyway. Like, I have a couple in my family who are literally the exact target of this campaign, and they won’t like that one bit.









  • I am late to this argument, but data center imagegen is typically batched so that many images are made in parallel. And (from the providers that aren’t idiots), the models likely use more sparsity or “tricks” to reduce compute.

    Task energy per image is waaay less than on a desktop GPU (rough numbers sketched at the end of this comment). We probably burnt more energy in this thread than an image takes, or a few.

    And this is getting exponentially more efficient with time, in spite of what morons like Sam Altman preach.


    There’s about a billion reasons image slop is awful, but the “energy use” one is way overblown.
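
    To put a very rough number on that, here’s a back-of-envelope sketch in Python. The wattage, batch size, and batch time below are illustrative assumptions, not measurements from any particular provider:

    ```python
    # Rough per-image energy when generation is batched on one datacenter GPU.
    # All three inputs are assumptions for illustration, not measured figures.
    gpu_power_w = 700        # assumed power draw of one accelerator under load
    batch_size = 8           # assumed images generated in parallel per batch
    seconds_per_batch = 4.0  # assumed wall-clock time for one batch

    joules_per_image = gpu_power_w * seconds_per_batch / batch_size
    wh_per_image = joules_per_image / 3600
    print(f"~{joules_per_image:.0f} J (~{wh_per_image:.2f} Wh) per image")
    # ~350 J, i.e. roughly 0.1 Wh -- far less than an unbatched run on a desktop
    # GPU grinding for tens of seconds per image at a few hundred watts.
    ```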






  • It would depend on how much infrastructure they can put in place to receive them.

    Ideally it would be “unlimited.”

    Immigration is just good for the US economy: immigrants tend to skew young and (to be blunt) low wage when they get here, and just look at how far they go here compared to most countries. It’s supposed to be a cornerstone of US culture, and the country is freaking huge.

    Integration? America was originally a hodgepodge of homesteads; that’s the idea.

    The limiting factors are housing, schooling, employment: just having somewhere for them to go, live, and work.

    TL;DR: As many as possible as long as they aren’t forced into poverty.


    …Hence, I find it incredible that we, as a country, collectively decided to squander that massive strategic advantage for… what?

    It just doesn’t make any sense, even if you set morality aside, or even if you truly believe the propaganda that they’re responsible for most crime, which is nonsense.



  • brucethemoose@lemmy.world to Comic Strips@lemmy.world · United Nations · 3 days ago

    Beating climate change is (mostly) a mind game. The societal changes needed aren’t too dramatic, but they’d make a few of the uber-wealthy less rich right this second, so here we are.

    So what this comic really implies is “we beat corporate propaganda! We spread a scientific message to the masses!”

    Now that would be something.


  • It’s not so much about English as it is about writing patterns. Like others said, it has a “stilted college essay prompt” feel because that’s what instruct-finetuned LLMs are trained to do.

    Another quirk of LLMs is that they overuse specific phrases, which stems from technical issues (training on their own output, training on other LLMs’ output, training on human SEO junk, artifacts of whole-word tokenization, inheriting style from their own earlier output as they write the reply, just to start).

    “Slop” is an overused term, but this is precisely what people in the LLM tinkerer/self-hosting community mean by it. It’s also what the “temperature” setting you may see in some UIs is supposed to combat, though that’s crude and ineffective if you ask me (there’s a tiny sketch of what temperature actually does at the end of this comment).

    Anyway, if you stare at these LLMs long enough, you learn to see a lot of individual models’ signatures. Some of it is… hard to convey in words. But “embodies,” “landmark achievement,” and such just set off alarm bells in my head, specifically for ChatGPT/Claude. If you ask an LLM to write a story, “shivers down the spine” is another phrase so common it’s a meme, as are the specific names they tend to choose for characters.

    If you ask an LLM to write in your native language, you’d run into similar issues, though the translation should soften them some. Hence when I use Chinese open weights models, I get them to “think” in Chinese and answer in English, and get a MUCH better result.

    All this is quantifiable, by the way. Check out EQBench’s slop profiles for individual models:

    https://eqbench.com/creative_writing_longform.html

    https://eqbench.com/creative_writing.html

    And its best guess at inbreeding “family trees” for models:

    inbreed
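
    For anyone curious what that “temperature” knob actually does, here’s a minimal sketch of temperature-scaled next-token sampling. The toy vocabulary and logits are made up for illustration:

    ```python
    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
        """Divide logits by temperature, softmax, then sample.
        Higher temperature flattens the distribution (less dominance for
        overused tokens); lower temperature sharpens it."""
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())  # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Toy model that strongly prefers the cliche continuation.
    vocab = ["shivers", "warmth", "silence", "static"]
    logits = [4.0, 1.5, 1.0, 0.5]

    for t in (0.5, 1.0, 1.5):
        picks = [vocab[sample_next_token(logits, temperature=t)] for _ in range(1000)]
        print(t, {w: picks.count(w) for w in vocab})
    # At low temperature "shivers" wins almost every time; raising it spreads the
    # picks out, which is why temperature alone is a blunt tool against slop phrases.
    ```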