Most AI platforms allow sexualized content to varying degrees. Google, Instagram, TikTok, etc. all host CSAM, and always have. The understanding is that they aren't liable as long as they remove it when it's reported. Their detection technology is good enough to handle most of it automatically, but it's never perfect. They log where content originates and comply with subpoenas, which has led to plenty of convictions.
Grok's image generation was put behind a paywall, which some people claim makes things worse. But paying users largely give up their anonymity, so they can be identified and dealt with appropriately when they request illicit content, even when Grok refuses the request, as it usually does.
I think the Grok issue is sensationalized and divorced from the realities of how online content moderation and law enforcement actually work.

They are way more deadly than ICE or USBP. You think two people tipped the scale?