







I mean, if you want to play hardball: our attorney general is the one who actually fired the prime minister, and he was a CIA plant.


Okay, fair enough. The first bit made sense to me at the time, but I was having a weird day. The point was that there are things you can't prove; it doesn't make a lot of sense rereading it.
The rest of it is saying that there's no debate worth having: they're not conscious, nor sentient, and I'm trying to quantify what exactly they are. I'm just talking into the void and arguing against opinions I've seen. Don't mind me. I get passionate and am prone to long-winded rants.


You can’t prove all ravens are black. The discovery of even one white raven would disprove the “fact” that all ravens are black, and we can by no means be sure that we gathered all ravens to test the theory.
However, we can look around and comment that there doesn’t appear to be any white ravens anywhere…
Do you know about the 'bouba' and 'kiki' study? People made up words that don't exist in English and asked people whether round objects are more 'bouba' or 'kiki'. AI can't answer this question - not without being fed how to. Toddlers could answer it. It comes down to how it consumes information, and if there's no pattern… When asked to define words it had rarely been fed, i.e. usernames people had made up, the AI's apparent consciousness breaks down. As soon as something isn't likely to be followed by another word, the machine breaks, and no one would pretend it has consciousness after that.
Learning models are just pattern recognition machines. LLMs are the kind that mix and match words really well. This makes them seem intelligent, but it just means they can express language and information in a way we understand - something machines historically couldn't do. Consciousness gets into "what is the soul" territory, so I'm staying away from it. The best I can say of AI is that it's interesting that language appears to be a system constructed well enough that we can teach it to machines. Even more interesting that we anthropomorphise models when they do it well.
AI doesn't have memory, it can't think for itself - it references what it has consumed - and it can't teach itself new tricks. All of these are experimental research areas for AI, and all of them are ingredients of consciousness. It's just very good at sentence generation.
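To make the "pattern recognition machine" point concrete, here's a toy next-word model of my own (the training text and names are made up for illustration, nothing like a real LLM's scale): it only knows counts from what it was fed, and it has literally nothing to say about a word it never saw.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in the training text.
training_text = "the raven is black the raven is large the sky is dark".split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most likely next word, or None for words never seen."""
    if word not in follows:
        return None  # no pattern to match - the model has nothing to say
    return follows[word].most_common(1)[0][0]

print(predict_next("raven"))  # a word it has seen patterns for
print(predict_next("bouba"))  # a made-up word: no pattern, no answer
```

A real LLM degrades more gracefully than returning None, but the failure mode is the same in kind: no pattern in the training data, no sensible output.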


Right, a question that literal neuroscientists couldn’t answer.
I believe the technical term is "your brain is way more fucking complex". We have something like 50 (I'm not a neuroscientist, I just studied AI) chemicals being transmitted around the brain, constantly. They're used and passed on by cells which do biological and chemical things I don't understand. Ever heard of dopamine, cortisol, serotonin? AIs don't have those. We have neurons that don't connect to every other neuron - only tech bros would think a fully connected layer is an acceptable approximation of that. Our brain forms literal pathways, along which it transmits those chemicals. No, a physical connection is not the same as a higher average weight, and the people who came up with the AI maths in the 50s would back me up.
AI uses floating point maths to draw correlations and make inferences. More advanced AI does more of this per second and has had more training. Its "neurons" are a programming abstraction used to explain a series of calculations and inputs; they're not actual neurons, nor advanced pieces of tech. They're not magic.
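For anyone who hasn't seen it spelled out, this is roughly all an artificial "neuron" is - a weighted sum squashed through a function (the numbers below are arbitrary, just for illustration):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs, squashed to (0, 1).

    No chemicals, no physical pathways - just floating point arithmetic.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Arbitrary illustrative values
out = neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2)
print(round(out, 3))
```

A network is just millions of these stacked up, with the weights adjusted during training. Compare that to a biological neuron and the gap is obvious.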
High schoolers could study AI for a single class, then neurobiology right after, and realise just how basic the AI model is as a mimic of a brain. It's not even close, but I guess Sam Altman said we're approaching general intelligence, so I'm probably just a hater.


OpenVAS is a vulnerability scanner. Metasploit is a penetration testing framework.
The first one does what OP wants. The second one less so, and it's more hands-on.
See dirbuster for automated brute-force discovery of web directories; it gives you response codes that tell you whether a page is accessible to the outside world. Also see nuclei, which I haven't used myself, but it seems to get good reviews for automated vuln scanning from the command line - nice output and apparently simple to use.
They’re both easy to use and install on something like Kali Linux.
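For flavour, a basic nuclei run looks something like this (the target URL is a placeholder, and I'm going from memory on the flags - check `nuclei -h` before trusting me; only scan hosts you have permission to test):

```shell
# Scan a single target with nuclei's community templates,
# keeping only higher-severity findings (illustrative target)
nuclei -u https://example.com -severity critical,high

# Same scan, saving the findings to a file for later triage
nuclei -u https://example.com -severity critical,high -o findings.txt
```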


Read up on the time Christianity started a war with itself over flat bread vs not flat bread. People love to control others, and it's just so fucking weird. I've got enough social anxiety and life problems without thinking about controlling what everyone else does.


Depends what you want to do.
With distrobox, I installed a containerised version of Ubuntu that can interact with my host, sort of like WSL on Windows. Anything I put in it remains isolated, so I can't install packages that break my system - and I can use apt to install whatever I want rather than rpm.
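The distrobox workflow I'm describing is roughly this (the box name, image tag, and package are just examples, not anything you have to use):

```shell
# Create a containerised Ubuntu that shares your home dir with the host
distrobox create --name dev-ubuntu --image ubuntu:22.04

# Enter it and use apt freely - breakage stays inside the box
distrobox enter dev-ubuntu
sudo apt install build-essential
```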
You could develop in a VM or a container like distrobox, and tbh, the host can be whatever you need it to be. You don't actually have to move off Mint.
That being said, I don't see why you couldn't just develop on Bazzite (or an atomic distro of your choice) using flatpaks for IDEs. I believe it has a C++ toolchain installed, and you'd be able to layer whatever language you needed onto your atomic distro of choice.


I really do suggest using Bazzite if you don’t want to wait for steamOS.
I previously used Mint; I haven't had to install an nvidia graphics driver or a new kernel since moving to Bazzite, and I'm now learning distrobox so I can make my usual bad computing decisions in a safe space. It's a very stable base, and with container tech layered on, you can have all the fuck-around you want with minimal find-out.


They didn't have time to playtest the game, so the AI can fire faster than a human. It was one of the things they sorted for Halo CE but never got to here due to constraints. It is literally the hardest Halo game because of this. I think on Legendary you're even the weakest character entity in the game.
But on the horizon, surrounding the shoppers, came the deafening roar of chickens in choppers