

Maybe that’s the point: people want to play Morrowind, but they don’t have a platform that can actually play it


LLMs can’t be self-aware because they can’t be self-reflective. An LLM can’t stop a lie once it’s started one. It can’t say “I don’t know” unless that’s the most likely response in its training data for a given prompt. That’s why it crashes out if you ask about a seahorse emoji: there is no reason or mind behind the generated text, despite how convincing it can be


I recently got an Xbox One S to play my 360 games via remote play, but $10 a month for Xbox Live is ridiculous


Watching those hack frauds causes psychological harm. Rich Evans is known to trigger depression


I mean, Steam adds a convenient way to keep your games up to date instead of having to manually patch them. I was also on the anti-Steam bandwagon for the longest time, until I finally gave in and bought Modern Warfare 2 in 2010. I ended up repurchasing the rest of the Call of Duty games because it was so convenient not needing the discs and not having to hunt down patches.
Steam is the one launcher I don’t get pissed about having to use, because it has so many value-add features.
Unlike Epic/Origin/Uplay


I’d take BBC/PBS over Fox News any day.


Usually when you think of reform, you imagine things getting better, not things getting worse…


You are right but you can’t exactly publish something and expect it to be private


The protocol is ActivityPub not ActivityPriv


If you don’t sign in or don’t interact, then you don’t have anything to worry about. Reddit doesn’t make votes public, but it definitely is selling your voting data, along with IP and location data, to third parties.
Lemmy just publishes the data it needs to make ActivityPub work. If you don’t do anything that generates an AP activity, then there is no data on you that somebody can compile. I agree that it probably isn’t a good idea to hide the fact that AP activities like upvotes and downvotes are public, but that’s how the protocol works
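To make the "votes are public by design" point concrete, here's a minimal sketch of the kind of ActivityStreams object a federated server sends out when a user votes on a post. The actor and object URLs are made-up placeholders, and real servers wrap this in signed delivery requests, but the shape follows the ActivityStreams vocabulary: the voter and the target are both plainly identified.

```python
import json

def build_vote_activity(actor_url: str, post_url: str, upvote: bool = True) -> dict:
    """Build a bare-bones Like/Dislike activity (placeholder URLs, no signing)."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Like" if upvote else "Dislike",
        "actor": actor_url,   # publicly identifies who voted
        "object": post_url,   # publicly identifies what was voted on
    }

activity = build_vote_activity(
    "https://example.social/u/alice",      # hypothetical user
    "https://example.social/post/123",     # hypothetical post
)
print(json.dumps(activity, indent=2))
```

Since every receiving server in the federation gets a copy of this object, any of them can log who voted on what; there's no place in the protocol to hide it.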


lol my main pc runs on a Xeon from 2011 and 16 GB of DDR3. Now, it doesn’t play games newer than 2016, but that’s beside the point as I rarely play anything made past 2011


One of my favorite comments I ever read online was from someone who said they did this dance at the club and got thrown out


Same as it ever was. Same as it ever was


If their memes are being downvoted, maybe that means they aren’t wanted?


Funny, I keep saving instead of spending and yet my savings never grow, because the costs of living do instead


We were also actively in Iraq, and his supermajority would not have supported him if he had wanted to abolish DHS or ICE


DHS, ICE, and CBP were established by acts of Congress. Obama couldn’t have just gotten rid of them, or reformed them beyond executive orders, which he did with things such as DACA for Dreamers.
Congress would have acted if he had acted illegally the way Trump does today while Congress abdicates.


I jumped in the hot tub with my phone in my pocket last summer and needed a phone and couldn’t really wait for one to ship from a random eBay or swappa seller so I had to go to Best Buy.
They had nothing carrier-unlocked newer than the 128GB iPhone 15 for $800, refurbished. All they had otherwise was a couple of old Pixels and Galaxies, and they weren’t much cheaper.
Policies like this impact the poor folks who can’t afford the cash for unlocked phones and are stuck paying high monthly service rates.


I want to preface my response by saying that I appreciate the thought and care put into your comments, even though I don’t agree with them. Yours as well as the others’.
The difference between a human hallucination and an AI hallucination is pretty stark. A human’s hallucinations are false information perceived by one’s senses: seeing or hearing things that aren’t there. An AI hallucination is false information invented by the model itself. It had good information in its training data but invents something that is misinformation at best and an outright lie at worst. A person experiencing hallucinations or a manic episode can temporarily lose their sense of self-awareness, but it returns with a normal mental state.
On the topic of self-awareness, we have tests we use to determine it in animals, such as being able to recognize oneself in a mirror. Only a few animals pass that test, such as some birds, apes, and mammals like orcas and elephants. Notably, very small children would not pass the test, but they eventually grow into recognizing that their reflection is them and not another being.
I think the test about the seahorse emoji went over your head. The point isn’t that the LLM can’t experience it; it’s that there is no seahorse emoji. The LLM knows there isn’t a seahorse emoji and can’t reproduce it, but it tries over and over again because its training data points to there being one when there isn’t. It fundamentally can’t learn, can’t self-reflect on its experiences. Even with an expanded context window, once it starts a lie, it may admit that the information was false, but 9/10 times when called out on a hallucination, it will just generate another slightly different lie. In my anecdotal experience at least, once an LLM starts lying, the conversation is no longer useful.
You reference reasoning models, and they do a better job of avoiding hallucinations by breaking prompts down into smaller problems and allowing the LLM to “check its work” before revealing the response to the end user. That’s not the same as thinking, in my opinion; it’s just more complex prompting. It’s not a single intelligence pondering the prompt, it’s different parts of the model tackling the prompt in different ways before being piped to the full model for a generative reply. A different approach, but at the end of the day it’s still an unthinking pile of silicon and various metals running a computer program.
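The "check its work" loop described above can be sketched as ordinary control flow around repeated generation calls. This is a toy illustration of that point, not how any real reasoning model is implemented: the `llm()` function here is a made-up stub standing in for a text-generation call, so the example runs without a model.

```python
def llm(prompt: str) -> str:
    """Stub generator: a real system would call a language model here."""
    if prompt.startswith("Check"):
        return "Incorrect: 2 + 2 = 4"   # canned critique of the draft
    return "2 + 2 = 5"                  # canned (wrong) first draft

def answer_with_verification(question: str) -> str:
    draft = llm(f"Answer: {question}")
    critique = llm(f"Check this answer for errors: {draft}")
    if critique.startswith("Incorrect"):
        # "Revising" is just consuming the critique text as more input;
        # there is no introspection, only another generation step.
        return critique.split(": ", 1)[1]
    return draft

print(answer_with_verification("What is 2 + 2?"))  # prints "2 + 2 = 4"
```

The verification pass catches the bad draft only because a second prompt happened to produce a correction; the whole loop is still text-in, text-out, which is the point being made above.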
I do like your analogy of the 7-year-old compared to the LLM. The main distinction, I find, is that the 7-year-old will grow and learn from their experience; an LLM can’t. Its “experience,” through prompt history, can give it additional information to apply to the current prompt, but that’s not really learning so much as another token to help it generate a specific response. LLMs react to prompts according to their programming; emergent and novel responses come from unexpected inputs, not from learning or otherwise departing from that programming.
I apologize that I probably didn’t fully address or rebut everything in your post; it was just too good a post to succinctly cover it all on a mobile app. Thanks for sharing your perspective