
I had the exact same thing just happen. Took me a good 25+ minutes of searching every image folder on my phone to find this…but the window had long since closed 🫤

Another reason to add verbose alt text on everything you post. That makes it easier to search for a post when you yourself need to find it again in the future.
Ah, but I also make a shit ton of comment memes. Really, I need to organize my shit. At maybe 4,000-ish images, it's kinda daunting now.

You can add alt text to images embedded in comments.
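For reference, on Lemmy-style instances the alt text goes in the square brackets of the standard Markdown image embed (the URL here is just a placeholder):

```
![this is where the fun begins](https://example.com/fun-begins.jpg)
```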

Wait.
Really?

And now, thanks to your alt text, that comment shows up when searching for “this is where the fun begins”. Alt text not only helps those who rely on software like screen readers, but also assists with indexing/searching.

Clearly we need a community repository of memes, searchable by alt-text tags. Who’s up for hosting an immich?
Would it have to be entirely public, so anybody can contribute images, edit tags, etc.? Or does Immich provide some sort of limited submission process for public instances that requires moderator review for each change?
Either? It supports various kinds of authentication. It could use OIDC (though that would require someone to also host an OpenID server and/or use commercial providers like Google, which probably wouldn't go down super well with this community) or username/password. I'll do some research and see if there's a better option for a semi-public sharing system, since Immich is primarily built for semi-private use (you have to join the instance to upload things). Some kind of moderation approval would probably be critical, at a minimum to keep illegal content off. There are a lot of options!
There's got to be an open-source tool for this.
Immich?
I assume that allows you to manually tag your images and then search based on tag. That would probably be the main feature for this.
I haven’t gotten around to trying it yet so I don’t know the features. Though I think it also does (local) image recognition to help identify things.
It does indeed support metadata searching. There's also a contextual ("smart") search option, but that relies on a CLIP model (a machine-learning model originally released by OpenAI; Immich runs it locally, so no OpenAI account or API is needed).
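The manual tag-and-search feature discussed above boils down to an inverted index mapping tags to images. A toy sketch in Python (this is an illustration of the idea, not Immich's actual implementation; the filenames and tags are made up):

```python
from collections import defaultdict


class MemeIndex:
    """Toy inverted index: tag -> set of image filenames."""

    def __init__(self):
        self.by_tag = defaultdict(set)

    def add(self, filename, tags):
        # Normalize tags to lowercase so searches are case-insensitive.
        for tag in tags:
            self.by_tag[tag.lower()].add(filename)

    def search(self, *tags):
        """Return the images carrying every requested tag."""
        sets = [self.by_tag.get(t.lower(), set()) for t in tags]
        return set.intersection(*sets) if sets else set()


index = MemeIndex()
index.add("prequel.jpg", ["star wars", "anakin", "fun begins"])
index.add("cat.jpg", ["cat", "fun begins"])

print(sorted(index.search("fun begins")))         # ['cat.jpg', 'prequel.jpg']
print(sorted(index.search("fun begins", "cat")))  # ['cat.jpg']
```

Multi-tag queries intersect the per-tag sets, so adding more tags only ever narrows the results, which is usually what you want when digging one meme out of thousands.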