• 0 Posts
  • 48 Comments
Joined 2 years ago
Cake day: January 17th, 2024



  • A “do everything” app is overkill. I am not a fan of many features Discord implemented over time. But the initial offering of having text chat, voice chat, and video chat in one app makes sense. It’s just super convenient to switch the communication type depending on what you are currently doing, without having to onboard and switch between tools.

    It’s also hard to draw a line if you want to go “do one thing well”. Mumble also includes text chat, user management, ACLs, etc. … for text chat one could use IRC, for user management there are IdPs, and so on. XMPP also doesn’t just do “one thing”: the “X” (= extensible) is heavily used, and there are extensions for all kinds of things. Some of the big messengers out there are (or were) using XMPP under the hood (just without federation).



  • Yes, experience matters a lot. I think the comparison of a coding agent to a trainee is somewhat appropriate. Leave them to their own devices, and you likely don’t get something you should be shipping to production. But guide them appropriately, and they are helpful. The difference, obviously, is that a trainee learns; an agent, not so much. At least not on an abstract level. Of course the more code you have, the more patterns they can copy. But if your baseline rots, so will the code the agent derives from that baseline.



  • Aside from fundamentalists, the use of LLMs and coding agents will increase. It’s a tool in the toolbox now, and many devs do or will play around with it. Some will have to learn not to overdo it, but that’s nothing new: plenty of fancy technologies and frameworks along the way caused disruptions because people jumped on the hype train without applying caution or critical thinking, and that evens out after a while.

    We might see a big drop in usage when costs increase, but it’s also very possible that the many technological advances we are currently making (hardware to run models becoming more streamlined, and the models themselves being tuned more and more) mean we reach a point where this can be done comparatively cheaply, and maybe even locally (to some degree), without having to take out a loan.

    I wouldn’t say “managed by LLM”, though, just because you spot (partially) agent-written commits. It’s hard to judge from the outside how much knowledge the maintainer puts into it. There is a wide spectrum between vibe coding and fully manual coding. And if we are honest, even “fully manual” is a flexible term (does code completion count? Does looking at Stack Overflow count? Does looking at other implementations count? Using a search engine?).

    The world is changing, for better or worse. But cut devs some slack and let them get used to the tools. (And to reiterate: bad quality and bugs were a thing before agents as well. It just took longer.)




  • Thanks for that long answer. I agree completely with the second half of it, and with most of the first half, but I have to add a remark:

    My understanding is that it’s harder to vet AI code in general because when it hallucinates it may do so in ways that appear correct on the surface, and/or in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior code engineers to check the output because of the lack of a frame of reference.

    That is mostly true, but it also depends on the usage. You don’t have to tell an agent to “develop feature X” and then go for a coffee. You can issue relatively narrowly scoped prompts that yield small amounts of changes/code, which are far easier to review. You can work that way in small iterations, making it completely possible to follow along and adjust small things instead of getting a big ball of mud to disentangle.

    And while it’s true that not everyone is able to vet code, that was also true before and without coding agents. Yet people run random curl-piped-to-bash commands they copy from some website because it says it will install whatever. They install something from Flathub without looking at the source (not even talking about the chain of trust for the publishing process here). There is so much bad code out there written by people who are not really good engineers but who are motivated enough to put stuff together. They made, and still make, ugly mistakes that are hard to spot and, due to bad code quality, hard to review.

    The main risk of agents is that they also increase the speed of these developers, which means they pump out even more bad code. But the underlying issue existed before, and agent use doesn’t automatically mean something is bad. Believing that would be dangerous, too, because it might reinforce a false sense of security when using a piece of code that was (likely) written without any AI influence. But that’s just not true; that code could be as harmful or even more harmful. You simply don’t know if you don’t review it. And as you said: most people don’t.


  • What you’re taking issue with though is deeper than ai. It’s online discourse that is so rude and nuance-less.

    I guess that’s a fair assessment. It’s just that lately it’s quite annoying that we have tons of AI hate, age-restriction FUD, etc., while at the same time war rages, the economy goes to shit, and more and more governments turn right-wing or outright fascist.

    We have so many problems, yet we rip each other’s throats out over topics that are ultimately irrelevant.

    But no, he was a dick about it and is now hiding his use of ai moving forward.

    I am with you that his last sentence was completely stupid. I am not with you regarding the “hiding” part. I was actually surprised there even were commits marked as Claude’s. The way I use agents is typically completely local: I review each diff, adjust as necessary, and then commit. The commit is then obviously by me, not Claude or whatever agent I am using at the time. I am pretty sure a lot of people work that way, so I actually think the default is to not see the involvement of AI. And I don’t do this to hide anything … that’s just a consequence of the workflow and how git works, and I didn’t even consider that it should be done any differently.
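
    A minimal sketch of that local workflow (the repo path, file name, and commit messages here are made up for illustration): the agent edits files in the working tree, you review the diff, and the resulting commit carries your identity, not the agent’s.

    ```shell
    set -e
    repo=$(mktemp -d)          # throwaway demo repository
    cd "$repo"
    git init -q
    git config user.name "Me"
    git config user.email "me@example.com"

    echo 'print("hi")' > app.py
    git add app.py
    git commit -q -m "Initial version"

    # ... a local coding agent edits the file in the working tree ...
    echo 'print("hello, world")' > app.py

    git diff                   # review the agent's change before staging
    git add app.py             # stage only what you approved
    git commit -q -m "Improve greeting (reviewed agent output)"

    git log -1 --format='%an'  # the recorded author is you, not the agent
    ```

    For larger changes, `git add -p` lets you stage individual hunks, so you can accept parts of the agent’s edit and rework the rest before committing.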

    That’s why I also understand his point, though he shouldn’t have put it so bluntly: if that marker had never been there, probably no one would have noticed to begin with.


  • Depends. If you are generally careful about which products/projects you use and audit them, and you notice that the owner has horrible code hygiene, bad dependency management, etc., then sure. But why judge them for the tools they use? You can still audit the result the same way. And if you notice that the code hygiene and dependencies suck, does it matter whether they suck because the author misused coding agents, because they simply didn’t give a damn, or because they are incapable of doing any better?

    You’ve likely stumbled on open source repos in the past where you rolled your eyes after looking into them. At least I have. More than once. And that was long, long before we had coding agents. I’ve used software where I later saw the code and was surprised it ever worked. Hell, I’ve found old code of mine where I wondered why it ever worked and what the fuck I’d been smoking back then.

    It’s ok to consider agent usage a red flag that makes you look closer at the code. But I find it unfair to dismiss someone’s work or abilities just because they use an agent, without even looking at what they (the author, ultimately) produce. And by “produce” I don’t mean the final binary, but their code.


  • Ok, maybe I misused the word. If that’s the case, sorry about that. But I hope my point comes across anyway: I really, really dislike that the community (or multiple communities, even) gets split between people who are ok with AI and people who are against it. This is, IMO, completely unnecessary. That doesn’t mean everyone should be ok with it, but we should not judge or condemn each other over a difference of opinion on the matter.

    If you notice a project goes downhill, it’s fine to criticize the author (or the whole project) for the degradation in quality. If there are strong indicators that AI is involved, by all means leave a snarky remark about that while complaining. But ultimately it’s the fuckup of a human.