I disagree. I used to spend most of my time writing code, fixing syntax, thinking through how to structure the code, and looking up documentation on how to use a library.
Now I first discuss with an AI Agent or ChatGPT to write a thorough spec before handing it off to an agent to code it. I don’t read every line. Instead, I thoroughly test the outcome.
Bugs that the AI agent writes, I would have also written. An example is unexpected data that doesn’t match expectations. I can’t fault the AI for those bugs.
I also find that the AI writes less buggy code than I did. It handles cases that I wouldn’t have thought of and uses best practices more often than I did.
Maybe I was a bad dev before LLMs, but I find myself producing better-quality applications much more quickly.
> An example is unexpected data that doesn’t match expectations. I can’t fault the AI for those bugs.
I don't understand: how can you not fault the AI for generating code that can't handle unexpected data gracefully? Expectations should be defined, input validated, and anything unexpected rejected. Resilience against poorly formatted or otherwise nonsensical input is a pretty basic requirement.
I hope I severely misunderstood what you meant to say because we can't be having serious discussions about how amazing this technology is if we're silently dropping the standards to make it happen.
yeah you're spot on, the whole "can't fault AI for bugs" mindset is exactly the problem. if a junior dev shipped code that crashed on malformed input we'd send it back for proper validation, so why would we accept worse from AI? I keep seeing this pattern where people lower their bar because the AI "mostly works", but then you get these silent failures or weird edge-case explosions that are way harder to debug than if you'd just written defensive code from the start. honestly the scariest bugs aren't the ones that blow up in your face, they're the ones that slip through and corrupt data or expose something three deploys later
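to make it concrete, here's roughly what "defensive from the start" looks like in python. this is just a sketch, and the payload shape and field names (`user_id`, `amount`) are made up for illustration:

```python
def parse_payment(payload: dict) -> dict:
    """Validate an incoming payload; raise ValueError on anything unexpected."""
    if not isinstance(payload, dict):
        raise ValueError("payload must be a dict")

    user_id = payload.get("user_id")
    if not isinstance(user_id, str) or not user_id:
        raise ValueError("user_id must be a non-empty string")

    amount = payload.get("amount")
    # bool is a subclass of int in Python, so reject it explicitly
    if not isinstance(amount, (int, float)) or isinstance(amount, bool) or amount <= 0:
        raise ValueError("amount must be a positive number")

    # reject unknown fields instead of silently ignoring them
    unknown = set(payload) - {"user_id", "amount"}
    if unknown:
        raise ValueError(f"unexpected fields: {sorted(unknown)}")

    return {"user_id": user_id, "amount": float(amount)}
```

the point is that malformed input fails loudly at the boundary instead of corrupting something downstream.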
> Now I first discuss with an AI Agent or ChatGPT to write a thorough spec before handing it off to an agent to code it. I don’t read every line. Instead, I thoroughly test the outcome.
This is likely the future.
That being said:

> I used to spend most of my time writing code, fixing syntax, thinking through how to structure the code, looking up documentation on how to use a library.
If you are spending a lot of time fixing syntax, have you looked into linters? If you are spending too much time thinking about how to structure the code, how about spending a few days coming up with some general conventions, or simply adopting existing ones?
If you are getting so much productivity from LLMs, it is worth checking whether you were simply unproductive relative to the average dev in the first place. If that's the case, think about what happens to your productivity gains when everyone else jumps on the LLM train. LLMs might be covering for your lack of productivity at the code level, but you might still be dropping the ball in non-code areas. That's the higher-level pattern I would be thinking about.
I was a good dev, but I did not love the code itself; I loved the outcome. Other devs would have done better on LeetCode and would have written cleaner code than me.
I’ve always been more of a product/business person who saw code as a way to get to the end goal.
That elite coder who hates talking to business people and who cares more about the code than the business? Not me. I’m the opposite.
Hence, LLMs have been far better for me in terms of productivity.
> I’ve always been more of a product/business person who saw code as a way to get to the end goal.
That’s what code always is: a description of how the computer can help someone get to the end goal faster. Devs care a little more about the description, because end goals change, and rewriting the whole thing from scratch is costly and time-consuming.
> That elite coder who hates talking to business people and who cares more about the code than the business? Not me. I’m the opposite.
I believe that coder exists only in your imagination. All the good ones I know are great communicators. Clarity of thought is essential to writing good code.
> I believe that coder exists only in your imagination. All the good ones I know are great communicators. Clarity of thought is essential to writing good code.
I don't think so. These coders exist everywhere. Plenty of great coders are great at writing the code itself but not at the business aspects. Many coders simply do not care about the business or customer part. To them, the act of coding, producing quality code, and the process of writing software is the goal. For instance, these people are the most likely to decline building a feature that customers and the business desperately need because it might make the code base harder to maintain. These people will also want to refactor more than build new features. In the past, these people had plenty of value. In the era of LLMs, I think they have less value than business/product-oriented devs.
> Many coders simply do not care about the business or customers part.
These coders may exist, but in my experience they are not that common. Most coders do care about the business or customer part, but they think very differently about these aspects than business people do, and thus come to very different conclusions about how to handle them.
In my experience, it's precisely these programmers who are often in conflict with business people:

- because they care about such topics,

- because they come to different conclusions than the business people do, and

- because they care so much about these business-related topics that they are very vocal and sometimes confrontational in their opinions.
In other words: coders who barely care about these business-related aspects are often much easier for business-minded people to handle.
> > That elite coder who hates talking to business people and who cares more about the code than the business? Not me. I’m the opposite.
> I believe that coder exists only in your imagination. All the good ones I know are great communicators. Clarity of thought is essential to writing good code.
Clarity of thought does not necessarily make you good at communicating with business people. People say about me, for example, that I am really good at communicating with people who deeply love research, but when I present arguments of similar clarity to business people, my often somewhat abstract considerations typically go over their heads.
You have way more trust in test suites than I do. How complex is the code you’re working with? In my line of work, most serious bugs surface in complex interactions between different subsystems that are really hard to catch in a test suite. Additionally, in my experience, the bugs AI produces are completely alien. You can have perfect code for large functions and then, somewhere in the middle, absolutely nonsensical mistakes. Reviewing AI code is really hard because you can’t use your normal intuitions and really have to check everything meticulously.
A great deal of my work is thinking about the code, which you can only do if you’re very familiar with it. Writing the code is trivial. I spend nearly all my work hours thinking about edge cases.
Even if I vibe coded an app without reading every line, I'm still very familiar with how the app works and should work. It's just a different level of abstraction. Vibe coding doesn't mean I'm not thinking about edge cases. In fact, I might think about them more, since I have more time to think about them when I'm not writing the code myself.