You know why the code is wrong because you have the experience to spot where the issue is and what it actually is.
If you’ve learned coding with LLMs from the start, you won’t acquire the experience needed to tell what’s wrong.
I’ve worked with a client who tried to generate code for an HCI Bluetooth device, attempting to recreate the full Bluetooth stack instead of picking the right product, with a working stack, from the start.
And that was a client who had technical knowledge, just not in Bluetooth and HCI.
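To make the contrast concrete, here’s roughly what “use a working stack” looks like: a minimal sketch in Python with the bleak library (my example, not what the client used; bleak rides on the OS’s existing Bluetooth stack instead of talking raw HCI):

```python
# Minimal BLE scan via bleak, which sits on top of the OS's
# battle-tested Bluetooth stack (BlueZ / WinRT / CoreBluetooth).
# pip install bleak
import asyncio
from bleak import BleakScanner

async def main():
    # Discover nearby BLE devices for 5 seconds.
    devices = await BleakScanner.discover(timeout=5.0)
    for d in devices:
        print(d.address, d.name)

asyncio.run(main())
```

A few lines like that, versus hand-rolling HCI packet parsing and state machines. That’s the difference picking the right product makes.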
And if you try to tell the AI what’s wrong, it will create bullshit code until it kinda works, adding more issues along the way.
I’m sure that AI will replace coders one day, but LLMs aren’t AI, and they are nowhere near ready to write decent, complex code.
Fuck man I don’t know how to explain this to people… AI creates code that works… But it’s like a house on stilts… One strong wind and it’s going to come crashing down…
But the pushback is… it’s faster… What about bugs? Just tell the AI to fix them…
I don’t know what to do… If the apps are low-usage, low-cost… For the end user it’s almost indistinguishable whether it’s 5000 lines of spaghetti code or 1000 lines of clean code… They both work… So how do you explain to the higher-ups that the cleaner code is better long term…
And long term, they say, AI will get better and faster at fixing the bugs it makes today, so just keep using AI…
I just let the AI write all my code and spend all my time looking for the bugs it creates.
It’s not as fulfilling mentally, since I don’t think anymore, but sure, it goes fast and lots of code gets produced.
Also, it’s the new normal now, and I think humans can only be faster if they already know the domain very well.
Most of us are working with APIs we don’t know every method of by heart, so writing code manually is quite slow, even when it’s correct.
Which means this entire profession will be about fixing AI bugs by telling it what’s wrong and letting it retry until it works.
Anyone have opinions?
Yeah, I know. I think junior developers have a hard time today. Not sure how you learn without writing the code yourself.