I’m a software developer in Germany and work for a small company.

I’ve always liked the job, but recently I’m getting annoyed by the ideas of certain people…

My boss (who has some level of dev experience) uses “vibe coding” (as far as I know, this means letting an LLM produce huge code changes in very little time, with less human review) as a positive term, as in “We could probably vibe-code this feature easily”.

Someone from management (also with some software development experience) runs internal workshops about a self-built open-code thing with “memory” and advanced thinking strategies + planning + whatever, that is connected to many MCP servers and a vector DB, has “skills”, a higher token limit, etc. Surprisingly, the people attending the workshops (many of them developers, but not only) usually come away convinced, writing that it improved their efficiency a lot, that they will keep using it, and that it changed their perspective.

Our internal Slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text at me and n other people. Now n people have to extract the relevant information so that you can “save time” by not writing the text yourself. Nice!!!

I see Microsoft announcing that 30% of their code is written by AI, which in my opinion is advertisement and an attempt to pressure companies into subscribing to OpenAI. Now, my company seems to target not just that, but 100%???

To be clear: I see some potential for AI in software development. Auto-completion, locating a bug in a code base, writing prototypes, etc. “Copilot” is actually a good word, because it describes the person next to the pilot. I don’t think the technology is ready for what they are attempting (being the pilot). I’ve seen the studies questioning how large the benefit of AI actually is.

Sure, one could say “You are just a developer afraid of losing their job / losing what they like to do”, and maybe that’s partially true… AI has brought a lot of change. But I also don’t want to deal with a code base that was mainly written by non-humans, in case the non-humans fail to fix a problem…

My current strategy is “I use AI how and when ->I<- think it’s useful”, but I’m not sure how much longer that will work…

Similar experiences here? What do you suggest? (And no, I’m currently not planning to leave. Not bad enough yet…).

  • PetteriPano@lemmy.world · 20 hours ago

    Agentic use of AI didn’t really work well enough until December of last year. The models and tools just improve that fast. Codex/Claude (or opencode with the same top models) is what you’d need for it.

    You still need to plan and define clear specifications for the model. Spend 80% of your time planning and breaking the job down into steps, and it’ll be pretty self-going from there.
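    A sketch of what such a breakdown might look like (the feature, file names, and headings here are all made up, not a prescribed format — the point is just that each step is small and checkable):

    ```markdown
    # Task: add CSV export to the reports page (hypothetical example)

    ## Constraints
    - Reuse the existing report query; no schema changes.
    - Stream the file; don't buffer whole reports in memory.

    ## Steps
    1. Add an export endpoint that returns `text/csv`.
    2. Map report rows to CSV columns; escape quotes and commas.
    3. Add a download button to the report view.
    4. Tests: empty report, report with commas/quotes in fields.

    ## Out of scope
    - Excel export, scheduling, permission changes.
    ```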

    Of course, this works best for common frameworks and solved problems or logical problems. React/node developers can easily 10x their output, and get it done better than they would by hand.

    I’m working more with empirical development, so most of my time goes into studying environments and adapting to them. I get the most benefit out of having agents read through logs and figure out what happened. It gets it right maybe half the time, but it’s a good rubber ducky even when it goes wrong. I’d say it 2-3xes my output. But I can probably improve my usage, too.

    But yeah, code review is where it hurts. If it’s slop, it just takes so many rounds to get it right. Even when it’s good, it’s just so much code to review.