I’m a software developer in Germany and work for a small company.
I’ve always liked the job, but recently I’ve been getting annoyed by the ideas of certain people…
My boss (who has some level of dev experience) uses “vibe coding” (as far as I know, this means less human review and letting an LLM produce huge code changes in a very short time) as a positive term, like “We could probably vibe-code this feature easily”.
Someone from management (also with some software development experience) runs internal workshops about how to use some self-built open-code thing with “memory” and advanced thinking strategies + planning + whatever, which is connected to many MCP servers and a vector DB, has “skills”, a higher token limit, etc. Surprisingly, the people attending the workshops (many of them developers, but not only) usually come out convinced by it, saying that it improved their efficiency a lot, that they will keep using it, and that it changed their perspective.
Our internal Slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text at me and n other people. Now, n people need to extract the relevant information so that you can “save time” by not writing the text yourself. Nice!!!
I see Microsoft announcing that 30% of its code is written by AI, which in my opinion is advertising and an attempt to pressure companies into subscribing to OpenAI. Now, my company doesn’t even seem to be targeting that 30%, but 100%???
To be clear: I see some potential for AI in software development. Auto-completions, locating a bug in a code base, writing prototypes, etc. “Copilot” is actually a good word, because it describes the person next to the pilot. I don’t think the technology is ready for what they are attempting (being the pilot). I’ve seen the studies questioning how large the benefit of AI actually is.
For sure, one could say “You are just a developer afraid of losing their job / losing what they like to do”, and maybe that’s partially true… AI has brought a lot of change. But I also don’t want to deal with a code base that was mainly written by non-humans in case the non-humans fail to fix the problem…
My current strategy is “I use AI how and when ->I<- think that it’s useful”, but I’m not sure how much longer that will work…
Similar experiences here? What do you suggest? (And no, I’m currently not planning to leave. Not bad enough yet…).
Remind them that copyright cannot be enforced on anything AI-written.
I try to push on the maintenance aspect. Developing something new is easy, and my company does do that, but the group I’m in is primarily doing maintenance on existing software. Bug fixes, feature additions, etc. If we generate applications entirely using LLMs, none of us will be experts on the applications we push to the customers.
They push corpo buzzwords like “responsibility”, but who takes responsibility when no one has done the work to begin with? It feels like a liability nightmare, and the idea of sitting there cleaning slopcode just isn’t very appealing to me.
AI is inevitable in many fields, but as usual people expect way too much from it. It’s a tool, not a magic wand. I agree with you that it is useful and even powerful when used by somebody who understands when it is useful. But it is dangerous when wielded by somebody who doesn’t. As others have said, let your boss vibe-code themselves into a corner and leave them to vibe-code themselves out of it. They will try to deflect and claim it is your job to solve it, though, so you’d better come up with a strategy to handle that. Be sure to have more people on your side in this venture.
Secondly, set up a chatbot with instructions to distill your boss’s walls of text into a bullet list of the essentials. If they make their life easier with an LLM, so can you. If there are misunderstandings, it’s either all just ghosts in the machine or a failure on your boss’s part to communicate clearly.
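For what it’s worth, the summarizer doesn’t need to be anything fancy. Here’s a minimal sketch, assuming access to an OpenAI-compatible API via the official `openai` Python package (the model name and filenames are just placeholders):

```python
import sys
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Extract only the essential points of the message as a short "
    "bullet list. No preamble, no filler."
)

def summarize(wall_of_text: str) -> str:
    """Boil a long post down to its essential points."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": wall_of_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Usage: python summarize.py < slack_post.txt
    print(summarize(sys.stdin.read()))
```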
For centuries, we spent less effort consuming content than it took to produce it.
Good teammates and content producers understand that their content needs to have an intrinsic value and benefits beyond the mere existence of content.
If you make your team 10x slower by sending ten pages of LLM-generated content instead of a one-liner, you are actively hindering the team.
Efficiency is not just the production of content (slop or not); it’s about the overall system. That’s why corpo speak has always been such a waste: too many words to say nothing in a mandatory all-hands. Now the dial is turned up to 11, with the same time waste everywhere.
> Our internal Slack channels contain more and more AI-written posts, which makes me think: Thank you for throwing this wall of text at me and n other people. Now, n people need to extract the relevant information so that you can “save time” by not writing the text yourself. Nice!!!
I think this is one of your best bets as far as getting a real policy change. Bring it up, and mention that posts like that may take less time to “write”, but that they’re almost always obnoxiously verbose, contain paragraphs that say essentially nothing, and take far longer to read than a hand-typed message would. The argument that one person is saving time at the expense of dozens (?) of people losing time may carry a lot of weight, especially if these bosses are in the same Slack channels and read them.
Past that I’d just let things go as they are, and take every opportunity to point out when AI made a problem, or made a problem more difficult to solve (while downplaying human-created problems).
I had a manager who pushed AI a lot. When he left, all the pressure to use it seemed to die down. So maybe it’s just a couple of people creating this environment and if you can get away with avoiding them it’s better.
The problem we saw with AI code is that often no human has actually looked at it. During reviews you won’t check every line, and you’ll have to trust much of the code that seems to do obvious things. But that assumes it was written by a human you also trust. When that human hasn’t reviewed the code either, you end up with code no one in the company has seen (and may not even know how it works).
Your entire comment echoes my thoughts. Things aren’t exactly improved by the idea of adding LLMs to the review process either. Gods.
I am having similar experiences, but it is not as bad as you are describing it yet. We have a new member on the team who is not a developer himself, but he has been given the task of making our way of working more professional (we are mainly scientists and not primarily software engineers, so that’s a good thing).
His first task was to create programming guidelines and standards. He created 8 pages of LLM-generated text and nonsense example code. He honestly put a lot of effort into it, but of course a lot of things in it are wrong. The worst thing, though, is the wall of text. You are nailing it - it is now my task to go through this whole thing and extract the relevant information. It sucks. And I am afraid that soon I will need to review more and more low-quality MRs generated by people who have little experience in programming.
We had a dev drop a combined total of 8,300 lines of readme files into the code base over a weekend. I want to nuke all of them; my boss suggests reviewing and updating them.
Fixing vibe code is a specialty that contractors will be able to charge a premium for here pretty soon.
Enjoy the ride and look for a new job, so you’re not surprised when the business dies due to declining quality or a customer suing for damages.
The best case for AI is that it can heroically solve problems that shouldn’t exist in the first place in a well-managed company.
It can structure unstructured data that should have been structured. It can do simple coding where absolutely no coder could be reached. It can summarize red tape and produce red tape to meet compliance requirements that were created for the sake of red tape alone.
If the work is important, it shouldn’t be done by AI. If it’s busy work, it shouldn’t exist in the first place. No one would need AI to summarize anything if everything were written clearly.
And that’s the best-case scenario, before you count hallucinations, errors, edge cases, and company-specific context - the things that make a task take more time once the “oh, it produced something and it looks good at a glance” cloud falls down.
If your company is getting a lot of value from AI, then your company is trash.
Let him “vibe-code” himself into a problem, then tell him you can’t fix the mess he done did.
I haven’t had a similar experience yet, but maybe some of your colleagues feel the same way? You could write a letter stating your concerns, let anyone who agrees sign it, and then send it to your manager. Also, I’d like to add that under German law, AI output cannot be copyrighted. You can only claim co-ownership or something. Maybe that could be interesting to your managers?
My team leads too. They tried to sell a whole application that they vibe-coded, and the marketing strategy was “fully AI generated!”
If I saw fully AI-generated software, I would cower in fear (for my computer’s safety) and run away.
Is this software free for everyone to use? Would that mean no code at all was made by the company? It’s all stolen from everywhere.
Endure the next year or so, until the bubble pops and there is a massive need for senior devs to fix the slop machine.
A small company with leadership this cooked and immature may not have the resources to recover long term from the damage, even if the bubble pops on them in a year.
I’d start looking for alternatives now, before it becomes urgent.
Let me give you a little parable.
There once was a juggler who could juggle three balls all day. Then someone from the audience threw in a fourth ball, and he kept going. Someone threw a glass, then a flaming torch, and he kept going, occasionally burning his hands. Seeing he could do it, someone threw in a machete, and the juggler almost never cut his fingers keeping all those things in the air. A chainsaw was added, then an open bottle of bleach, and occasionally the juggler got his hair caught or spilled some bleach, but he kept going. People kept adding more and more things. Eventually it was too much, and it all came crashing down, killing the juggler and several members of the audience and destroying all the objects in the air.
On the next street corner, a juggler stands with three balls. Someone from the audience throws in a fourth. He steps aside and lets it fall to the floor, happily juggling three balls.
I fail to see the relevance. OP is not talking about burn-out…
The point was that the more you keep compensating for other people’s dumb moves, the greater the damage when it all inevitably comes crashing down.
In other words, just do what they ask, get them to sign off on it, and watch it crash and burn into an unmaintainable, insecure mess.
Reading a wall of text to extract a simple concept which turns out to be wrong seems very appropriate for this thread, just perhaps not in the way they intended.
Agentic use of AI didn’t really work well enough until December of last year; the models and tools just improve that fast. Codex/Claude (or opencode with the same top models) is what you’d need for it.
You still need to plan and define clear specifications for the model. Spend 80% of your time planning and breaking the job down into steps, and it’ll pretty much run itself from there.
Of course, this works best for common frameworks and solved problems or logical problems. React/Node developers can easily 10x their output, and get it done better than they would by hand.
I’m working more with empirical development, so most of my time goes into studying environments and adapting to them. I get the most benefit out of having agents read through logs and figure out what happened. It gets it right maybe half the time, but it’s a good rubber ducky even when it goes wrong. I’d say it 2-3xes my output. But I can probably improve my usage, too.
But yeah, code review is where it hurts. If it’s slop, it just takes so many rounds to get it right. Even when it’s good, it’s just so much code to review.