I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.
I feel like I am living in a nightmare.


Old enough to remember how people made these same arguments about writing in anything but assembly, using garbage collection, and so on. Technology moves on, and every time there’s a new way of doing things, the people who invested time in the old way end up upset. This is just moral panic.
It’s also very clear that you haven’t used these tools yourself, and you’re just making up a straw man workflow that is divorced from reality.
Meanwhile, your bonus point has nothing to do with technology itself. You’re complaining about how capitalism works.
All the technologies you listed behave deterministically, or at least predictably enough that we generally don’t have to worry about surprises from that abstraction layer. Technology doesn’t just “move on”: practitioners have to find it genuinely practical beyond the next project that satisfies the shareholders.
Again, you’re discussing tools you haven’t actually used, and you clearly have no clue how they work. If you had, you would realize that agents can work against tests, which act as a contract they fulfill. I use these tools on a daily basis, and I have no idea what these surprises you’re talking about are. As a practitioner, I find these things plenty practical.
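To make the “tests as a contract” point concrete, here is a minimal sketch of that workflow. Everything in it is hypothetical (the `normalize_sku` function and its test cases are invented for illustration): the human writes the tests first, the agent writes and iterates on the implementation until the contract passes.

```python
# Hypothetical sketch of an agent working against tests as a contract.

# --- the contract: written by the human before the agent starts ---
def test_normalize_sku():
    assert normalize_sku(" ab-123 ") == "AB-123"
    assert normalize_sku("ab123") == "AB-123"

# --- the implementation: what the agent produces and refines ---
def normalize_sku(raw: str) -> str:
    """Uppercase, trim, and insert a dash between letters and digits."""
    s = raw.strip().upper()
    if "-" not in s:
        # insert the dash at the first digit
        for i, ch in enumerate(s):
            if ch.isdigit():
                s = s[:i] + "-" + s[i:]
                break
    return s

test_normalize_sku()  # contract satisfied, so the agent's output is accepted
```

The key property is that the human never inspects the model’s reasoning, only whether the contract holds, which is why the nondeterminism of the generation step matters less than the parent comment implies.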
I’ve literally integrated LLMs into a materials optimization routine at Apple. It’s dangerous to assume what strangers do and do not know.