No, thank you. Sorry, never.
Not only that, but the sheer probability of mistakes is deafening. The last time I used an LLM was in 2023, when someone recommended it for a paperwork task, and I got a literal headache within ten minutes… Since then I have resolved never to use that sorrow for anything beyond black-box pentesting or generating experimental, unverified data of the kind you might find in medicine or in isolated military solutions.
That deafening feeling that every single bit of output from an LLM, a void machine, may contain a mistake no soul is accountable for… A generated piece of someone's work you simply cannot verify, since neither a source nor a human is available to ask… How would you trace the rationale that produced the output shown?
Faster? Is that so… Doesn't verifying every output take even more time, to test it and consider it stable, to prove it correct, to stay accountable for the knowledge and actions you perform as a developer, artist, researcher… human?
Your mind is to be trained to do research and to remember, not to depend on someone's service to the point of predominance or outright replacement.
Meanwhile, the effort, passion, creativity, empathy, and love you carry are what sustain you in the long term.
You may not care now, but you do you. It's your mind and memory you're developing.



It’s rather sad when such extensions don’t link to their source code in their descriptions.
Meanwhile, Google Chrome extensions update automatically.
Sorry, I won’t install it: I have no trust in its source, and I value my time too much to re-audit it on every update.