I initially subscribed to ChatGPT because I got a job as the only DevOps guy at an organization despite having very limited DevOps experience, and ChatGPT essentially served as my mentor. I justified keeping it for a long time because it helped my productivity: bugs that I had no idea where to start with could be worked through over a few hours (or days) of back-and-forth.
As I climbed the learning curve, ChatGPT became proportionally less helpful, but I kept it because it’s still kind of useful for rubber-duck debugging. I did find Copilot pretty handy for writing docstrings (especially for keeping formatting conventions consistent), but the actual code completions were more annoying than helpful.
When all was said and done, I cancelled my ChatGPT and Copilot subscriptions because I’m taking on a mortgage tomorrow and I just can’t afford them. I have Ollama running on my homelab server, but I only have enough VRAM for a 7B-parameter model, and it kind of sucks, but whatever. At the end of the day, I like using my brain.
UPDATE (because I just thought of it after posting): I do think that “AI-as-a-mentor” is a good use case for AI. It really helped me cut my teeth on the basics of Linux. I often find it’s easier to learn when you have a working example of code or config that you can dissect than to bash your head against the wall just trying to figure out how to get something to run in the first place.
For my birthday challenge this year, I’m learning how to read and write Devanagari as a surprise for my Indian grandparents. I asked my local Qwen model to generate some worksheets for me to practice with, and it totally flopped: it gave away all the answers. I do think ChatGPT would have done better, but maybe I could have gotten good enough results from a bigger local model with a better GPU.
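For anyone curious, here’s a minimal sketch of the kind of request I mean, using the ollama Python package against a local 7B model. The model tag and prompt are just illustrative (use whatever tag you’ve actually pulled), and depending on your ollama version the response may be a dict or a typed object, so adjust the access accordingly.

```python
import ollama

# Ask the local model for a practice worksheet, explicitly telling it
# to keep the answers out of the worksheet body.
response = ollama.chat(
    model="qwen2.5:7b",  # illustrative tag; substitute the model you have pulled
    messages=[
        {
            "role": "user",
            "content": (
                "Make a short Devanagari reading worksheet: 10 words written in "
                "Devanagari for me to transliterate. Do NOT show the answers "
                "next to the words; put them in a separate answer key at the end."
            ),
        }
    ],
)

# Dict-style access works on older versions; newer versions also allow
# response.message.content.
print(response["message"]["content"])
```

Even with an explicit “don’t include the answers” instruction, the 7B model kept mixing the key into the worksheet, which is basically the flop I described above.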