Just a guy shilling for gun ownership, tech privacy, and trans rights.

I’m open for chats on mastodon https://hachyderm.io/

my blog: thinkstoomuch.net

My email: nags@thinkstoomuch.net

Always looking for penpals!

  • 9 Posts
  • 170 Comments
Joined 2 years ago
Cake day: December 21st, 2023


  • A few months ago now, Arizona? Arkansas maybe? Some state legalized “AI powered” home schooling systems. But it was mostly clickbait: the system is less like ChatGPT and more like the YouTube algorithm’s machine learning. It takes into account the stuff that students do well at and lets them advance beyond “grade level” limitations, while also learning how to present problem areas in ways the student responds to.

    I had asked my home-schooled AI researcher buddy his thoughts, and he obviously liked it. I like the idea too, but my hang-up was on socializing kids. That, to me, is the more important role of schools.

    I wouldn’t trust an LLM in this setup, though. A human tutor would still need to step in for questions outside of a FAQ, IMO. I love working with an LLM by giving it all the manuals, guides, and config files I used, then asking where I went wrong, because it can usually give me a good enough interpretation to see where to go next. But that’s just a rubber duck. My mind and skills are developed. A kid learning math for the first time can’t do that.
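    The mastery-based pacing described above can be sketched roughly like this. All names and the 0.8 threshold are hypothetical; a real system would use much richer student models.

    ```python
    # Hypothetical sketch of mastery-based pacing: topics below a mastery
    # threshold get re-presented, and a student who has mastered everything
    # advances past "grade level" instead of waiting.

    def next_topic(mastery: dict[str, float], threshold: float = 0.8) -> str:
        """Pick the weakest topic; topics at or above the threshold are
        considered learned and skipped."""
        remaining = {t, := None for t in ()} if False else {
            t: s for t, s in mastery.items() if s < threshold
        }
        if not remaining:
            return "advance to next level"
        return min(remaining, key=remaining.get)

    scores = {"fractions": 0.9, "long division": 0.4, "decimals": 0.7}
    print(next_topic(scores))  # "long division": the lowest score below 0.8
    ```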












  • From what I understand it’s not as fast as a consumer Nvidia card, but close.

    And you can have much more “VRAM” because they do unified memory. I think the max is 75% of total system memory goes to the GPU. So a top-spec Mac mini M4 Pro with 48GB of RAM would have 32GB dedicated to GPU/NPU tasks for $2000.

    Compare that to JUST a 5090 with 32GB for $2000 MSRP and it’s pretty compelling.

    $200 more and it’s the 64GB model, with two 4090s’ worth of VRAM.

    It’s certainly better than the AMD AI experience, and it’s the best price for getting into AI stuff, or so say nerds with more money and experience than me.
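    For a sense of why the memory matters more than raw speed here, a common back-of-envelope estimate is that model weights need roughly (parameters × bytes per parameter); the function and figures below are an illustrative sketch, not exact footprints.

    ```python
    # Rough VRAM needed just to hold a model's weights:
    # ~1 GB per billion parameters per byte of precision.

    def model_vram_gb(params_billion: float, bytes_per_param: float) -> float:
        return params_billion * bytes_per_param

    # A 70B model at 4-bit quantization (~0.5 bytes/param):
    print(model_vram_gb(70, 0.5))  # 35.0 GB: fits in 48GB unified memory,
                                   # but not in a single 32GB card
    ```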






  • The Lenovo ThinkCentre M715q units were $400 total after upgrades. I fortunately had 3 32GB kits of RAM from my work’s e-waste bin, but if I had to add those it would probably be $550-ish.

    • Rack (52pi): $120
    • 2 extra 10in shelves: $25 each
    • Pi cluster rack: $50 (shit, I thought it was $20. Not worth)
    • Patch panel: $20
    • UPS: $80
    • Switch: $80

    So in total I spent $800 on this set up

    To fully replicate from scratch you would need to spend $160 on Raspberry Pis and probably $20 on cables

    So $1000, theoretically
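    Summing the parts list above (prices as quoted, RAM excluded since it was free from the e-waste bin):

    ```python
    # The build above, itemized at the quoted prices.
    parts = {
        "3x ThinkCentre M715q (after upgrades)": 400,
        "10in rack (52pi)": 120,
        "2x extra 10in shelves": 25 * 2,
        "Pi cluster rack": 50,
        "patch panel": 20,
        "UPS": 80,
        "switch": 80,
    }
    spent = sum(parts.values())
    print(spent)  # 800

    # From scratch, add Raspberry Pis ($160) and cables ($20):
    print(spent + 160 + 20)  # 980, i.e. ~$1000
    ```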




  • Ollama and all that runs on it; it’s just the firewall rules and opening it up to my network that are the issue.

    I cannot get ufw, iptables, or anything like that running on it. So I usually just ssh into the PC and do a CLI only interaction. Which is mostly fine.

    I want to use OpenWebUI so I can feed it notes and books as context, but I need the API, which isn’t open on my network.
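    For debugging this kind of thing, note that Ollama binds to 127.0.0.1:11434 by default, so besides firewall rules it usually also needs OLLAMA_HOST set to 0.0.0.0 to listen on the LAN. A minimal reachability check from another machine might look like this; the LAN IP is a placeholder.

    ```python
    # Check whether the Ollama API port answers from another machine.
    import socket

    def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # e.g. port_open("192.168.1.50", 11434) from a laptop on the same LAN.
    # False usually means the firewall, or Ollama bound only to 127.0.0.1.
    ```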