So your comment and another one I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server’s Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.
I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn’t even thought to try, and it worked.
But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.
8B is about the largest parameter count I can fit on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?
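
(A minimal sketch of talking to that kind of setup, assuming the container exposes an Ollama-style REST API on the default port 11434 and the deepseek-r1:8b tag is already pulled — adjust the model name and port for whatever server is actually running:)

    # Query a local Ollama-style server over its REST API.
    # Assumes the default port 11434 and that deepseek-r1:8b is already pulled.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "deepseek-r1:8b",
            "prompt": "Write a small example Rust program that parses JSON.",
            "stream": False,  # return a single JSON object instead of a stream
        },
        timeout=600,  # small GPUs can take a while
    )
    resp.raise_for_status()
    print(resp.json()["response"])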
You are playing with ancient stuff that wasn’t even good at release. Try these:
A 4b model performing like a 30b model: https://huggingface.co/Nanbeige/Nanbeige4.1-3B
Google’s open-source version of Gemini: https://huggingface.co/google/gemma-3-4b-it
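
(If you’d rather try one of these outside a container, a rough transformers sketch looks like the following. It assumes the model behaves as a standard causal LM with a chat template; check each model card for the exact loading class and flags — gemma-3-4b-it in particular is multimodal, so its card shows a slightly different path.)

    # Rough sketch: run a small instruct model with Hugging Face transformers.
    # Assumes a standard causal LM with a chat template; check the model card
    # for the exact class and flags (gemma-3-4b-it is multimodal, for example).
    # On an Intel Arc card you'd also need PyTorch's XPU/IPEX backend, not CUDA.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Nanbeige/Nanbeige4.1-3B"  # or another small instruct model

    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,  # half precision to fit in 8 GB of VRAM
        device_map="auto",          # let accelerate place layers automatically
        trust_remote_code=True,
    )

    messages = [{"role": "user", "content": "Write a short example Rust function."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))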
Well, not off to a great start.
To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.
Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to do the conversion, but I’m wondering if there’s a more direct way.
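
(For what it’s worth, the usual route is llama.cpp’s convert_hf_to_gguf.py script — quite possibly the GitHub project in question — and for most popular models someone has usually already uploaded ready-made GGUF quantizations to Hugging Face, which is the more direct way. A rough sketch of the conversion, assuming a llama.cpp checkout with its Python requirements installed; flag names can drift between versions, so check --help:)

    # Sketch: download a HF model and convert it to GGUF with llama.cpp's
    # converter. Assumes https://github.com/ggerganov/llama.cpp is cloned next
    # to this script and its Python requirements are installed; check
    # `python convert_hf_to_gguf.py --help` since flags can change.
    import subprocess
    from huggingface_hub import snapshot_download

    model_dir = snapshot_download("Nanbeige/Nanbeige4.1-3B")

    subprocess.run(
        [
            "python", "llama.cpp/convert_hf_to_gguf.py",
            model_dir,
            "--outfile", "nanbeige-3b-q8_0.gguf",
            "--outtype", "q8_0",  # 8-bit; use f16 if you want to quantize later
        ],
        check=True,
    )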
It comes down to the amount of VRAM / unified RAM you have. There is no magic to make an 8B model perform like the top-tier subscription-based LLMs (likely in the 500B+ range; I wouldn’t be surprised if it’s trillions).
If you can get to 32B / 80B models, that’s where the magic starts to happen.
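
(A back-of-the-envelope way to see why: weight memory is roughly parameter count times bytes per weight, so quantization only buys so much before a model simply doesn’t fit. A tiny sketch, ignoring KV cache and runtime overhead, so treat the numbers as a floor:)

    # Back-of-the-envelope weight memory: params * bytes per weight.
    # Ignores KV cache, activations and runtime overhead, so it's a lower bound.
    def approx_weight_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * 1e9 * (bits_per_weight / 8) / 1e9

    for params in (8, 32, 80, 500):
        print(f"{params:>4}B @ 4-bit ~ {approx_weight_gb(params, 4):6.1f} GB")
    # 8B ~ 4 GB (fits an 8 GB card), 32B ~ 16 GB, 80B ~ 40 GB, 500B ~ 250 GB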
Honestly, you pretty much don’t. LLMs are insanely expensive to run, as most of the model improvements come from simply growing the model. It’s not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you’re still going to be behind the purpose-built GPUs with 80 GB of VRAM.
Maybe it could work for some use cases, but I’d rather just not use AI.