Hiya,
Recently upgraded my server to an i5-12400 CPU, and I've been wanting to push my server a bit. I've been looking to host my own LLM tasks and workloads, such as building pipelines to scan open-source projects for vulnerabilities and insecure code, to mention one of the things I want to start doing. Inspiration for this started after reading about the recent scans of the Curl project.
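For anyone curious what such a scan pipeline could look like, here's a rough sketch of sending a single source file to a locally hosted model through Ollama's `/api/generate` endpoint. The model name, prompt wording, and review logic are all placeholder assumptions on my part, not a recommended setup:

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust if your server differs.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_review_request(model: str, path: str, source: str) -> dict:
    """Build the JSON payload asking the model to review one file."""
    prompt = (
        "You are reviewing code for security issues.\n"
        f"File: {path}\n"
        "List any potentially insecure patterns, or reply 'no findings'.\n\n"
        f"{source}"
    )
    # stream=False returns the full response in one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def review_file(model: str, path: str) -> str:
    """POST the review request and return the model's response text."""
    with open(path, encoding="utf-8", errors="replace") as f:
        payload = build_review_request(model, path, f.read())
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

From there it's just a loop over the repo's files, feeding each one through `review_file` and collecting the findings somewhere for me to read before updating.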
Sidenote: I have no intention of swamping devs with AI bug reports; I simply want to scan projects that I personally use so I'm aware of their current state and future changes before I blindly update apps I host.
What budget-friendly GPU should I be looking for? AFAIK VRAM is quite important: the higher, the better. What other features do I need to be on the lookout for?
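To put my own question in perspective, here's the back-of-envelope math I've been using for VRAM: weights take roughly (parameter count × bytes per parameter), and I pad by ~20% for KV cache and runtime overhead. The overhead fraction is my assumption, not a measurement:

```python
# Rough VRAM estimate for running a local LLM. Rule of thumb only:
# the 20% margin for KV cache / activations is an assumption.

BYTES_PER_PARAM = {
    "fp16": 2.0,   # full half-precision weights
    "q8":   1.0,   # 8-bit quantization
    "q4":   0.5,   # 4-bit quantization (common on consumer GPUs)
}

def estimate_vram_gb(params_billions: float, quant: str,
                     overhead: float = 0.2) -> float:
    """Weights-only estimate in GB, padded by a fixed overhead fraction."""
    weights_gb = params_billions * BYTES_PER_PARAM[quant]
    return round(weights_gb * (1 + overhead), 1)

# e.g. a 13B model at 4-bit comes out around 7.8 GB, so it squeezes
# into an 8 GB card; the same model at fp16 is around 31 GB.
```

Which is exactly why the VRAM ceiling on a card matters more to me than raw compute.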


The budget-friendly AI GPUs are on the shelf right next to the unicorn pen.
Ooh, do they have any magic beans? I'm looking to trade a cow for some.
<giggle> I've self-hosted a few of the bite-sized LLMs. The thing that's keeping me from having a full-blown, self-hosted AI platform is that my little GeForce 1650 just doesn't have the ass to really do it up right. If I'm going to consult with AI, I want the answers within 3 or 4 minutes, not hours. LOL
Quite so. The cheapest card that I'd put any kind of real AI workload on is the 16GB Radeon RX 9060 XT. That's not what I would call budget friendly, which is why I consider a budget-friendly AI GPU to be a mythical beast.
It would be super cool, though, to have a server dedicated to a totally on-premise AI with no connectivity to external AI services. I'm just not sure whether I can justify several thousand dollars of equipment, because if I did, true to my nature, I'd want to go all in.