Off with their heads! Go self-hosted, go local… toss the rest in the trash can before this crap gets a foothold and fully enshittifies.
Last I checked it cost ~$6000 to run a high-end LLM at terrible speeds, and that was before the RAM price hike. And at the rate things are changing, it might be obsolete in a few years. And I don’t have that much money either.
I’m going to stick with the free OpenCode Zen models for now, and maybe switch to OpenCode Black or Synthetic or whatever and use the open-weight models there when the free ones stop being available.
LLMs are already shit. Going local is still burning the world just to run a glorified text production machine
Having just finished getting an entire front end built for my website, I disagree. A few years ago I would have offshored this job to some third-world country devs. Now AI can do the same thing, for cents, without having to wait a few days for the initial results and another day or two for each revision needed.
The fact that you see nothing wrong with anything you said really speaks volumes about the inhumanity inherent in using “AI”.
Please enlighten me. I am working on systems solving real-world issues, and now I can ship my solutions faster, with lower costs. Sounds like a win-win for everyone involved except for the offshore employees that have to look for new gigs now
Edit: I would actually rather read a reply than just see you downvoting. The point is, what you call a “glorified text generating machine”, has actual use cases
Looks like others have come along and made my point for me while I slept. Except for calling out the dehumanizing language against the developers. They missed that one.
They weren’t my full-time employees, just one-time gigs. I’ve worked with tons of freelancers over the years; I don’t have any special relationship with them. It was just a matter of whoever offered the lowest price for the gig each time.
so… that made it okay?
Yeah, not seeing how their reasoning justifies anything. “I didn’t know them, they’re just a number” is exactly my point. That and calling them “third world devs” is Fox News “they took our jobs” style language
you missed the plot a little.
It’s not that it doesn’t have use cases. It’s that we’re burning the world down with it.
How much more water went down the drain cooling the requests you made? How about the electricity going to AI data centers instead of to local consumers?
All the computer component shortages…
And that’s all before the fact that you admitted you would have hired a human and put food on their table, instead of helping a corporate giant buy another mega yacht.
Regarding the electricity not going to local customers: it’s not my fault that your country does not have appropriate regulations. None of the data centers are located where I live anyway.
Regarding hiring a human: I 100% would hire a human if there were no AI, you’re right, I’m not trying to hide it. Sucks for them, I guess, but I don’t see a reason why I should keep using their services if I can get a cheaper and arguably better alternative now. I’m trying to make money, not run a charity supporting the development of third-world countries with authoritarian regimes.
However, I am pretty sure the companies I’m paying are currently operating at a loss on my <$3/month and free coding plans. They want to grow their customer base, and I’ll just switch once they start wanting to turn a profit, like the Chinese ZAI just did this week, hiking their prices by over 3x. With my contribution they’re not buying another mega yacht; if anything, they’re getting further away from one.
“well it doesn’t effect me directly other than my bottom line, so sucks for everyone else”
if all you have to say is that, have the day you deserve.
What do you expect me to do? Spend more money when I can get better results with less money? I don’t understand your point, what am I supposed to be doing, according to you?
I don’t know if it’s your fault, honestly. It’s the system that makes you want to offshore your work to developing countries instead of hiring local employees. I get it, it’s cheaper. But when even independent developers start doing this, we’ve reached post-late-stage capitalism.
Link to the website please, let’s see this glorious beast.
It’s behind a login page, so I’m afraid you wouldn’t see much. Also, it was never supposed to be glorious (it wasn’t before the LLMs either); having some form of UI is just a necessity. If it were a landing page or something where the looks matter, I would be hiring actual designers, not sticking with the AI.
Yeah, and I wonder what those real-world products solving real-world issues he talks about actually are.
Yeah. It’s not hard to say a sentence about the problem space. Heck, in good faith I’ll say what I’ve been working on to start. I’m currently developing tools to help small communities teach their native language to younger generations as existing programs have stopped support for them.
Edit: I’m mocking Lodespawn VieuxQueb tbc
So your comment and another one I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server’s Intel A750 GPU with 8 GB of VRAM. Not exactly top of the line, but not bad.
I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I asked it to write some example Rust code today, which I hadn’t even thought to try, and it worked.
But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.
8B is the highest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?
You are playing with ancient stuff that wasn’t even good at release. Try these:
A 3B model that performs like a 30B one: https://huggingface.co/Nanbeige/Nanbeige4.1-3B
Google’s open-weight counterpart to Gemini: https://huggingface.co/google/gemma-3-4b-it
Well, not off to a great start.
To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.
Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert them, but I’m wondering if there’s a more direct way.
It comes down to the amount of VRAM / unified RAM you have. There is no magic that makes an 8B model perform like the top-tier subscription LLMs (likely in the 500B+ range; I wouldn’t be surprised if it’s trillions).
If you can get to 32B / 80B models, that’s where the magic starts to happen.
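The “it comes down to VRAM” point can be put into rough numbers: the weights dominate, at (parameters × bits per weight) / 8 bytes, plus some headroom for the KV cache and activations. A minimal back-of-the-envelope sketch (the 20% overhead factor is my own ballpark assumption, not a measured figure):

```python
def approx_vram_gb(params_billions, bits_per_weight, overhead=1.2):
    """Very rough VRAM estimate: weight storage plus ~20% for KV cache/activations."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# The 8B model from above, 4-bit quantized: just about fits an 8 GB card.
print(approx_vram_gb(8, 4))    # ≈ 4.8 GB
# 32B at 4-bit wants a 24 GB card; 80B is beyond any single consumer GPU.
print(approx_vram_gb(32, 4))   # ≈ 19.2 GB
print(approx_vram_gb(80, 4))   # ≈ 48 GB
```

By this estimate, the 32B / 80B tiers mentioned above only start to be reachable with a 24 GB card or a large unified-memory machine, which matches why an 8 GB GPU tops out around 8B models.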
Honestly, you pretty much don’t. LLMs are insanely expensive to run, and most model improvements come from simply growing the model. It’s not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you’re going to be behind the purpose-made GPUs with 80 GB of VRAM.
Maybe it could work for some use cases, but I’d rather just not use AI.
Going local is taxing on hardware that is extremely expensive to replace. Hell, it could soon become almost impossible to replace. I genuinely don’t recommend it.
Even if you HAVE to use LLMs for some reason, there are free alternatives right now that let Silicon Valley bleed money, and they’re quickly running out of it.
Cancelling any paid subscription probably hurts them more than anything else.
It’s not really taxing on your hardware unless you load and unload huge models all day or if your cooling is insufficient.
If an LLM is tied to your productivity, going local is about owning and controlling the means of production.
You aren’t supposed to run it on the machine you work on anyway; run a server and send requests to it.
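The “run a server and send requests” setup is simpler than it sounds: both llama.cpp’s `llama-server` and Ollama expose an OpenAI-compatible `/v1/chat/completions` route. A sketch using only the standard library; the hostname, port, and model name are placeholder assumptions for your own setup:

```python
import json
from urllib import request

# Placeholder: point this at wherever llama-server or Ollama is listening.
SERVER_URL = "http://homeserver:8080/v1/chat/completions"

def build_chat_request(prompt, model="deepseek-r1:8b", temperature=0.7):
    """JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask(prompt):
    """POST the prompt to the local server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = request.Request(SERVER_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # blocks until the server replies
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the route is OpenAI-compatible, any existing client or editor plugin that accepts a custom base URL can talk to the same server.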
I would, if I found even a remotely good use case for LLMs. It would be useful for contextual search over a bunch of API documentation and books on algorithms, but I don’t want a sycophantic “copilot” or “assistant” that does a job so bad I’d be fired for it, all while being called ableist slurs and getting blacklisted from the industry.