I have great success with local LLMs in some of my workflows and automation.
I use them for line completion and for basic functions/asks while developing that I don't want to waste tokens on.
I also use it in automation. I run my own media server for a few dozen people with an automated request system ("Jellyseerr") that adds content. I have automation that feeds recent media requests to a local LLM and automatically requests similar content.
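The core of that automation is simple enough to sketch. This is a minimal, hedged example assuming a local Ollama instance on its default port; the model name and the prompt wording are placeholders, and fetching the recent requests from Jellyseerr's API is left out:

```python
# Sketch: ask a local LLM (via Ollama's /api/generate endpoint) for titles
# similar to recently requested ones. Assumes Ollama is running locally;
# the model name "llama3" is an example, not a recommendation.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def parse_titles(llm_output: str) -> list[str]:
    """Extract one title per line, stripping leading bullets/numbering."""
    titles = []
    for line in llm_output.splitlines():
        cleaned = line.strip().lstrip("-*0123456789.) ").strip()
        if cleaned:
            titles.append(cleaned)
    return titles

def suggest_similar(recent_titles: list[str], model: str = "llama3") -> list[str]:
    prompt = (
        "Users recently requested these titles:\n"
        + "\n".join(f"- {t}" for t in recent_titles)
        + "\nList 5 similar titles, one per line, with no commentary."
    )
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_titles(json.load(resp)["response"])
```

The returned titles would then be searched and submitted back through the request system. A 7B-class model handles this fine because the task is narrow: it only has to name similar media, not reason about anything.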
Local AI can be useful. But I would rather see nice implementations that use small but brilliantly tuned models for, let's say, better predictive text. It's already somewhat AI-based; I just would like it to be better.
The diminishing returns are kind of insane if you compare the performance and hardware requirements of a 7B and a 100B model. In some cases the smaller model can even perform better because it's more focused and won't be as subtle about its hallucinations.
Something is going to have to fundamentally change before we see any big improvements, because I don't see scaling it up further ever producing AGI, or even fixing the hallucinations and logic errors it makes.
In some ways it's a bit like the crypto/blockchain speculators saying it's going to change the world, when in reality the vast majority of proposed applications would have been better implemented with a simple centralized database.
A local LLM is still an LLM… I don't think it's going to be terribly useful no matter how good your hardware is.