- cross-posted to:
- technology@lemmy.world
Datacentre-hosted LLMs have a long way to go before they are accurate enough for mass deployment, and it looks to me like it will take a miracle of some sort for them to manage it before this bubble pops. It could take decades or more; after all, we don’t have a real understanding of how the brain works, so hoping to mimic it now seems a bit premature.
I can see RAG and fine-tuning making an LLM accurate enough to be functional for a range of natural-language-processing tasks (with a decent amount of human input that is ultimately used for fine-tuning). But even if just for cost reasons (in RAG’s case), you will surely want your LLM hosted locally. I don’t see a need for that datacentre.
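To make the RAG point concrete, here is a minimal sketch of the idea: retrieve the most relevant local documents for a query and prepend them to the prompt, so a locally hosted model answers from your own data instead of its unreliable parametric memory. The example documents are invented, and the retrieval is a toy word-overlap score; a real setup would use an embedding model and a vector store, and would hand the prompt to something like llama.cpp or Ollama.

```python
import re
from collections import Counter

# Hypothetical local knowledge base (stand-in for your organisation's docs).
documents = [
    "The VPN service is rebooted every Sunday at 02:00.",
    "Expense claims must be filed within 30 days.",
    "The office coffee machine is descaled monthly.",
]

def tokens(text: str) -> Counter:
    # Lowercase bag of words, punctuation stripped.
    return Counter(re.findall(r"\w+", text.lower()))

def score(query: str, doc: str) -> int:
    # Toy relevance: number of shared words between query and document.
    return sum((tokens(query) & tokens(doc)).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    # Return the k highest-scoring documents.
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Prepend retrieved context so the model grounds its answer in it.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("When is the VPN rebooted?")
# `prompt` would then be sent to the locally hosted LLM,
# rather than to a datacentre-hosted API.
```

The point of the pattern is that the model never needs your data baked into its weights, which is also why it can run locally at modest cost.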
Venture capitalists and Silicon Valley bros might have burned through trillions, but at least that work got trained LLMs useful enough for people to run in their own organisations or at home.
If they were more measured in their approach, I bet the results would be better. Instead they’re plopping LLMs everywhere with little regard for what it’ll do to the impacted communities.