It will be once the bubble pops. Small local models tuned for specific tasks, running on hardware the user powers, are much less expensive for tech companies than powering and watering datacenters themselves.
Right now the tech bros genuinely think people will be cool with paying hundreds of dollars a month to rent a GPU for all their Internet tasks. AI fatigue is already setting in.
The tech bros’ investors will pull funding once they realize how asinine that is long-term. Probably already starting to, with the likes of Zuck trying to use green charity money to fund his LLMs.
I’m fully expecting the current bubble to pop in the near future as well. The whole war on Iran could incidentally serve as a catalyst, given that it’s going to drive energy prices to the moon.
That would be preferable. If ML optimization work goes open source and progresses substantially, that would be good for the little guy.
OpenAI/Anthropic are incentivized to prevent this.
They are also big enough and unregulated enough that they could use their power and political/industry relationships to drive up the price of local AI ownership (RAM, GPUs, etc.).
I’m not sure how much they can actually prevent us from just running FOSS Chinese alternatives locally, though.
Not for everyone, but they are aiming to drive up hardware ownership costs so fewer people can afford local AI.
Exactly, and a lot of big companies in the US are heavily reliant on Chinese models already. For example, Airbnb uses Qwen because they can self-host and customize it. Cursor built their latest Composer model on top of Kimi, and so on. There are far more companies using these tools than making them, so while open models hurt companies that want to sell them as a service, they’re lowering the cost for everyone else.
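And self-hosting these models is genuinely low-friction at this point. Here’s a minimal sketch of what “self-host and customize” looks like in practice, assuming you already have a local OpenAI-compatible server (e.g. vLLM) serving the model; the model name, port, and prompt are just illustrative:

```python
# Sketch: talking to a self-hosted open model through its
# OpenAI-compatible API. Assumes a local server was started with
# something like:  vllm serve Qwen/Qwen2.5-7B-Instruct
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # the local server, not OpenAI's cloud
    api_key="not-needed",                 # local servers typically ignore this
)

response = client.chat.completions.create(
    model="Qwen/Qwen2.5-7B-Instruct",     # example model, swap for whatever you host
    messages=[{"role": "user", "content": "Summarize this listing description..."}],
)
print(response.choices[0].message.content)
```

The point is that because open models speak the same API as the closed ones, moving off a hosted service is mostly a one-line `base_url` change, which is exactly why companies like the ones above can switch so easily.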
No.
Do elaborate. The tech industry has gone through many cycles of swinging between centralized mainframes and personal computers over the years. As new tech appears, it initially requires a huge amount of computing power to run. But over time people figure out how to optimize it, hardware matures, and it becomes possible to run this stuff locally. I don’t see why this tech should be any different.
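Quantization is a concrete example of that optimization curve already playing out. A back-of-the-envelope sketch of weight memory for a 7B-parameter model at different precisions (real usage adds KV cache and runtime overhead, so treat these as lower bounds):

```python
# Rough weight-memory footprint of a 7B-parameter model at common precisions.
# Weights shrink roughly linearly with bits per parameter.
params = 7e9

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gib = params * bits / 8 / 2**30  # bytes -> GiB
    print(f"{name}: ~{gib:.1f} GiB of weights")

# fp16: ~13.0 GiB  -> wants a big GPU
# int8:  ~6.5 GiB  -> fits a mid-range consumer GPU
# int4:  ~3.3 GiB  -> fits a laptop
```

That’s the same model going from datacenter hardware to a laptop purely through software-side optimization, before you even count hardware improvements.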