When AI is actually invented I’ll call it AI. Right now we have a steroid-juiced parrot that’s based on old-school machine learning. It’s great at summarizing simple data, but terrible at real tasks.
This is more people who aren’t dumb telling the marketing teams to stop hyping something that doesn’t exist. The dot com boom is echoing. The profit will never materialize.
But the profit absolutely can materialize because it is useful.
Right now the problem is hardware / data center costs, but those can come down at a per user level.
They just need to make it useful enough within those cost constraints, which is 100% without a doubt possible; it’s just a matter of whether they can do it before they run out of money.
Edit: for example, Nvidia giving OpenAI hardware in exchange for ownership helps bring down their costs, which gives them a longer runway to find that sweet spot.
The current machine learning models (AI for the stupid) rely on input data, which is running out.
Processing power per watt is stagnating. Moore’s law hasn’t been true for years.
Who will pay for these services? The dot com bubble destroyed everyone who invested in it. Those that “survived” sprouted off of the corpse of that recession. LLMs will probably survive, but not in the way you assume.
Nvidia helping OpenAI survive is a sign that the bubble is here and ready to blow.
That’s part of the equation, but there is still a lot of work that can be done to optimize the LLMs themselves, and the more optimized and refined they are, the cheaper they become to run, and you can also use even bigger datasets that weren’t feasible before.
I think there’s also a lot of room to optimize the data in the dataset. Ingesting the entire world’s information doesn’t lead to the best output, especially if you’re going for something more factual vs creative, like an LLM trained to assist with programming in a specific language.
And people ARE paying for it today; OpenAI has billions in revenue. The problem is the hardware is so expensive, and the data centers needed to run it are also expensive. They need to continue optimizing things to narrow that gap. OpenAI charges $20 USD/month for their base paid plan. They have millions of paying customers, but millions isn’t enough to offset their costs.
So they can:
1. reduce costs so millions is enough, or
2. make it more useful so they can gain more users.
This is so early that they have room to improve both 1 and 2.
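To make that concrete, here’s a rough back-of-envelope sketch of the break-even math. Only the $20/month price comes from the thread above; the monthly burn figure is a made-up placeholder, not OpenAI’s actual number:

```python
# Back-of-envelope subscription break-even: how many paying users are
# needed for revenue to cover monthly costs. The burn figure is hypothetical.

def monthly_breakeven_users(monthly_cost_usd: float, price_usd: float) -> float:
    """Paying users needed for subscription revenue to cover monthly costs."""
    return monthly_cost_usd / price_usd

price = 20.0          # base paid plan, USD/month (from the thread)
monthly_cost = 700e6  # hypothetical total monthly burn, USD

print(f"{monthly_breakeven_users(monthly_cost, price) / 1e6:.0f}M users to break even")
# Option 1 (halve costs) halves the user count needed:
print(f"{monthly_breakeven_users(monthly_cost / 2, price) / 1e6:.1f}M users")
```

At those placeholder numbers it comes out to 35M paying subscribers, which is why “millions isn’t enough” and why both options move the needle.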
But like I said, they (and others like them) need to figure that out before they run out of money and everything falls apart and needs to be built back up in a more sustainable way.
We won’t know if they can or can’t until they do it, or it pops.
None of this is true.
I’ve worked on data centers monitoring power consumption; we need to stop calling LLM power sinks the same thing as data centers. It’s basically whitewashing the power-sucking environmental disasters that they are.
Machine learning is what you are describing. LLMs being puppeted as AI is destructive marketing and nothing more.
LLMs are somewhat useful at dumb tasks, and they do a pretty dumb job at it. They feel like me when I was new at my job: for decades I could produce mediocre bullshit, but I was too naive to know it sucked. You can’t see how much they suck yet because you lack experience in the areas you use them in.
Your two cost saving points are pulled from nowhere just like how LLM inference works.
It is unlikely to turn a profit because the returns need to be greater than the investment for there to be any profit. The trends show that very few want to pay for this service. I mean, why would you pay for something that’s the equivalent of asking someone online or in person, for free or at very little cost by comparison?
Furthermore, it’s a corporation that steals from you and doesn’t want to be held accountable for anything. For example, the chatbot suicides, and the fact that their business model would fall over if they actually had to pay for the data they use to train their models.
The whole thing is extremely inefficient and makes us dumber via atrophy. Why would anyone want to outsource their thinking process to a third party? It’s like assuming everyone wants a mobility scooter.
These companies have BILLIONS in revenue and millions of customers, and you’re saying very few want to pay…
The money is there; they just need to optimize the LLMs to run more efficiently (this is continually progressing), and the hardware side needs to work on reducing hardware costs as well (including electricity usage / heat generation). If OpenAI can build a datacenter that reuses all its heat, for example to heat a nearby hospital, that’s another step towards reaching profitability.
I’m not saying this is an easy problem to solve, but you’re making it sound no one wants it and they can never do it.
Pretty obvious you’re not reading Zitron et al.
You > just need to optimize.
Like they haven’t just been trying for years already with, again, incredibly marginal returns that continue to diminish with each optimization.
It’s not easy to solve because it’s not possible to solve. ML has been around since before computers; it’s not magically going to get efficient. The models are already optimized.
Revenue isn’t profit. These companies are the biggest cost sinks ever.
Heating a single building is a joke marketing tactic compared to the actual energy impact these LLM energy sinks have.
I’m an automation engineer; LLMs suck at anything cutting-edge. It’s basically a mainstream knowledge reproducer with no original outputs, meaning it can’t do anything that isn’t already done.
Why on earth do you think things can’t be optimized on the LLM level?
There are constant improvements being made there; they are not in any way, shape, or form fully optimized yet. Go follow the /r/LocalLlama sub for example: there are constant breakthroughs happening, and then a few months later you see an LLM utilizing them come out, and they’re suddenly smaller, or you can run a larger model in a smaller memory footprint, or you can get a larger context on the same hardware, etc.
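As a simplified illustration of one such optimization: quantizing weights shrinks a model’s memory footprint roughly in proportion to bits per weight. This sketch ignores KV cache, activations, and quantization overhead, and the 70B parameter count is just an example:

```python
# Rough memory footprint of model weights at different quantization widths.
# Simplified: ignores KV cache, activations, and per-block scale overhead.

def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Bytes for the weights alone, expressed in GB (1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 70e9  # e.g. a 70B-parameter model

for bits in (16, 8, 4):
    print(f"{bits:>2}-bit: {weight_memory_gb(n_params, bits):.0f} GB")
# 16-bit: 140 GB, 8-bit: 70 GB, 4-bit: 35 GB: the same model going from
# multi-GPU territory to something far smaller hardware can hold.
```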
This is all so fucking early, to be so naive or ignorant to think that they’re as optimized as they can get is hilarious.
I’ll take a step back. These LLM models are interesting. They are being trained in interesting new ways. They are becoming more ‘accurate’, I guess. ‘Accuracy’ is very subjective and can be manipulated.
Machine learning is still the same though.
LLMs still will never expand beyond their inputs.
My point is it’s not early anymore. We are near or past the peak of LLM development. The extreme amount of resources being thrown at it is the sign that we are near the end.
That sub should not be used to justify anything, just like any subreddit at any point in time.
I think we’re just going to have to agree to disagree on this part.
I’ll agree though that IF what you’re saying is true, then they won’t succeed.
Fair enough. I’d be fine being wrong.
Improved efficiency would reduce the catastrophic energy demands LLMs will have in the future. Assuming your reality comes true it would help reduce their environmental impact.
We’ll see. This isn’t the first “it’s the future” technology I’ve seen, and I’m barely 40.
Yep, I am. Just follow the money. Here’s an example:
https://www.theregister.com/2025/10/29/microsoft_earnings_q1_26_openai_loss/
You > not saying this is an easy problem to solve, but you’re making it sound no one wants it and they can never do it.
… That’s all in your head, mate. I never said that nor did I imply it.
What I am implying is that the uptake is so small compared to the investment that it is unlikely to turn a profit.
You > If OpenAI can build a datacenter that reuses all its heat, for example to heat a nearby hospital, that’s another step towards reaching profitability.
😐
I’ve worked in the building industry for over 20 years. This is simply not feasible both from a material standpoint and physics standpoint.
I know it’s an example, but this kind of rhetoric is exactly the kind of wishful thinking that I see in so many people who want LLMs to be a main staple of our everyday lives. Scratch the surface and it’s all just fantasy.
You > the trends show that very few want to pay for this service.
Me > These companies have BILLIONS in revenue and millions of customers, and you’re saying very few want to pay
Me > … but you’re making it sound no one wants it
You > … That’s all in your head, mate. I never said that nor did I imply it.
Pretty sure it’s not all in my head.
The heat example was just one small example of things these large data centers (not just AI ones) can do to help lower costs, and it is a real thing being considered. It’s not a solution to their power-hungry needs, but it is a small step forward in how we can do things better.
https://www.bbc.com/news/articles/cew4080092eo
Edit: Another that is in use: https://www.itbrew.com/stories/2024/07/17/inside-the-data-center-that-heats-up-a-hospital-in-vienna-austria
This system “allows us to cover between 50% and 70% of the hospital’s heating demand, and save up to 4,000 tons of CO2 per year,” he said, also noting that “there are virtually no heat losses” since “the connecting pipe is quite short.”
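For a sense of scale: nearly all the electricity an IT load draws ends up as low-grade heat, so the recoverable energy is roughly proportional to the facility’s power draw. A rough sketch where every figure is a hypothetical placeholder (the 70% capture fraction simply echoes the article’s upper coverage figure):

```python
# Rough sketch of how much reusable heat a data center throws off.
# Nearly all electrical power drawn by IT equipment becomes heat.
# All figures are hypothetical placeholders for illustration.

it_load_mw = 1.0              # hypothetical average IT load, MW
hours_per_year = 8760
heat_mwh = it_load_mw * hours_per_year  # ~8,760 MWh of waste heat per year

recovery_fraction = 0.7       # not all heat is capturable at useful temperatures
hospital_demand_mwh = 15_000  # hypothetical annual heating demand

coverage = heat_mwh * recovery_fraction / hospital_demand_mwh
print(f"Covers roughly {coverage:.0%} of the hospital's heating demand")
```

At those placeholder numbers a 1 MW facility covers roughly 41% of that demand, so the article’s 50-70% claim is at least the right order of magnitude for a modest facility next to one building.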
And how, pray tell, will doing all of that return a profit?
I’m from Australia, so I can only speak to the Australian climate and industry. I can confidently say that the model shown in Vienna is not feasible in our country. We simply don’t have much use for excess heat and we are highly susceptible to droughts. DCs use a lot of water to cool down and having these all over the country for private enterprise is bonkers. So, that’s instantly a market that isn’t profitable. Furthermore, it’s not feasible to build a pipe and re-route the heat across large distances with minimal heat loss.
However, even when or if they implement this throughout all of Austria, it won’t return a profit (which is what I thought your attachment was about here, not the feasibility; we are talking about profitability, right?). The project cost €3.5 million and was partially funded by tax. It’s not a great example of profitability, but a good example of sustainability measures.
Also, reading comprehension assistance: not feasible != impossible.
Australia isn’t the greatest spot to run a data centre in general in terms of heat, but I do understand the need for sovereign data centres, so this obviously can’t work everywhere.
What makes you think €3.5 million can’t be profitable? A mid-sized hospital’s heating bill can run into the many hundreds of thousands, or even into the millions, especially in a colder environment. A 5-6 year payback on that wouldn’t be terrible and would be worth an upfront investment. Even a 10-year payback isn’t terrible.
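The payback claim is easy to sanity-check. The €3.5M cost is the figure quoted earlier in the thread; the annual savings figure below is a hypothetical placeholder:

```python
# Simple payback-period check for a heat-reuse connection.
# The annual savings figure is a hypothetical placeholder.

capex_eur = 3.5e6             # upfront connection cost (figure from the thread)
annual_savings_eur = 600_000  # hypothetical avoided heating cost per year

payback_years = capex_eur / annual_savings_eur
print(f"Simple payback: {payback_years:.1f} years")  # 5.8 years at these numbers
```

Anything in the high-hundreds-of-thousands range of annual savings lands in that 5-6 year window; halve the savings and it stretches toward the 10-year mark.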
These colder locations are ideal for data centres in the first place because operators generally want a cooler climate to begin with, so they will gravitate to them when possible.
Edit: And if you build a data centre with this ability to recoup heat, you could start building further commercial things in the area and keep the heat redistribution very close. You don’t need to travel long distances. You do need to put some thought into where the pipes go and what’s around or will be built around them.
Ok. We’re deviating from the point of LLM profitability here and have driven this conversation into the weeds. So I’ll make this one last comment and then I’m done. This debate has been interesting but exhausting.
Final counterpoints:
1. The €3.5 million is the cost of the connection, footed by the energy provider and taxpayers, and it provides no ROI to investors like NVIDIA, hence no profit for LLM and “AI” companies in general.
2. As far as I can tell, the biggest source of external income for LLM companies is subscriptions, and there is simply not enough uptake in subscriptions to get ROI, so they try to force consumers to use it, which ends up pushing away your customer base since you’re taking away their power of choice.
3. For them to obtain ROI, literally the entire planet needs to use it, which isn’t feasible because, as a consumer, you need income to consume, and the larger driver of investment into LLMs is to reduce the cost of labour.
LLMs have long since gone beyond the scope of interesting science project to something driven by pure parasitic greed.