Nice … I look forward to the next generation of AI counter-countermeasures that will make the internet an even more unbearable mess, in order to funnel as much money and control as possible to a small set of idiots who think they can become masters of the universe and own every single penny on the planet.
All the while, we roast to death because all of this will take more resources than the entire energy output of a medium-sized country.
I will cite the scientific article later when I find it, but essentially you’re wrong.
water != energy, but I’m actually here for the science if you happen to find it.
This particular graph exists because a lot of people freaked out over “AI draining oceans”; that’s why the original paper (I’ll look for it when I have time, I have an exam tomorrow. Fucking higher ed, man) made this graph.
Asking ChatGPT a question doesn’t take 1 hour like most of these… this is a very misleading graph
This is actually misleading in the other direction: ChatGPT is a particularly intensive model. You can run a GPT-4o-class model on a consumer mid-to-high-end GPU, which would then use something in the ballpark of gaming in terms of environmental impact.
You can also run a cluster of 3090s or 4090s to train the model, which is what people actually do, in which case it’s still in the same range as gaming. (And more productive than 8 hours of WoW grind while chugging a warmed-up Nutella glass as a drink.)
Models like Google’s Gemma (NOT Gemini; these are two completely different things) are insanely power efficient.
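If you want to try it, a local run with llama-cpp-python looks roughly like this (the model file name below is just a placeholder; any Gemma-class GGUF would do, and offloading all layers assumes your GPU has enough VRAM):

```python
# Minimal local-inference sketch with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-9b-it-Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,                         # offload everything to the GPU
)
out = llm("Explain LLM inference cost in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```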
I didn’t even say which direction it was misleading in; it’s just not a valid comparison to set a single invocation of an LLM against an unrelated continuous task.
You’re comparing Volume of Water with Flow Rate. Or, if this were power, you’d be comparing Energy (Joules or kWh) with Power (Watts).
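To make that concrete, a toy back-of-envelope (both numbers below are made-up assumptions, not measurements):

```python
# You can only compare a one-shot energy cost to a continuous load after
# multiplying the load by a duration. Numbers are illustrative guesses.
QUERY_ENERGY_WH = 0.3   # assumed energy per LLM query, watt-hours
GAMING_POWER_W = 300    # assumed continuous draw of a gaming PC, watts

seconds_of_gaming = QUERY_ENERGY_WH / GAMING_POWER_W * 3600
print(f"one query ≈ {seconds_of_gaming:.1f} s of gaming")  # ≈ 3.6 s
```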
Maybe comparing asking ChatGPT a question to doing a Google search (before their AI results) would actually make sense. I’d also dispute those “downloading a file” and other bandwidth-related numbers; network transfers are insanely optimized at this point.
I can’t really provide any further insight without finding the damn paper again (academia is cooked), but inference is famously low-cost. This is basically an “average user damage to the environment” comparison, so, for example, a user chatting with ChatGPT gobbles comparatively less water than downloading 4K porn (at least according to this particular paper).
As with any science, statistics are varied and to actually analyze this with rigor we’d need to sit down and really go down deep and hard on the data. Which is more than I intended when I made a passing comment lol
What about training an AI?
According to https://arxiv.org/abs/2405.21015
The absolute most monstrous, energy-guzzling model tested needed a 10 MW power draw to train.
Most models need less than that, and non-frontier models can even be trained on gaming hardware with comparatively little energy consumption.
That paper, by the way, says there is a 2.4x YoY increase in model-training compute, BUT it doesn’t mention DeepSeek, which rocked the Western AI world with comparatively little training cost (2.7 M GPU-hours in total).
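Back-of-envelope on what that means in energy terms (the per-GPU draw and the run length below are my assumptions, not figures from the paper):

```python
# Rough conversions only; per-GPU draw and run length are assumptions.

# DeepSeek's reported total, converted to energy:
gpu_hours = 2.7e6
gpu_kw = 0.35                       # assumed ~H800-class draw per GPU
print(f"DeepSeek ≈ {gpu_hours * gpu_kw / 1000:.0f} MWh")   # ≈ 945 MWh

# And note 10 MW is a power draw, not a total; a hypothetical
# 30-day run at that draw would be:
print(f"10 MW for 30 days = {10 * 30 * 24} MWh")           # 7200 MWh
```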
Some companies offset their model-training environmental damage with renewables and whatever bullshit, so the actual daily usage cost matters more than the huge cost at the start (“Drop by drop is an ocean formed” - Persian proverb).
Actually, if you think about it, AI might help climate change become an actual catastrophe.
It is already!
We’re rolling out renewables at like 100x the rate of AI electricity use, so no need to worry there.
Yeah, at this rate we’ll be just fine. (As long as this is still the Reagan administration.)
yep the biggest worry isn’t AI, it’s India
https://www.worldometers.info/co2-emissions/india-co2-emissions/
The West is lowering its CO2 output while India is slurping up all the CO2 we’re saving:
This doesn’t include China, of course, the most egregious of the CO2 emitters.
AI is not even a tiny blip on that radar, especially as AI runs in data centres and on devices powered by electricity, so the more your country moves to renewables, the less CO2 impact it has over time.
Could you add the US to the graphs? The EU and “the West” are hardly synonymous, even as it descends into Trumpgardia.
China has that massive rate because it manufactures for the US; the US itself is a huge polluter through military and luxury consumption, NOT manufacturing.
Still the second largest CO2 emitter, so it’d make sense to put it on for the comparison.
I’ve been thinking about this for a while. Consider how quick LLMs are.
If the energy spent powering your device for the extra time you’d need without an LLM is more than the energy of using the LLM, then it’s probably saving energy.
In all honesty, I’ve probably saved 50 hours or more since I started using it about 2 months ago.
Coding has become incredibly efficient, and I’m not suffering through search-engine hell any more.
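Rough break-even math, with every number below an assumption rather than a measurement:

```python
# All inputs are guesses for illustration; swap in your own numbers.
hours_saved = 50          # from my estimate above
device_w = 60             # assumed average draw of my machine, watts
queries = 2000            # hypothetical query count over 2 months
query_wh = 0.3            # assumed energy per query, watt-hours

saved_wh = hours_saved * device_w     # 3000 Wh of machine time avoided
spent_wh = queries * query_wh         # 600 Wh spent on inference
print(f"net ≈ {saved_wh - spent_wh:.0f} Wh saved, under these assumptions")
```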
Edit:
Lemmy users when someone uses AI: noooo, you can’t generate helpful answers to your questions which cost a tenth of a cent worth of electricity.
Also Lemmy users when they see someone consuming the electric power of an entire nuclear power plant just to play Doom The Dark Ages on their $20,000 PC: neat!
Just writing code uses almost no energy. Your PC should be clocking down when you’re not doing anything. 1 GHz is plenty for text editing.
Does ChatGPT (or whatever LLM you use) reduce the number of times you hit build? Because that’s where all the electricity goes.
What kind of code are you writing that your CPU goes to sleep? If you follow any good practices like TDD, atomic commits, etc, and your code base is larger than hello world, your PC will be running at its peak quite a lot.
Example: linting on every commit + TDD. You’ll be making loads of commits every day, linting a decent code base will definitely push your CPU to 100% for a few seconds. Running tests, even with caches, will push CPU to 100% for a few minutes. Plus compilation for running the app, some apps take hours to compile.
In general, text editing is a small part of the developer workflow. Only junior devs spend a lot of time typing stuff.
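If you want to see where the electricity actually goes, you can read the CPU’s RAPL counter around a build. A rough Linux-only sketch (path and permissions vary by machine, and the counter wraps on long runs):

```python
import pathlib
import subprocess

# Intel RAPL exposes cumulative package energy in microjoules.
# Linux/Intel-specific; may need root on newer kernels.
RAPL = pathlib.Path("/sys/class/powercap/intel-rapl:0/energy_uj")

def joules_for(cmd: list[str]) -> float:
    """Return joules burned by CPU package 0 while cmd runs."""
    before = int(RAPL.read_text())
    subprocess.run(cmd, check=True)
    after = int(RAPL.read_text())
    return (after - before) / 1e6

# e.g. measure a test run (command is just an example):
print(f"{joules_for(['make', 'test']):.0f} J")
```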
Except that half the time I don’t know what the fuck I’m doing. It’s normal for me to spend hours trying to figure out why a small config file isn’t working.
That’s not just text editing, that’s browsing the internet, referring to YouTube videos, or wallowing in self-pity.
That was before I started using gpt.
It sounds like it does save you a lot of time then. I haven’t had the same experience, but I did all my learning to program before LLMs.
Personally I think the amount of power saved here is negligible, but it would actually be an interesting study to see just how much it is. It may or may not offset the power usage of the LLM, depending on how many questions you end up asking and such.
It doesn’t always get the answers right, and I have to re-feed its broken instructions back into itself to get the right scripts, but for someone with no official coding training, this saves me so much damn time.
Consider that I’ve been juggling learning Linux for the past 4 years, along with Python, Rust, NixOS, bash scripts, YAML files, etc.
It’s a LOT.
For what it’s worth, I don’t just take the scripts and paste them in; I’m always trying to understand what the code does, so I can be less reliant as time goes on.
Are you using your PC less hours per day?
Yep, more time for doing home renovations.
We’re racing towards the Blackwall from Cyberpunk 2077…
Already there. The Blackwall is AI-powered, and Markov chains are most definitely an AI technique.
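For anyone who hasn’t seen one, a first-order Markov-chain text generator is about ten lines (the corpus here is made up):

```python
import random

# Build a first-order Markov chain from a toy corpus.
corpus = "the net is vast and the net is deep and the wall is old".split()
chain: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    chain.setdefault(a, []).append(b)

# Walk the chain to generate text.
word, out = "the", ["the"]
for _ in range(8):
    followers = chain.get(word)
    if not followers:
        break
    word = random.choice(followers)
    out.append(word)
print(" ".join(out))
```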