You can use a wattage meter between your outlet and computer. I’ve tried that, and the usage is around the same as a graphically intensive videogame while it is generating.
There’s a huge difference between a model you can run locally and a chatgpt model.
Sure, but without actually knowing what kind of hardware the servers are running, what kind of software, and what their service backend looks like, we can’t say whether it is going to be higher or lower.
I think we can assume it’s an Nvidia H200, which peaks at 700W from what I saw on Google. Multiply that by the turnaround time from your prompt to full response and you have a ceiling value. There’s probably some queueing and other delays, so in reality the time the GPU spends on your query will be much less. If you use the API, it may include the timing information.
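A quick sketch of that ceiling estimate (the 700W figure is the rated peak from above; the 12 s turnaround is just a made-up example):

```python
# Rough ceiling: assume the whole response is generated on a single GPU
# running flat out at its rated power for the full wall-clock time.
GPU_PEAK_WATTS = 700        # assumed H200 peak board power
response_seconds = 12.0     # hypothetical turnaround time from prompt to full response

ceiling_wh = GPU_PEAK_WATTS * response_seconds / 3600   # watt-hours
print(f"Ceiling: {ceiling_wh:.2f} Wh per prompt")
# Real usage will be lower: queueing delays, batching with other users,
# and the GPU rarely sits at peak power for the whole duration.
```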
Most of the power consumption comes from training and optimising models. You only interact with the finished product, so power per query is very low compared to that required to develop the LLM.
while this is true in isolation, the number of users means that inference now uses more power than training for the large actors.
The question is about per-prompt, so number of users is not relevant. What may be more relevant is number of tokens in and out.
If anything, number of users will decrease power use per prompt due to economy of scale.
You’re looking for tokens. Prompts are broken down into tokens, which are then used to generate tokens in response; all are represented by integer IDs. The common metric is tokens/second, and if utilized correctly the GPU should pin at 100% usage while generating tokens. Divide the number of tokens generated by the tokens per second to get the time spent, multiply that by the wattage, and you’re good.
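A sketch of that token-based estimate, with made-up throughput and wattage numbers:

```python
# Energy per prompt from token throughput:
#   time = output tokens / (tokens per second), energy = power * time.
output_tokens = 800        # tokens generated in the response (illustrative)
tokens_per_second = 40     # sustained generation speed on a hypothetical setup
gpu_watts = 300            # draw while the GPU is pinned at 100%

generation_seconds = output_tokens / tokens_per_second
energy_wh = gpu_watts * generation_seconds / 3600
print(f"~{generation_seconds:.0f} s generating, ~{energy_wh:.2f} Wh per prompt")
# The prompt tokens are processed in a fast prefill pass, so they add
# comparatively little energy on top of the generated tokens.
```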
sure, hardware wattage × time taken per prompt. which model specifically are you referring to and on what hardware?
Edit:
say, for example, that i’m running a model that takes ten seconds to respond on my Radeon 7900 XTX. it’s power limited to 300W, but the rest of the system also pulls power during runtime so let’s call it 400.
to get watt-hours we take watts times hours. one second is 1/3600th of an hour.
that comes out to 400 × 10 ÷ 3600 ≈ 1.11Wh. so that’s the equivalent of leaving a 6W LED light on for about 11 minutes, or a 50W old-fashioned incandescent bulb on for about 80 seconds.
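same arithmetic as a throwaway script, if you want to plug in your own numbers (400W and 10 seconds are just the figures from the example above):

```python
# watt-hours per prompt: watts * hours, where one second is 1/3600 of an hour
system_watts = 400        # 300W GPU power limit plus the rest of the system
seconds_per_prompt = 10   # time spent generating the response

wh = system_watts * seconds_per_prompt / 3600
print(f"{wh:.2f} Wh per prompt")                       # ~1.11 Wh
print(f"= a 6W LED on for {wh / 6 * 60:.0f} minutes")  # ~11 minutes
```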
accurate
I’d say “no”, if this is from a cloud LLM provider, and you want a lot of precision.
-
There are a number of factors, like KV caching, that can affect the computation cost of a given prompt and that you aren’t going to have absolute control over.
-
You don’t know where the machine lives that is running the prompt. Cooling is going to be a meaningful contributor to the amount of energy used. Even if an LLM provider wants to give you that information, it’s going to vary to some degree based on, say, ambient temperature.
-
You don’t know what internal changes are being made for hardware settings. Like, IIRC Nvidia GPUs can be run at different power restriction levels. At lower power levels, they will run more efficiently. It could be — not saying that this is being done ATM — that an LLM cloud provider could choose to throttle power usage to reduce their costs when overall load is low.
-
You don’t know what software optimizations are being made.
You might get approximate numbers from a provider, and those might be good enough for your use. Like, if someone just wants to know, say, about how much power generation infrastructure is required, it may not be necessary to be spot-on. And I’m sure that you can put some upper and lower bounds on the real value.
If you’re running an LLM on your own hardware, then you can measure it and constrain the computation so it doesn’t change between runs, so you can probably get values as accurate as you want.
-
There’s no way that I know of to see the per-prompt usage for commercially available models. They obviously hide that. I admit I haven’t researched them much, but I’m assuming each chip is processing prompts one at a time.
It’s pretty simple arithmetic: if it’s running exclusively on a single GPU, and a prompt takes X seconds to generate on said GPU, then you take the GPU’s power over X seconds plus whatever fraction of the datacenter overhead power that GPU accounts for (see the sketch below). For locally run models on your own hardware this is also trivial to calculate.
Alternatively, GPUs run at a certain number of “tokens” per second, and each prompt is a certain number of tokens fed into the model, generally scaling with the length of the prompt.
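A sketch of that single-GPU arithmetic, with the datacenter overhead folded in as a multiplier (every figure below is an assumption):

```python
# Per-prompt energy for a model running exclusively on one GPU:
# GPU power over the generation time, scaled up by the datacenter overhead
# (cooling, networking, power conversion) as a PUE-style multiplier.
gpu_watts = 700            # assumed board power while generating
seconds_per_prompt = 8.0   # the "X seconds" from above, made up for illustration
overhead_multiplier = 1.3  # assumed datacenter overhead (PUE)

gpu_wh = gpu_watts * seconds_per_prompt / 3600
total_wh = gpu_wh * overhead_multiplier
print(f"GPU only: {gpu_wh:.2f} Wh, with overhead: {total_wh:.2f} Wh")
# If the GPU is actually batching several users' prompts at once,
# divide by the batch size to get the per-prompt share.
```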
OpenAI actually released some figures on power use per prompt, but the caveat is that a single prompt to their services can trigger multiple responses (the “thinking” mode), so the figures aren’t consistent.
I’d imagine it depends on the size of the LLM. My local LLM is about 20GB and pegs the GPU for maybe 5-10 seconds (6700 XT 12GB), so you could probably extrapolate from that. I’m sure these giant AI GPUs would be more efficient though, so maybe not.
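A rough worked version of that extrapolation; the ~200W board power is my assumption for a 6700 XT under load, and the 5-10 second window is from the comment above:

```python
# Local extrapolation: energy = assumed board power * observed busy time.
board_watts = 200            # assumed draw for a 6700 XT pegged at 100% (rough guess)
busy_seconds = (5, 10)       # observed range while generating

low_wh, high_wh = (board_watts * s / 3600 for s in busy_seconds)
print(f"~{low_wh:.2f} to {high_wh:.2f} Wh per prompt locally")
# Datacenter GPUs produce far more tokens per joule, so treat this as a loose
# local reference point, not an estimate for the hosted models.
```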