You can use a wattage meter between your outlet and computer. I've tried that, and while it's generating the draw is about the same as a graphically intensive video game.
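If you want a software-side number to compare against the wall meter, something like this works on an NVIDIA card (a minimal sketch only; it assumes nvidia-smi is installed and on PATH, and the one-second poll interval is arbitrary):

    import subprocess, time

    def gpu_power_watts():
        # Ask the driver for the instantaneous board power draw in watts
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        )
        return float(out.stdout.strip().splitlines()[0])

    # Poll once a second while the model generates and eyeball it against the meter
    while True:
        print(f"GPU draw: {gpu_power_watts():.0f} W")
        time.sleep(1)

Note this only covers the GPU, not CPU, RAM, or PSU losses, so the wall reading will be higher.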
There’s a huge difference between a model you can run locally and a ChatGPT model.
Sure, but without actually knowing what hardware the servers are running, what software they're using, and what their serving backend looks like, we can’t say whether it’s going to be higher or lower.
I think we can assume it’s an NVIDIA H200, which peaks at 700 W from what I saw on Google. Multiply that by the turnaround time from your prompt to the full response and you have a ceiling value. There’s probably some queueing and other delays, so in reality the time the GPU spends on your query will be much less. If you use the API, it may include timing information.
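Back-of-the-envelope version of that ceiling (a sketch only; the 700 W peak and the 20 s turnaround are assumed numbers, and real serving batches many requests per GPU, so the true per-request share is lower):

    # Assumptions: single H200 at ~700 W board power, 20 s prompt-to-last-token turnaround
    PEAK_WATTS = 700
    turnaround_seconds = 20

    energy_joules = PEAK_WATTS * turnaround_seconds   # W * s = J
    energy_wh = energy_joules / 3600                  # 1 Wh = 3600 J

    print(f"Ceiling: {energy_joules:.0f} J (~{energy_wh:.2f} Wh) per response")

With those numbers you get about 14,000 J, or roughly 4 Wh, as an upper bound per response.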