• tal@lemmy.today

    > accurate

    I’d say “no”, if this is from a cloud LLM provider and you want a lot of precision.

    • There are a number of factors, like KV caching, that affect the computation cost of a given prompt and that you aren’t going to have any real control over (see the sketch after this list).

    • You don’t know where the machine running the prompt lives. Cooling is going to be a meaningful contributor to the amount of energy used, and even if an LLM provider wants to give you that information, it’s going to vary to some degree with, say, ambient temperature.

    • You don’t know what internal changes are being made to hardware settings. Like, IIRC Nvidia GPUs can be run at different power limits, and at lower limits they run more efficiently. It could be (not saying that this is being done ATM) that an LLM cloud provider throttles power usage to reduce their costs when overall load is low; the measurement sketch at the end of this comment shows how to query the enforced limit on your own hardware.

    • You don’t know what software optimizations are being made.
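    To put rough numbers on the KV-caching point: here’s a back-of-envelope FLOP comparison. It’s a sketch only — made-up model size and token counts, using the common “~2 × parameters FLOPs per token” rule of thumb — nothing here is provider data:

```python
# Illustrative FLOP estimate only: hypothetical model size and token
# counts, using the rough "~2 * n_params FLOPs per token" rule of thumb.

N_PARAMS = 70e9          # hypothetical 70B-parameter model
PROMPT_TOKENS = 2_000
OUTPUT_TOKENS = 500

def flops_with_kv_cache(prompt, output, n_params=N_PARAMS):
    # With a KV cache, each token (prompt or generated) goes through
    # the model once; earlier tokens are not recomputed.
    return 2 * n_params * (prompt + output)

def flops_without_kv_cache(prompt, output, n_params=N_PARAMS):
    # Without a cache, every generation step reprocesses the whole
    # context seen so far.
    return sum(2 * n_params * (prompt + step + 1) for step in range(output))

print(f"with cache:    {flops_with_kv_cache(PROMPT_TOKENS, OUTPUT_TOKENS):.2e} FLOPs")
print(f"without cache: {flops_without_kv_cache(PROMPT_TOKENS, OUTPUT_TOKENS):.2e} FLOPs")
```

    And whether your prompt’s prefix happens to already sit in a shared cache on the provider’s side is exactly the kind of thing you can’t see from the outside.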

    You might get approximate numbers from a provider, and those might be good enough for your use. Like, if someone just wants to know roughly how much power-generation infrastructure is required, it may not be necessary to be spot-on. And I’m sure that you can put some upper and lower bounds on the real value.
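    For example, a bounding exercise (every number below is an assumption for illustration, not a measurement): bound one response by GPU-seconds times power draw, padded by a cooling/overhead multiplier (PUE) to cover the cooling point above:

```python
# Hypothetical bounding exercise; all inputs are assumptions, not data.
GPU_SECONDS = 5.0                          # assumed time a request occupies a GPU
POWER_LOW_W, POWER_HIGH_W = 150.0, 700.0   # lightly loaded vs. max board power
PUE_LOW, PUE_HIGH = 1.1, 1.6               # cooling/overhead multiplier range

low_wh  = GPU_SECONDS * POWER_LOW_W  * PUE_LOW  / 3600
high_wh = GPU_SECONDS * POWER_HIGH_W * PUE_HIGH / 3600
print(f"somewhere between {low_wh:.3f} and {high_wh:.3f} Wh per response")
```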

    If you’re running an LLM on your own hardware, then you can measure it directly and constrain the variables so they don’t change between runs (fixed power limit, same software, same settings), so you can probably get values as accurate as you want.
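    A minimal sketch of that local measurement, assuming an Nvidia GPU and the nvidia-ml-py bindings (`pip install nvidia-ml-py`); `measure_joules` and the stand-in workload are things I’m making up for illustration, and this captures GPU board power only, not whole-system wall power:

```python
# Sketch only: samples Nvidia GPU board power while a workload runs and
# integrates the samples into joules. Assumes nvidia-ml-py is installed.
import threading
import time

import pynvml

def measure_joules(run_inference, device_index=0, interval_s=0.05):
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
        # The enforced power limit is one of the knobs a cloud operator
        # could quietly change; on your own box you can pin it.
        limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000
        print(f"enforced GPU power limit: {limit_w:.0f} W")

        stop = threading.Event()
        energy_j = 0.0

        def sampler():
            nonlocal energy_j
            last = time.monotonic()
            while not stop.is_set():
                time.sleep(interval_s)
                now = time.monotonic()
                watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000
                energy_j += watts * (now - last)  # rectangle-rule integration
                last = now

        t = threading.Thread(target=sampler)
        t.start()
        run_inference()  # e.g. a llama.cpp or vLLM call you control
        stop.set()
        t.join()
        return energy_j
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    # Stand-in workload; replace with your actual inference call.
    joules = measure_joules(lambda: time.sleep(2.0))
    print(f"~{joules:.1f} J over the run (GPU board power only)")
```

    If you want wall power rather than just the GPU, a metered smart plug or a watt-meter between the box and the outlet covers the rest.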