• sudoer777@lemmy.ml

    Go self-hosted, go local

    Last I checked, it cost ~$6000 to run a high-end LLM at terrible speeds, and that was before the RAM price hike. At the rate things are changing, that hardware might be obsolete in a few years anyway. And I don’t have that much money either.

    I’m going to stick with the free OpenCode Zen models for now, and when those stop being available, maybe switch to OpenCode Black or Synthetic or whatever and use the open-weight models there.