• ch00f@lemmy.world · 11 hours ago

    > Go self-hosted

    So yours and another comment I saw today got me to dust off an old Docker container I was playing with a few months ago to run deepseek-r1:8b on my server’s Intel A750 GPU with 8 GB of VRAM. Not exactly top-of-the-line, but not bad.

    I knew it would be slow and not as good as ChatGPT or whatever, which I guess I can live with. I did ask it to write some example Rust code today, which I hadn’t even thought to try, and it worked.
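
    For anyone curious, here’s roughly what one of those requests looks like. This is a minimal sketch that assumes the container exposes Ollama’s HTTP API on its default port (the deepseek-r1:8b tag is Ollama-style naming, so that’s my guess); adjust the model name, port, and prompt for your own setup.

    ```python
    # Minimal example: ask a locally hosted model for some Rust code via
    # Ollama's /api/generate endpoint. Assumes Ollama is listening on the
    # default port 11434 and deepseek-r1:8b has already been pulled.
    import json
    import urllib.request

    payload = json.dumps({
        "model": "deepseek-r1:8b",
        "prompt": "Write a small example Rust function that reverses a string.",
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())

    print(body["response"])  # the model's reply text
    ```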

    But I also asked it to describe the characters in a popular TV show, and it got a ton of details wrong.

    8B is the highest parameter count I can run on my card. How do you propose someone in my situation run an LLM locally? Can you suggest some better models?

      • ch00f@lemmy.world · 2 hours ago

        Well, not off to a great start.

        To be clear, I think getting an LLM to run locally at all is super cool, but saying “go self-hosted” sort of glosses over the fact that getting a local LLM to do anything close to what ChatGPT can do is a very expensive hobby.

      • ch00f@lemmy.world · 10 hours ago

        Any suggestions on how to get these into GGUF format? I found a GitHub project that claims to convert them, but I’m wondering if there’s a more direct way.
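
        (For what it’s worth, the converter I see pointed to most often is the script that ships with llama.cpp, convert_hf_to_gguf.py, and many models are already published in GGUF form on Hugging Face, which skips the conversion entirely. Below is a rough sketch of driving that script from Python; the script name and flags match recent llama.cpp versions but may differ in yours, and the paths are placeholders.)

        ```python
        # Rough sketch: convert a Hugging Face checkpoint to GGUF with the
        # converter bundled in llama.cpp. Paths are placeholders; flag names
        # may vary between llama.cpp versions, so check --help first.
        import subprocess

        subprocess.run(
            [
                "python",
                "llama.cpp/convert_hf_to_gguf.py",  # path to your llama.cpp checkout
                "path/to/hf-model-dir",             # directory with config.json + safetensors
                "--outfile", "model-q8_0.gguf",
                "--outtype", "q8_0",                # 8-bit; use f16 for full precision
            ],
            check=True,
        )
        ```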

    • Mika@piefed.ca · 11 hours ago

      It comes down to the amount of VRAM / unified RAM you have. There is no magic to make an 8B model perform like the top-tier subscription LLMs (likely in the 500B+ parameter range; I wouldn’t be surprised if it’s trillions).

      If you can get up to 32B / 80B models, that’s where the magic starts to happen.
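
      As a rough back-of-the-envelope illustration (my own ballpark math, ignoring context length and anything beyond a flat overhead factor):

      ```python
      # Back-of-the-envelope VRAM estimate: parameters * bytes-per-weight,
      # plus ~20% overhead for the KV cache and runtime buffers.
      # Ballpark figures only, not measured numbers.
      def approx_vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
          total_bytes = params_billion * 1e9 * (bits_per_weight / 8)
          return total_bytes * overhead / 1e9

      for size in (8, 32, 80):
          for bits in (4, 8):
              print(f"{size}B @ {bits}-bit: ~{approx_vram_gb(size, bits):.0f} GB")

      # Roughly: 8B @ 4-bit fits in ~5 GB, 32B @ 4-bit wants ~19 GB,
      # and 80B @ 4-bit wants ~48 GB, well past a single consumer card.
      ```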

    • Sir. Haxalot@nord.pub · 9 hours ago

      Honestly, you pretty much don’t. LLMs are insanely expensive to run, since most of the model improvements come from simply growing the model. It’s not realistic to run LLMs locally and compete with the hosted ones; it pretty much requires economies of scale. Even if you invest in a 5090, you’re going to be behind the purpose-built GPUs with 80 GB of VRAM.

      Maybe it could work for some use cases, but I’d rather just not use AI.