• ranzispa@mander.xyz · 8 hours ago

    Do you have a cluster with 10 A100s lying around? Because that’s what it takes to run DeepSeek. It is open source, but it is far from accessible to run on your own hardware.

    • khepri@lemmy.world · edited · 57 minutes ago

      I run quantized versions of DeepSeek that are usable enough for chat, and it’s on a home setup that is so old and slow by today’s standards I won’t even mention the specs lol. Let’s just say the rig is from 2018, and it wasn’t near the best even back then.

    • DandomRude@lemmy.world · 7 hours ago

      Yes, that’s true. It is resource-intensive, but unlike other capable LLMs, running it yourself is at least possible: not for most private individuals, given the requirements, but for companies with the necessary budget.

      • FauxLiving@lemmy.world · 7 hours ago

        They’re overestimating the costs. 4x H100 and 512GB DDR4 will run the full DeepSeek-R1 model; that’s about $100k of GPU and $7k of RAM. It’s not something you’re going to have in your homelab (for a few years at least), but it’s well within the budget of a hobbyist group or a moderately sized local business.
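        As a rough sanity check on that estimate, here’s a back-of-envelope sketch in Python (the FP8 assumption and the per-GPU price are my own ballpark figures, not exact numbers):

            # Rough memory and cost math for running the full DeepSeek-R1 model.
            params_b = 671          # DeepSeek-R1 has ~671B parameters
            bytes_per_param = 1     # native FP8 weights: roughly 1 byte per parameter
            weights_gb = params_b * bytes_per_param        # ~671 GB of weights
            gpu_vram_gb = 4 * 80    # 4x H100 80GB = 320 GB of VRAM
            sys_ram_gb = 512        # DDR4 holding whatever doesn't fit on the GPUs
            print(weights_gb <= gpu_vram_gb + sys_ram_gb)  # True: 671 GB vs 832 GB total
            gpu_cost = 4 * 25_000   # assuming ~$25k per H100
            print(gpu_cost)         # ~$100k, in line with the figure above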

        Since it’s an open-weights model, people have created quantized and distilled versions of it. The distilled models have far fewer parameters, and quantization uses fewer bits per parameter, so the RAM requirements end up a lot lower.

        You can run quantized versions of DeepSeek-R1 locally. I’m running deepseek-r1-0528-qwen3-8b on a machine with an NVIDIA 3080 12GB and 64GB RAM. Unless you pay for an AI service and use their flagship models, it’s pretty indistinguishable from the full model.

        If you’re coding or doing other tasks that push the AI, it’ll stumble more often, but for a ‘ChatGPT’-style interaction you couldn’t tell the difference between it and ChatGPT.
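        If anyone wants to try something similar, here’s a minimal sketch using llama-cpp-python as one possible runner (the GGUF filename, quant level, and offload setting are placeholders, adjust for whatever build you download and your VRAM):

            # Minimal sketch: load a quantized 8B DeepSeek-R1 distill with GPU offload.
            from llama_cpp import Llama

            llm = Llama(
                model_path="deepseek-r1-0528-qwen3-8b-Q4_K_M.gguf",  # placeholder filename
                n_ctx=8192,          # context window; lower it if you run out of memory
                n_gpu_layers=-1,     # -1 = put every layer on the GPU; a ~5 GB Q4 fits in 12 GB
            )

            out = llm.create_chat_completion(
                messages=[{"role": "user", "content": "Explain quantization in one paragraph."}],
                max_tokens=256,
            )
            print(out["choices"][0]["message"]["content"])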

        • brucethemoose@lemmy.world · edited · 3 hours ago

          You should be running hybrid inference of GLM Air with a setup like that. Qwen 8B is kinda obsolete.

          I dunno what kind of speeds you absolutely need, but I bet you could get at least 12 tokens/s.
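          To make “hybrid inference” concrete: the idea is to keep part of the model in VRAM and leave the rest in system RAM. A rough sketch of how that could look with llama-cpp-python as the runner (the GGUF name, quant, and layer split are assumptions; tune n_gpu_layers to whatever fits your card):

              # Sketch: hybrid CPU/GPU inference with a partial layer offload,
              # plus a rough tokens/s measurement. Filename and layer count are assumptions.
              import time
              from llama_cpp import Llama

              llm = Llama(
                  model_path="GLM-4.5-Air-Q4_K_M.gguf",  # placeholder; use whatever quant fits your RAM
                  n_ctx=8192,
                  n_gpu_layers=20,   # a handful of layers in VRAM, the rest stays in system RAM
              )

              start = time.perf_counter()
              out = llm.create_chat_completion(
                  messages=[{"role": "user", "content": "Write a short paragraph about MoE models."}],
                  max_tokens=200,
              )
              elapsed = time.perf_counter() - start
              tokens = out["usage"]["completion_tokens"]
              print(f"{tokens / elapsed:.1f} tokens/s")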