Definition of “can dish it but can’t take it”

  • P03 Locke@lemmy.dbzer0.com · 1 day ago

    > DeepSeek API isn’t free, and to use Qwen you’d have to sign up for Ollama Cloud or something like that

    To use Qwen, all you need is a decent video card and a local LLM server like LM Studio.
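    As a minimal sketch of how little is involved (assuming LM Studio’s local server is running with a Qwen model loaded): the server speaks an OpenAI-compatible API, by default on localhost:1234, so you can talk to it with the stock openai Python client. The model identifier below is just an example; use whatever name LM Studio shows for your loaded model.

    ```python
    # Minimal sketch: chat with a Qwen model served locally by LM Studio.
    # Assumes the LM Studio local server is running on its default port.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint
        api_key="lm-studio",                  # any non-empty string works locally
    )

    response = client.chat.completions.create(
        model="qwen3-30b-a3b",  # example name; match whatever model you loaded
        messages=[{"role": "user", "content": "Say hello in Portuguese."}],
    )
    print(response.choices[0].message.content)
    ```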

    > Local deployment is prohibitive

    There’s a shitton of LLM models in various sizes to fit whatever video card you have. Don’t have the 256GB of VRAM required for the full 8-bit quantized 235B Qwen3 model? Fine, get the 4-bit quantized 30B model that fits on a 24GB card. Or a 6-bit quant of the Qwen3 8B Base with DeepSeek-R1 post-training that fits on an 8GB card.
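    The sizing math is easy to sanity-check yourself: quantized weights take roughly (parameters × bits ÷ 8) bytes, plus some headroom for the KV cache and activations. A rough sketch of that arithmetic (the ~10% overhead factor is my own assumption; real GGUF files mix quantization levels):

    ```python
    # Back-of-envelope VRAM estimates for quantized models.
    # weights ~= parameters * bits / 8 bytes; the overhead factor is a rough guess.
    def vram_gb(params_billions: float, bits: int, overhead: float = 1.1) -> float:
        weight_bytes = params_billions * 1e9 * bits / 8
        return weight_bytes * overhead / 1e9  # decimal GB is close enough here

    for name, params, bits in [
        ("Qwen3 235B @ 8-bit", 235, 8),  # ~258 GB -> the 256GB-class rig
        ("Qwen3 30B  @ 4-bit", 30, 4),   # ~16 GB  -> fits a 24GB card
        ("Qwen3 8B   @ 6-bit", 8, 6),    # ~7 GB   -> fits an 8GB card
    ]:
        print(f"{name}: ~{vram_gb(params, bits):.0f} GB")
    ```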

    There are literally hundreds of variations that people have made to fit whatever size you need… because it’s fucking open-source!

    • lacaio@mander.xyz · 22 hours ago

      Training LLMs is very costly, and open weights aren’t open source. For example, there are some LLMs in Brazil, but there is a notable case of a Brazilian student at the University of Düsseldorf who banded together with two other students of non-Brazilian origin to make a Brazilian LLM, a 4B model. They used Google to train it, I think, because training on low VRAM won’t work. It took many days and over $3,000. The name is Tucano.

      I know it looks cheap because there are so many models out there, but many national initiatives are eager to get into AI technology, and it’s costly.

      • P03 Locke@lemmy.dbzer0.com · 12 hours ago

        > open weights aren’t open source.

        This has always been a dumb argument, and it lacks any modicum of practicality. It’s rejecting 95% of what you need because it’s not 100% to your liking.

        As we’ve seen in the text-to-image/video world, you can train on top of base models just fine. Or create LoRAs for specialization. Or convert them into various flavors of quantized GGUFs.

        Also, you don’t need a Brazilian LLM because all of the LLMs are very multilingual.

        Spending $3000 on training is still really cheap, and depending on the size of the model, you can get away with training on 24GB or 32GB cards, which costs you the price of the card and the energy. LoRAs take almost nothing to train (see the sketch below). Any university worth anything is going to have the resources to train a model like that. None of these arguments hold water.
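        To make the “LoRAs take almost nothing” point concrete, here’s a rough sketch using Hugging Face PEFT. The base checkpoint and target modules are example choices, not a prescription; swap in whatever fits your card.

        ```python
        # Rough sketch: wrap a frozen base model with trainable LoRA adapters.
        from transformers import AutoModelForCausalLM
        from peft import LoraConfig, get_peft_model

        base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B")  # example checkpoint

        config = LoraConfig(
            r=16,                    # adapter rank: small r = tiny trainable footprint
            lora_alpha=32,
            target_modules=["q_proj", "v_proj"],  # common attention-projection targets
            task_type="CAUSAL_LM",
        )
        model = get_peft_model(base, config)

        # Only the adapters train; the multi-billion-parameter base stays frozen.
        model.print_trainable_parameters()  # typically well under 1% trainable
        ```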