☆ Yσɠƚԋσʂ ☆@lemmy.ml to Technology@lemmy.ml · English · 21 days ago

Huawei enters the GPU market with 96 GB VRAM GPU under 2000 USD, meanwhile NVIDIA sells from 10,000+ (RTX 6000 PRO)

www.alibaba.com

Huawei's New Atlas 300i Duo 96g Deepseek Ai Gpu Server Inference Card Fan Cooler Video Acceleration Graphic Card For Workstation - Buy Atlas 300i Duo 96g huaweis Gpu server Gpu hua Wei Gpu Server hua Wei Gpu new Huawei Gpu Card fan-cooled Graphics Card workstation Graphics Card Product on Alibaba.com
  • nutbutter@discuss.tchncs.de · 20 days ago

    You can train or fine-tune a model on any GPU. Granted, it will be slower, but more VRAM is better.
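As a rough sketch of why VRAM is often the binding constraint here (back-of-the-envelope figures assuming mixed-precision Adam at about 16 bytes per parameter, ignoring activations, batch size, and KV cache):

```python
# Back-of-the-envelope VRAM estimate (activations not counted).
# Full fine-tuning with Adam in mixed precision costs roughly:
#   2 B fp16 weights + 2 B fp16 grads + 4 B fp32 master weights
#   + 4 B + 4 B Adam moments = 16 B per parameter.
BYTES_PER_PARAM_TRAIN = 16
BYTES_PER_PARAM_INFER = 2   # fp16 weights only

def vram_gb(n_params: float, bytes_per_param: int) -> float:
    """Estimated VRAM footprint in gigabytes."""
    return n_params * bytes_per_param / 1e9

seven_b = 7e9
print(f"7B inference: {vram_gb(seven_b, BYTES_PER_PARAM_INFER):.0f} GB")  # 14 GB
print(f"7B training:  {vram_gb(seven_b, BYTES_PER_PARAM_TRAIN):.0f} GB")  # 112 GB
```

By this estimate a 96 GB card holds a 7B model for inference with room to spare, while naive full fine-tuning of the same model would already need sharding or a memory-saving optimizer, regardless of vendor.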

    • geneva_convenience@lemmy.ml · 20 days ago

      No. The CUDA training tooling is Nvidia-only.

      • herseycokguzelolacak@lemmy.ml · 20 days ago

        PyTorch runs on HIP now.

        • geneva_convenience@lemmy.ml · 20 days ago (edited)

          AMD has been lying about that every year since 2019.

          Last time I checked it didn’t. And it probably still doesn’t.

          People wouldn’t be buying NVIDIA if AMD worked just as well. The VRAM prices NVIDIA charges are outrageous.

          • herseycokguzelolacak@lemmy.ml · 19 days ago

            I run llama.cpp and PyTorch on MI300s. It works really well.
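For reference, the HIP path in llama.cpp is a documented build option. A sketch, assuming a working ROCm install; note the CMake flag has been renamed across releases (older trees used LLAMA_HIPBLAS), so check the repo's build docs for your version:

```sh
# Hypothetical build-and-run sketch for llama.cpp on an AMD GPU via HIP.
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build -DGGML_HIP=ON -DCMAKE_BUILD_TYPE=Release
cmake --build build -j
# Offload all layers to the GPU; the model path is a placeholder.
./build/bin/llama-cli -m ./model.gguf -ngl 99 -p "Hello"
```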

            • geneva_convenience@lemmy.ml · 19 days ago (edited)

              Can you train on it too? I tried PyTorch on AMD once and it was awful. They promised mountains but delivered nothing. Newer activation functions were all broken.

              llama.cpp is inference only, for which AMD works great too after converting to ONNX. But training was awful on AMD in the past.

              • herseycokguzelolacak@lemmy.ml · 19 days ago

                We have trained transformers and diffusion models on AMD MI300s, yes.

                • geneva_convenience@lemmy.ml · 19 days ago

                  Interesting. So why does NVIDIA still hold such a massive monopoly in the datacenter?

                  • herseycokguzelolacak@lemmy.ml · 18 days ago

                    It takes a long time for large companies to change their purchases. Many of these datacenter contracts are locked in for years. You can’t just change them overnight.

