Alas Poor Erinaceus@lemmy.ml to Technology@lemmy.ml · English · 3 months ago · 648 points · 198 comments

‘Sputnik moment’: $1tn wiped off US stocks after Chinese firm unveils AI chatbot

www.theguardian.com

Trump calls emergence of DeepSeek a ‘wake-up call’ amid doubts about sustainability of western artificial intelligence boom
  • jacksilver@lemmy.world · 3 months ago (+32, -6)

    My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.

    • DeepSeek ~$6 million https://www.theregister.com/2025/01/26/deepseek_r1_ai_cot/?td=rt-3a
    • Llama 2 estimated ~$4-5 million https://www.visualcapitalist.com/training-costs-of-ai-models-over-time/

    I feel like the world’s gone crazy, but OpenAI (and others) is pursuing more complex, multimodal model designs. Those are going to be more expensive due to image/video/audio processing. Unless I’m missing something, that would probably account for the cost difference between current and previous iterations.

    • will_a113@lemmy.ml · 3 months ago (+38)

      The thing is that R1 is being compared to GPT-4, or in some cases GPT-4o. That model cost OpenAI something like $80M to train, so saying it has roughly equivalent performance at an order of magnitude less cost is not for nothing. DeepSeek also says the model is much cheaper to run for inference as well, though I can’t find any figures on that.
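
      Putting the two estimates side by side (both are rough figures cited in this thread, one self-reported by DeepSeek and one an outside estimate, so treat the exact ratio loosely):

      ```python
      # Rough cost comparison using the estimates cited in this thread, not official numbers.
      gpt4_training_cost = 80_000_000  # ~$80M, estimated GPT-4 training cost
      r1_training_cost = 6_000_000     # ~$6M, DeepSeek's reported R1 training cost

      ratio = gpt4_training_cost / r1_training_cost
      print(f"GPT-4 estimate is ~{ratio:.0f}x the R1 figure")  # ~13x, i.e. roughly an order of magnitude
      ```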

      • jacksilver@lemmy.world · 3 months ago (+5, -3)

        My main point is that GPT-4o and the other models it’s being compared to are multimodal, while R1 is only an LLM from what I can find.

        Something trained on audio/pictures/videos/text is probably going to cost more than something trained on text alone.

        But maybe I’m missing something.

        • will_a113@lemmy.ml · 3 months ago (+23)

          The original GPT-4 is just an LLM though, not multimodal, and the training cost for that is still estimated to be over 10x R1’s if you believe the numbers. I think where R1 is compared to GPT-4o is in so-called reasoning, where you can see the chain of thought or internal prompt paths that the model uses to (expensively) produce an output.
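
          For what it’s worth, the open-weights R1 releases expose that reasoning as plain text wrapped in <think> tags, so you can separate the trace from the final answer yourself. A minimal sketch, assuming that tag format:

          ```python
          # Split an R1-style completion into its reasoning trace and final answer.
          # Assumes the <think>...</think> convention used by the open-weights DeepSeek-R1 releases.
          def split_reasoning(completion: str) -> tuple[str, str]:
              if "</think>" in completion:
                  reasoning, answer = completion.split("</think>", 1)
                  return reasoning.replace("<think>", "").strip(), answer.strip()
              return "", completion.strip()

          reasoning, answer = split_reasoning(
              "<think>$80M vs ~$6M is roughly 13x.</think>So about an order of magnitude cheaper."
          )
          print("reasoning:", reasoning)  # the chain-of-thought text
          print("answer:", answer)        # the part shown as the final response
          ```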

          • jacksilver@lemmy.world · 3 months ago, edited (+5, -2)

            I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago: https://en.m.wikipedia.org/wiki/GPT-4

            That being said, the comparisons are against GPT-4o on LLM benchmarks, so maybe that’s a valid argument for the LLM capabilities.

            However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude’s computer use (https://docs.anthropic.com/en/docs/build-with-claude/computer-use), which DeepSeek R1 is not attempting.

            Edit: and I think the real money will be in the more complex models focused on workflow automation.

        • WalnutLum@lemmy.ml · 3 months ago (+9)

          Yeah, except DeepSeek released a combined multimodal understanding/generation model ~20 hours ago that has performance similar to its contemporaries and a similarly reduced training cost:

          https://huggingface.co/deepseek-ai/Janus-Pro-7B

          • veroxii@aussie.zone · 3 months ago (+4)

            Holy smoke balls. I wonder what else they have ready to release over the next few weeks. They might have a whole suite of things just waiting to be strategically deployed.

    • modulus@lemmy.ml · 3 months ago (+9)

      One of the things you’re missing is that the same techniques are applicable to multimodality. They’ve already released a multimodal model: https://seekingalpha.com/news/4398945-deepseek-releases-open-source-ai-multimodal-model-janus-pro-7b
