• Rhaedas@fedia.io · 7 hours ago

    Unfortunately, the latest stuff I’ve seen is all about keeping character consistency, which is basically having a fixed frame of reference for every generation. What I don’t get, not knowing much about the details, is how LLM generation could be faster than actual 3D modeling with more detail. Perhaps overall it is faster per frame to generate a 2D image than to track all the polygons.

    Not saying which approach is right; there’s a lot of baggage around discussing AI stuff. I’m just wondering about the actual tech itself.

    • knightly the Sneptaur@pawb.social · 5 hours ago

      What I don’t get, not knowing much about the details, is how LLM generation could be faster than actual 3D modeling with more detail.

      It’s not. DLSS5 takes a frame as rendered normally by your GPU and feeds it into a second $3k GPU to run the AI image transformer.

      There is no performance benefit; in fact, it adds a bit of latency to the process.
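For intuition on the latency point, here is a toy back-of-the-envelope sketch. All numbers are made up for illustration (they are not real DLSS measurements): if an extra post-process pass runs serially after rendering, its cost comes straight out of the frame budget.

```python
# Toy arithmetic: how a fixed extra per-frame pass affects frame rate.
# All timings are made-up placeholders, not real DLSS measurements.

def fps_with_extra_pass(base_fps: float, extra_ms: float) -> float:
    """Frame rate after adding a serial post-process pass costing extra_ms."""
    base_ms = 1000.0 / base_fps          # time per frame before the pass
    return 1000.0 / (base_ms + extra_ms)  # time per frame after the pass

# A hypothetical 5 ms pass on top of a 60 fps render:
print(round(fps_with_extra_pass(60.0, 5.0), 1))  # ~46.2 fps
```

Pipelining the pass against the next frame's render can hide the throughput cost, but the input-to-photon latency of each individual frame still grows by the pass duration.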

      • paraphrand@lemmy.world · 3 hours ago

        And Nvidia claims it will release without the need for a second 50-series card this year.

        Lots of bullshit being laid out by Nvidia here.

    • Eggymatrix@sh.itjust.works · 6 hours ago

      Polygons aren’t where the expensive computation is. The bottleneck is raytracing, volumetric fog, and so on: all the things that make a game look more real and natural.

      I think this DLSS stuff could potentially substitute for raytracing and the other light/shadow/reflection/transparency effects that are very expensive both to program correctly and to calculate every frame.

      My two cents

      • egregiousRac@piefed.social · 5 hours ago

        Lighting is the area image generation struggles with most right now. Individual regions will show convincing shadows, atmosphere, etc., but motivation and consistency are lacking. The shots from Hogwarts Legacy show that really clearly: slice out a random 10% × 10% chunk of the frame and the lighting looks more realistic, but the overall frame loses the directional lighting driven by real things in the scene.
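To make the “slice out a random 10% × 10% chunk” idea concrete, here is a small NumPy sketch. The frame here is a random placeholder array (not an actual screenshot), and the crop position is chosen at random, just to show the kind of patch being judged in isolation.

```python
import numpy as np

# Sketch: cut a random 10% x 10% patch out of a frame. Locally such a
# patch can look plausible even when the full frame's global, directional
# lighting is inconsistent.
rng = np.random.default_rng(0)
frame = rng.random((1080, 1920, 3))  # placeholder "frame" (H, W, RGB)

h, w = frame.shape[:2]
ph, pw = h // 10, w // 10            # 10% of each dimension
y = int(rng.integers(0, h - ph))     # random top-left corner
x = int(rng.integers(0, w - pw))
patch = frame[y:y + ph, x:x + pw]

print(patch.shape)  # (108, 192, 3)
```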

        • paraphrand@lemmy.world · 3 hours ago

          I’m curious how well it handles lighting from unseen light sources that otherwise didn’t contribute as much to the scene as they should have. In other words, off-screen lights that shine into the scene but are not fully rendered by traditional means. The same goes for reflections.

          I expect a lot of nonsense to be hallucinated in those areas.

      • Rhaedas@fedia.io · 5 hours ago

        I try to avoid the overhyped and often misused term “AI,” so what’s the proper term here? Is it related to diffusion models? Something different?