I play games to experience a crafted piece of artistry. DLSS 5 offers none of this, instead replacing the paintbrush held in a human hand with an AI slopping a big vat of oil over the canvas. What are we doing here?
I don’t see how this will stay consistent enough for art directors to sign off on it. It’s effectively just a hallucination based on your current video game frame.
Unfortunately, the latest stuff I’ve seen is all about keeping character consistency, which is basically having a fixed frame of reference for every generation. What I don’t get, not knowing much about the details, is how LLM generation could be faster than actual 3D modeling while producing more detail. Perhaps it is faster overall per frame to generate a 2D image than to track all the polys.
Not saying which is right to do; there’s lots of baggage around discussing AI stuff. I’m just wondering about the actual tech itself.
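For a rough sense of scale, here’s a back-of-envelope sketch of why a per-frame neural pass can fit in a frame budget at all. Every constant below is an illustrative assumption, not a measured figure:

    # Back-of-envelope cost of one neural pass over a 4K frame.
    # All constants are illustrative assumptions, not measured values.
    PIXELS = 3840 * 2160               # ~8.3M pixels in a 4K frame
    FLOPS_PER_PIXEL = 100_000          # assumed network cost per pixel
    TENSOR_THROUGHPUT = 200e12         # assumed sustained tensor FLOP/s

    frame_flops = PIXELS * FLOPS_PER_PIXEL
    inference_ms = frame_flops / TENSOR_THROUGHPUT * 1000
    print(f"~{inference_ms:.1f} ms per frame")   # ~4.1 ms under these assumptions

The key property is that this cost scales with pixel count, not scene complexity, which is the usual argument for why generating pixels can beat simulating them.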
It’s not. DLSS 5 takes a frame as rendered normally by your GPU and feeds it into a second $3k GPU to run the AI image transformer.
There is no performance benefit; in fact it adds a bit of latency to the process.
And Nvidia claims it will release without the need for a second 50 series card this year.
Lots of bullshit being laid out by Nvidia here.
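To make the latency point concrete, here’s a hypothetical sketch (all timings invented for illustration) of how a second-GPU pass stacks onto input-to-display delay:

    # Illustrative latency math for render -> transfer -> neural pass.
    # All timings are hypothetical.
    render_ms = 16.7    # assumed base render time (60 fps)
    transfer_ms = 1.0   # assumed copy to the second GPU
    neural_ms = 4.0     # assumed image-transformer inference time

    input_to_display = render_ms + transfer_ms + neural_ms
    print(f"~{input_to_display:.1f} ms vs {render_ms} ms baseline")

Pipelining the two GPUs can recover throughput, but each displayed frame still carries the added stages, which is exactly the latency cost described above.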
Polys are not where the expensive computation is. The bottleneck is raytracing, volumetric fog, etc., all the things that make a game look more real and natural.
I think this DLSS stuff could potentially substitute for raytracing and the other light/shadow/reflection/transparency effects that are very expensive both to program correctly and to calculate every frame.
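To put rough numbers on “expensive” (sample and bounce counts are assumptions for illustration):

    # Rough ray budget for path-traced lighting at 4K / 60 fps.
    # Sample and bounce counts are illustrative assumptions.
    PIXELS = 3840 * 2160
    SAMPLES_PER_PIXEL = 2   # assumed rays launched per pixel
    BOUNCES = 3             # assumed average path depth
    FPS = 60

    rays_per_frame = PIXELS * SAMPLES_PER_PIXEL * BOUNCES
    print(f"{rays_per_frame / 1e6:.0f}M rays per frame")          # ~50M
    print(f"{rays_per_frame * FPS / 1e9:.1f}B rays per second")   # ~3.0B

Each of those rays is an acceleration-structure traversal, not a fixed-cost pixel operation, which is why this dwarfs the polygon count itself.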
My two cents:
Lighting is the space where image gen struggles most right now. Individual areas will show convincing shadows, atmosphere, etc., but motivation and consistency are lacking. The shots from Hogwarts Legacy show that really clearly: slice out a random 10%×10% chunk of the frame and the lighting looks more realistic, but the overall frame loses the directional lighting driven by real things in the scene.
I’m curious how well it handles lighting from unseen sources that otherwise didn’t contribute as much to the scene as they should have; in other words, off-screen lights that shine into the scene but are not fully rendered by traditional means. The same goes for reflections.
I expect a lot of nonsense being hallucinated in those areas.
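A minimal sketch of why this is hard for a frame-only model: the directional term below depends on a world-space light position the network never sees if the light is off-screen (a toy illustration, not how DLSS works):

    import math

    # Toy Lambertian term: brightness depends on the angle between the
    # surface normal and the direction to the light. If the light sits
    # off-screen, nothing in the rendered pixels encodes light_pos
    # directly; a frame-only model can only guess it from indirect cues.
    def lambert(normal, surface_pos, light_pos):
        lx, ly, lz = (l - s for l, s in zip(light_pos, surface_pos))
        length = math.sqrt(lx * lx + ly * ly + lz * lz)
        ldir = (lx / length, ly / length, lz / length)
        return max(sum(n * d for n, d in zip(normal, ldir)), 0.0)

    # Same surface, light overhead vs. far off to the side (off-screen):
    print(lambert((0, 1, 0), (0, 0, 0), (0, 10, 0)))   # 1.0, fully lit
    print(lambert((0, 1, 0), (0, 0, 0), (10, 1, 0)))   # ~0.1, grazing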
It’s not an ‘LLM’ (large language model). 🤦
I try to avoid the overhyped and misused term “AI”, so what’s the proper term? Something related to diffusion models? Something different?
Neural network would be the most technically accurate given what they’ve announced so far.
There’s no information on whether it’s a diffusion or transformer architecture, though given that DLSS 4.5 introduced a transformer for lighting, my guess is that it’s the same thing being applied more widely. The technical details haven’t been released anywhere I’ve seen, so for the time being it’s being described as “neural rendering” using an unspecified neural network.
https://www.nvidia.com/en-us/geforce/news/dlss-4-5-dynamic-multi-frame-gen-6x-2nd-gen-transformer-super-res/
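For flavor, a “transformer” applied to rendering usually means attention over image patches rather than text tokens. A minimal sketch of that generic pattern (illustrative only; Nvidia hasn’t published the actual architecture):

    import torch
    import torch.nn as nn

    # Toy vision-transformer step: cut a frame into patches, embed them,
    # and let self-attention mix information across the whole image.
    # This is the generic pattern, not Nvidia's unpublished network.
    PATCH, DIM = 16, 64
    frame = torch.randn(1, 3, 128, 128)   # dummy 128x128 RGB frame

    to_patches = nn.Conv2d(3, DIM, kernel_size=PATCH, stride=PATCH)
    attn = nn.MultiheadAttention(embed_dim=DIM, num_heads=4, batch_first=True)

    tokens = to_patches(frame).flatten(2).transpose(1, 2)  # (1, 64 patches, DIM)
    mixed, _ = attn(tokens, tokens, tokens)   # each patch attends to all others
    print(mixed.shape)                        # torch.Size([1, 64, 64])

The appeal over purely local convolutions is that attention lets any patch borrow lighting cues from anywhere in the frame, which is exactly the global-consistency problem raised above.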