I think, to your second point, since this AI Instagram filter is based on a 2D picture with no understanding of the light sources or materials, this tech will ruin even the backgrounds.
One of the things I said in another post about the DLSS demo with Grace was that I wasn't just thrown by the face; I was honestly also thrown by the brightly lit background, which really undermined what that scene was trying to convey.
To me the original scene was somber: she's visiting a murder scene, and the background amplifies that somber feeling of dread, with everything darker and shadows everywhere threatening to consume the light. In the DLSS 5 scene, everything is brighter and shinier, and now that same setting isn't helping to convey the gritty atmosphere; it's like the darkness is being overcome.
That's a very good point, and now that I've seen more coverage (including this video) and have a better understanding of how the tech works (based on 2D frames plus motion vectors), I'm inclined to agree.
I wonder if it would be possible to feed this system information about object materials, 3D lighting, and spatial atmosphere so it's not just "painting over" every frame? Or whether that would even help? Because the way it works now, I'm honestly concerned that every major new game will soon start to look homogeneous.