This was actually the sub-headline of the article, but I thought it was the more important part of the article.
Speaking with developers and artists at studios that have agreed to support DLSS 5, including CAPCOM and Ubisoft, Insider Gaming was told that the DLSS 5 tech was revealed to them at the same time as everyone else.
“We found out at the same time as the public,” said one Ubisoft developer.
Developers at CAPCOM tell Insider Gaming that the announcement and the publisher’s involvement were particularly shocking, as CAPCOM has historically been very “anti-AI” with projects such as Resident Evil Requiem and other unannounced projects in development. Some at the publisher fear that the DLSS 5 announcement could prompt a change in the publisher’s view on generative AI and its implementation in its games.



If you put new information on top of a pixel, the pixel is changed and it is no longer the original information. Your headphones example would be more accurately applied to the visual medium as running custom color profiles, like adjusting saturation and contrast. The original information is there (music waveform or pixel color) but affected by delivery (bass boost or colorblind adjustments).
I’m not sure I understand the difference when DLSS is a toggle.
You made exactly my point in your last sentence.
The DLSS 5 effect is less like a different pair of headphones that don’t have a flat response and more like if your music player added AI generated instruments to the songs in your music library. I think that was what the previous poster was arguing (I agree with them).
Part of me wonders if it is internally consistent, or if Leon’s face changes just a little every time he pops up in a new scene in the new RE with DLSS on.
But it’s details, not entire extra characters, so it’s not literally “adding instruments”; it’s attempting to sharpen details based on prior frames’ values for various parts of the image.
I guess it depends on how you want to interpret the analogy. It looked to me to be adding facial features, details in the background, and changing lighting from the videos and images I saw.
So maybe with your parameters for the analogy, it’s more like if you went to listen to a lo-fi, basement-recorded song made with cheap gear and microphones, and your AI music player “sharpened details” to make it sound like it was recorded in higher fidelity: reshaping the tracks, adding frames, increasing the bit rate, and layering in impulse responses and other emulated effects to make the recording sound more detailed/hi-fi.
For many people this would ruin the song, but I’m sure there are people out there who would love that sort of feature.
Exactly, thank you for understanding what I’m trying to say. And DLSS isn’t a required feature; it can easily be toggled off.
Then you didn’t understand it, because that doesn’t apply to DLSS.