I play games to experience a crafted piece of artistry. DLSS 5 offers none of this, instead replacing the paintbrush held in a human hand with an AI slopping a big vat of oil over the canvas. What are we doing here?
Personally, I don’t know much about this technology. That said, I’ve heard that the original purpose of DLSS was to improve gaming performance, give you more FPS, and so on.
In that sense, many—myself included—are wondering: How is this slop generator going to improve game performance? How is giving Grace from RE9 a totally different face with makeup on going to improve my gaming experience?
All of this upscaling, when it was presented over a decade ago, was meant to give older cards a longer lease on life. Now it has morphed into the mandatory way to get a stable framerate, since developers can just rely on DLSS (and to a lesser extent FSR) to reach an acceptable framerate instead of optimizing.
As for how this will improve the gaming experience? I honestly don't see it at this point. Back when that was the original goal, sure. But with this "ChatGPT moment for graphics," I see it benefiting only corporate parasites and "shareholder value" as we wave goodbye to artistic vision and everything ends up looking like AI OnlyFans.
I think the idea is that you could use low-resolution, low-detail models that take up less RAM and are faster to process, and DLSS hallucinates a high-res image.
Not a fan of reimagined stuff of any sort; it usually doesn't land well. But from a tech standpoint, I can think of ways one could use the tech to improve performance in new games. Making a game run faster or feel more realistic is usually all about fooling the player: not drawing what can't be seen, showing hints of things that aren't really there. Hell, that's been true for movies and even the stage, right?
So my thought on how this could work is to have the actual core models be lower-poly—enough for detail, but not as high as the best we've seen—with minimal texturing. The generator then uses that as a base to form the image it layers over the top. I still don't see how that can be done that fast, but apparently we're there now.
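To make the cost argument behind that idea concrete, here's a toy sketch (my own illustration, nothing to do with how DLSS is actually implemented): render the frame at a lower internal resolution, then let an upscaler fill in the missing pixels. The "upscaler" here is plain nearest-neighbour repetition standing in for the neural network; the point is only the shading-work accounting, not the image quality.

```python
import numpy as np

TARGET_W, TARGET_H = 3840, 2160  # 4K output resolution
SCALE = 2                        # internal render at half resolution per axis

def render_low_res(w, h):
    """Stand-in for the expensive rasterizer: one 'unit of work' per shaded pixel."""
    frame = np.random.rand(h, w, 3)  # dummy frame buffer
    return frame, w * h              # frame plus its shading cost

def upscale(frame, scale):
    """Cheap stand-in for the neural upscaler: nearest-neighbour repetition."""
    return np.repeat(np.repeat(frame, scale, axis=0), scale, axis=1)

low_frame, low_cost = render_low_res(TARGET_W // SCALE, TARGET_H // SCALE)
output = upscale(low_frame, SCALE)

native_cost = TARGET_W * TARGET_H
print(output.shape)              # (2160, 3840, 3) — full 4K output
print(low_cost / native_cost)    # 0.25 — a quarter of the shading work
```

Rendering internally at half resolution per axis means shading only a quarter of the pixels, which is where the claimed performance win comes from; the open question the thread is circling is whether the generated three quarters stay faithful to the art.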
The problem I see is consistency. Whatever the AI generates for a given source won’t be consistent throughout the game. Even in the original Digital Foundry video, you can see how Grace’s face looks like a totally different person depending on the distance from which it’s viewed.
A consistent artistic style is supposed to prevent exactly that problem, but this AI is ruining it.
(Also, in the same video, you can see they were using two 5090s to run the DLSS 5 games, so…)
It makes “I fixed this ugly FEMALE character” chuds happy and that’s what matters to them.
Saves payroll on artists which means more profits. #winning
The artists are still necessary for the slop generator to have something to slop all over.