This was actually the sub-headline of the article, but I thought it was the more important part.

Speaking with developers and artists at studios that have agreed to use DLSS 5, including CAPCOM and Ubisoft, Insider Gaming was told that the DLSS 5 tech was revealed to them at the same time as everyone else.

“We found out at the same time as the public,” said one Ubisoft developer.

Developers at CAPCOM tell Insider Gaming that the announcement and the publisher’s involvement were particularly shocking, as CAPCOM has historically been very “anti-AI” with projects such as Resident Evil Requiem and other unannounced projects in development. Some at the publisher fear that the DLSS 5 announcement could prompt a change in the publisher’s view on generative AI and its implementation in its games.

  • PlzGivHugs@sh.itjust.works · 1 day ago

    From my understanding, it may be possible to work around some of this, since the program is meant to hook into the game in a number of different ways. It’s very possible that an “importance” mask could be added as an input, for example. This wouldn’t fix everything, but would still give a way to separate game elements from environmental details.
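A rough sketch of how such a mask might work, assuming a simple per-pixel blend (every name here is illustrative, not Nvidia’s actual API):

```python
# Hypothetical sketch: an "importance" mask gating a generative per-pixel
# filter. 1.0 = gameplay-critical pixel (keep the original), 0.0 = pure
# environmental detail (let the filter act fully). Illustrative only.

def apply_filtered(original, generated, importance_mask, intensity=1.0):
    """Blend filtered pixels into the frame, suppressed where importance is high."""
    output = []
    for orig, gen, imp in zip(original, generated, importance_mask):
        weight = intensity * (1.0 - imp)  # important pixels shrink the blend weight
        output.append(orig * (1.0 - weight) + gen * weight)
    return output
```

That would let devs separate game elements from environmental detail with one extra render target, without touching the model itself.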

    That said, there’s been so much focus on how it looks. IMO, it’s completely overblown, especially when all of this needs to be manually configured on a game-by-game basis. Devs can tweak the settings to their own preferences, and make things more or less extreme.

    The part that’s much more worthy of mockery is the fact that they’re demoing a consumer product on professional-grade hardware, during a hardware shortage. They couldn’t even get the demo working on a high-end gaming PC, and they think this tech is worth advertising? That is the funny part of all this.

    • Ech@lemmy.ca · 1 day ago

      That said, there’s been so much focus on how it looks. IMO, it’s completely overblown, especially when all of this needs to be manually configured on a game-by-game basis. Devs can tweak the settings to their own preferences, and make things more or less extreme.

      It’s wild that every defense of this bs is “Just have devs spend even more time finetuning for this.” Yes, let’s double (or more) the workload of artists and programmers that are already overworked and crunched beyond reason, all for a “feature” that looks like garbage in its showcase demo and that’s so resource intensive that very few users will be able to utilize it, if they even want to.

      • PlzGivHugs@sh.itjust.works · 1 day ago

        It’s more an argument against the “artist’s intent” and “disrupting gameplay” points.

        Yes, let’s double (or more) the workload of artists and programmers

        Do you have any evidence for this? Given what’s been shown, this seems relatively easy to implement on the game dev side.

        • Quetzalcutlass@lemmy.world · 1 day ago

          Even if implementing it turns out to be trivial, testing art assets for quality and consistency will be a nightmare. Especially if the underlying generative AI isn’t deterministic.

          • Katana314@lemmy.world · 20 hours ago

            Even if implementing it is trivial, it’s also still “one more thing”. Just like optimizing for the Steam Deck, considering features that might not be on the lowest-tier console release, accessibility requirements, and dozens of other checklist items that might go further and further down the list. Worse still if DLSS ends up interfering with those other checklist items after they’ve already been verified.

            • PlzGivHugs@sh.itjust.works · 20 hours ago

              Yes, but what the tech costs to implement has a huge impact on what it is, and how (or if) it’s ever implemented. So far as I can tell from my own research, the original commenter was lying, which makes sense. If it actually increased dev time that much, even Nvidia wouldn’t be stupid enough to try to sell it. “AI graphics costs $10 million to implement, and has negligible impact on sales” would not look good for their bubble.

          • PlzGivHugs@sh.itjust.works · 1 day ago

            Yes, depending on implementation details. I mean, it’s never going to be completely consistent, but I don’t expect these companies to mind a little brand damage if they get a short-term boost in investment.

            I’m more thinking that as it stands, the hardware requirements make it DOA for users. They’re saying they’ll improve it, although I have my doubts. That said, even if no one can run it, it may be popular among publishers for screenshots and marketing. On the other hand, if it does actually double dev costs, then it’ll be DOA even for corporate use.

    • nightlily@leminal.space · 22 hours ago

      The inputs, from everything Nvidia has said, are simply the final pixel colour values and motion vector information. It’s meant to sit in the same post-processing stack as the upscaler. It’s effectively a screen-space post-processing filter over the final image. Nvidia have said that the artist controls are masking (blocking certain areas from it), intensity (so a slider value), and some kind of colour re-grading (since it destroys the original grading). It’s extremely limited.
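Taken together, the pass as described would look something like this (a rough sketch under the assumptions above; every name is illustrative, none of this is Nvidia’s API):

```python
# Hypothetical screen-space pass sitting after the upscaler. Inputs: only
# final pixel colours and per-pixel motion vectors. Artist controls: a mask,
# an intensity slider, and a colour re-grade. Illustrative only.

def generative_filter(px, mv):
    # Stand-in for the neural pass; here it just brightens the pixel slightly.
    return min(px + 0.1, 1.0)

def post_process(frame, motion_vectors, mask=None, intensity=1.0, regrade=None):
    out = []
    for i, px in enumerate(frame):
        if mask is not None and mask[i]:       # masking: block the effect here
            out.append(px)
            continue
        filtered = generative_filter(px, motion_vectors[i])
        px = px + intensity * (filtered - px)  # intensity: a simple slider
        out.append(regrade(px) if regrade else px)  # restore the original grading
    return out
```

Note there is nowhere in this interface to feed in scene or gameplay information, which is what makes the control set so limited.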

      • cheat700000007@lemmy.world · 21 hours ago

        And Nvidia are full of shit, judging from how it clearly changes geometry in the demos, women’s faces in particular.

      • PlzGivHugs@sh.itjust.works · 20 hours ago

        The inputs, from everything Nvidia has said, are simply the final pixel colour values and motion vector information.

        If it is the same as DLSS 4 Super Resolution, it seems to use motion vectors, colour buffers, depth buffers, and camera information like exposure. That said, this might change, as, like I said, they’re showing off something they haven’t even got running on the target hardware. It’s clearly not even close to being a finished product.
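Grouped together, that Super Resolution-style input set would look roughly like this (field names are mine, purely illustrative, not from any Nvidia SDK):

```python
# Hypothetical grouping of the per-frame inputs listed above for a
# DLSS-Super-Resolution-style upscaler. Illustrative only.
from dataclasses import dataclass
from typing import Sequence, Tuple

@dataclass
class UpscalerInputs:
    colour: Sequence[float]                        # final lit colour buffer, per pixel
    motion_vectors: Sequence[Tuple[float, float]]  # screen-space motion, per pixel
    depth: Sequence[float]                         # depth buffer, per pixel
    exposure: float                                # camera exposure for tone mapping
```

The point of the sketch is the contrast: depth and exposure give an upscaler more to work with than the colour-plus-motion-vectors interface described for the new filter.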