• brucethemoose@lemmy.world
      5 months ago

      What about ‘edge-enhancing’ NNs like NNEDI3? Or GANs that absolutely ‘paint in’ inferred details from their training data? How big does a model have to get before it counts as ‘generative’?

      What about a deinterlacer network that’s been trained on other interlaced footage?

      My point is that there is an infinitely fine gradient between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models). Even now, there’s an array of modern ML-based ‘editors’ that are questionably generative, and most people probably don’t even know they’re working in the background.
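      To make the ‘non-generative’ end of that gradient concrete, here’s a minimal sketch of bilinear upscaling in plain Python (the function name and shapes are my own; no library assumed). Every output pixel is just a weighted average of four input pixels, so nothing is inferred from training data — contrast that with an NN upscaler, where the weights encode learned priors:

```python
def bilinear_upscale(img, new_h, new_w):
    """Upscale a 2D grayscale image (list of lists) via bilinear interpolation."""
    old_h, old_w = len(img), len(img[0])
    out = []
    for i in range(new_h):
        # Map the output row back to a (possibly fractional) source row.
        y = i * (old_h - 1) / (new_h - 1) if new_h > 1 else 0
        y0 = int(y)
        y1 = min(y0 + 1, old_h - 1)
        fy = y - y0
        row = []
        for j in range(new_w):
            x = j * (old_w - 1) / (new_w - 1) if new_w > 1 else 0
            x0 = int(x)
            x1 = min(x0 + 1, old_w - 1)
            fx = x - x0
            # Weighted average of the four neighbouring source pixels —
            # a pure function of the input, with no learned parameters.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 2x2 gradient upscaled to 3x3: the new midpoints are exact averages.
print(bilinear_upscale([[0, 2], [2, 4]], 3, 3))
```

      A ‘generative’ upscaler replaces that fixed averaging kernel with a trained network, which is exactly why the boundary is so fuzzy.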