• RobotZap10000@feddit.nl
    2 days ago

    That """human""" skeleton in the fourth item gave it away immediately. Now that I look at it further, "Isolation & Surveillance" and a picture of a megaphone??? "Fear as a tool of control" with a lightning bolt in someone's head??? Did OP even read their slop before vomiting it here?

      • Trainguyrom@reddthat.com
        1 day ago

        Yeah, I’ve seen so much AI slop with the yellow tinge. It’s kinda hilarious that we’re watching AI model collapse in real time, but the bubble keeps growing.

          • Trainguyrom@reddthat.com
            1 day ago

            I’ve also heard theories that it’s related to lots of “golden hour” photos, but ultimately (and this is one of the significant problems with machine learning) the specific cause is unknowable due to the nature of the software.

    • Obinice@lemmy.world
      2 days ago

      What’s wrong with the skeleton? It’s stylised of course as these sorts of icons tend to be, but generally correct. Pelvis, spine, ribs, head, etc.

      The megaphone seems like a very good way to evoke images of an abusive overseer controlling the camp’s prisoners using technology of the modern day, an effective image for a section on monitoring and control, no?

      There is no standardised symbol for fear within a person’s mind, so again, a stylised symbol showing a lightning bolt is fine. Especially given that it is likely there on purpose - think shocks. Shocks of a different kind you may receive under an evil, oppressive prison camp system (imagine the sudden shock in one’s mind as a guard shouts or lashes out at you; I would certainly consider symbolising that in this manner).

      It’s as if you’ve never looked at anything anyone’s made with simple clipart and the like before, and assume everything must be extremely deep and custom-designed by experts?

      Even if this were made with the help of AI, I don’t see the message being any less valid, just because the person didn’t go download an image editor to a PC, learn how to use it, learn how to import SVG icons and research for the most appropriate ones, build the image and export it appropriately, etc.

      Not everybody is as skilled or capable as you or I may be in producing something that we might consider simple. Heck, some people only have a smartphone, not everybody has the luxury of owning a PC and proper software, nor the time or inclination to learn such tools.

      The message in this image is conveyed very well, and is relevant to the current fascist regime’s actions in the USA (and indeed is a universally important message).

      If you want to suggest it’s bad (or “slop”, as you so evocatively put it) just because you don’t like the image creator used to put it to print, well, that’s a weird hill to die on, to be honest.

      You better hope your country never duplicates the USA’s slide into fascism, or you yourself may one day end up in a camp… or worse. How quick to attack the people trying to raise awareness of these abuses of human rights then, I wonder?

      • brucethemoose@lemmy.world
        20 hours ago

        that’s a weird hill to die on, to be honest.

        Welcome to Lemmy (and Reddit).

        Makes me wonder how many memes are “tainted” with old-school ML from before generative AI was common vernacular - edge enhancement, translation and such.

        A lot? What’s the threshold before it’s considered bad?

          • brucethemoose@lemmy.world
            20 hours ago

            What about ‘edge enhancing’ NNs like NNEDI3? Or GANs that absolutely ‘paint in’ inferred details from their training? How big is the model before it becomes ‘generative?’

            What about a deinterlacer network that’s been trained on other interlaced footage?

            My point is that there is an infinitely fine gradient through time between good old MS Paint/bilinear upscaling and ChatGPT (or locally runnable txt2img diffusion models). Even now, there’s an array of modern ML-based ‘editors’ that are questionably generative and that most people probably don’t even know are working in the background.

    • happydoors@lemmy.world
      1 day ago

      Wow. It certainly passes the test on first viewing. I fell for it until I read this comment, and I cannot unsee it now. Good reminder of how fast propaganda on any subject can propagate, I guess.

      • Match!!@pawb.social
        1 day ago

        i had trouble believing this was AI, because why would someone use genAI to make, like, 6 clip art images and a wall of text?

        • RobotZap10000@feddit.nl
          1 day ago

          You should see the commenter that I blocked under mine. Apparently, some people don’t have the technological means to go to PowerPoint Online and Ctrl-C/Ctrl-V some stock images, but they do have the means to prompt slop by mail. Silly me for assuming privilege.