AI-generated content, which now includes incredibly convincing videos of people, will grow exponentially over the next weeks, months, and years.

At some point, the majority of the content you see will be fake, and any usefulness or connection to humans will be lost.

Even information that you might have previously been able to confirm from a trusted source can (and will) be manipulated in some way, making verification impossible.

This lack of verification, combined with the speed at which fake content can now be generated, will make the flood of fakes impossible to defend against.

Even the worlds of art and communication have been tainted, offering no connection to real people in this digital hellscape.

With that in mind, when will the internet become so untrustworthy, “soulless”, and useless to you that it crosses the tipping point?

EDIT: Ok, holy fuck. There’s actually a term for what I’m describing: “The Dead Internet Theory”

  • MagicShel@lemmy.zip
    22 hours ago

    And once we get to the point where AI content can be generated on the fly for each doom-scrolling user based on their behaviour on the platform, it’s game over.

    Only if people want what AI is making. I’ve been using LLMs for about 5 years, and I’ve been integrating them into a project for about 3. I don’t think anyone is going to find AI-generated slop entertaining. I have played with generating text, images, and music, and once you get over the novelty it wears thin really quickly.

    If you fill someone’s feed with that stuff, they are going to leave over time. But honestly, AI isn’t even that concerning to me. I’ve been thinking about this social trust graph tool for a decade; social media has been overwhelmingly garbage for at least that long.

    I’m using tools to blacklist AI sites in search, but the lists aren’t keeping up, and they don’t extend beyond search.

    Crowdsource that. Plug a blocklist into a Pi-hole and open it up for contribution.
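    Something like this could work as a starting point. Rough sketch only, assuming a Pi-hole v5-style setup where adlists live in /etc/pihole/gravity.db; the list URL is a placeholder for whatever a community actually maintains:

    ```python
    #!/usr/bin/env python3
    """Register a crowd-sourced blocklist of AI-content domains with Pi-hole.

    Sketch only: assumes a Pi-hole v5-style install (adlists stored in
    /etc/pihole/gravity.db) and a hypothetical community-maintained list URL.
    Run as root on the Pi-hole host.
    """
    import sqlite3
    import subprocess

    GRAVITY_DB = "/etc/pihole/gravity.db"  # default location on Pi-hole v5
    BLOCKLIST_URL = "https://example.org/ai-slop-domains.txt"  # placeholder list

    def add_adlist(url: str, comment: str = "crowd-sourced AI-site blocklist") -> None:
        """Add the list URL to Pi-hole's adlist table (no-op if already present)."""
        conn = sqlite3.connect(GRAVITY_DB)
        with conn:
            conn.execute(
                "INSERT OR IGNORE INTO adlist (address, enabled, comment) VALUES (?, 1, ?)",
                (url, comment),
            )
        conn.close()

    def rebuild_gravity() -> None:
        """Have Pi-hole re-download all adlists and rebuild its block database."""
        subprocess.run(["pihole", "-g"], check=True)

    if __name__ == "__main__":
        add_adlist(BLOCKLIST_URL)
        rebuild_gravity()
    ```

    From there, the crowdsourcing part is just hosting the list somewhere people can submit entries (a public git repo works) and re-running the update on a schedule.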

    There will come a point, probably very soon, where companies will figure out how to deliver ads and AI content as if they were part of the original source content, which will make them impossible to block or filter out.

    If they do that, they also won’t be able to track those ads, and thus won’t get paid for them.

    The internet is largely self-healing. I mean, I might have preferred it 35 years ago, and I’m not saying things are great, but you sound like you’re spiraling a bit and I just want you to know things will be alright. I’m way more worried about Trump than AI on the internet.

    • bampop@lemmy.world
      8 hours ago

      One development we may see imminently is the infiltration of any areas of the internet not currently dominated by AI slop. When AI systems are generally able to successfully mimic real users, the next step would be to flood anything like Lemmy with fake users, whose purpose is mainly to overwhelm the system while avoiding detection. At the same time, they could deploy more obvious AI bots. Any crowdsourced attempt at identifying AI may find that many of its contributors are infiltration bots which gain trust by identifying and removing the obvious bots. In this way, any attempt at creating a space not dominated by AI and controlled disinformation can be undermined.