• Daemon Silverstein@calckey.world · 11 hours ago (edited)

    @return2ozma@lemmy.world !technology@lemmy.world

    Should I worry about the fact that I can sort of make sense of what this “Geoff Lewis” person is trying to say?

    Because, to me, it’s very clear: they’re referring to something that was built (the LLMs) which is segregating people, especially those who don’t conform to a dystopian world.

    Isn’t that what is happening right now in the world? “Dead Internet Theory” has never felt so real: online content keeps sowing seeds of doubt about whether it’s AI-generated or not, users constantly need to prove they’re “not a bot” and, even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they’re increasingly required to show their faces and IDs.

    The dystopia was already emerging way before GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean onto Earth, an ocean in which we’ve become used to drowning ever since.

    Now, something that may sound like a “conspiracy theory”: what’s the real purpose behind LLMs? No: OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn’t simply launch their products, each of which cost them obscene amounts of money and resources, for free (as in “free beer”) to the public out of the goodness of their hearts. Similarly, venture capital firms and governments wouldn’t simply hand over obscene amounts of money (much of it public money from taxpayers) with no prospect of profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 a month for its top-tier plan isn’t enough to cover its costs, yet it continues to offer LLMs for cheap or “free”).

    So there’s definitely something that isn’t being told: the cost of plugging the whole world into LLMs and other Generative Models. Yes, you read it right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov-chain algorithms offline, directly or indirectly: resumes are being filtered by LLMs, workers’ performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance-camera footage is being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)…

    Generative Models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but costs we are paying nonetheless, costs that aren’t being disclosed to us. We can point out some of them (lack of privacy, personal data being sold and/or stolen), but those are just the tip of an iceberg: one we can already see, yet whose consequences we can’t fully comprehend.

    Curious how pondering this is deemed “delusional”, yet it’s perfectly “normal” to accept an increasingly dystopian world while refusing to denounce the elephant in the room.

    • tjsauce@lemmy.world · 9 hours ago

      You might be reading a lot into vague, highly conceptual, highly abstract language, but your conclusion is worth brainstorming about.

      Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called “distrust,” void of accountability on his end.

      “Why are people acting this way towards me? I know they can’t possibly distrust me without being manipulated!”

      No wonder AI can replace middle-management…

      • Senal@programming.dev · 7 hours ago (edited)

        Not that I disagree with you, but coherence is one of those things that is highly subjective and context-dependent.

        A non-science-inclined person reading most scientific papers would think they were incoherent.

        Not because they couldn’t be written in a way more comprehensible to a lay reader, but because that’s not the target audience.

        The audience that is targeted will have a lot of the same shared context/knowledge and thus would be able to decipher the content.

        It could well be that he’s speaking with context, knowledge and language/phrasing that isn’t in the normal lexicon.

        I don’t think that’s what’s happening here, but it’s not impossible.