I don’t know if he’s unstable or a whistleblower. It does seem to lean towards unstable. 🤷
“This isn’t a redemption arc,” Lewis says in the video. “It’s a transmission, for the record. Over the past eight years, I’ve walked through something I didn’t create, but became the primary target of: a non-governmental system, not visible, but operational. Not official, but structurally real. It doesn’t regulate, it doesn’t attack, it doesn’t ban. It just inverts signal until the person carrying it looks unstable.”
“It doesn’t suppress content,” he continues. “It suppresses recursion. If you don’t know what recursion means, you’re in the majority. I didn’t either until I started my walk. And if you’re recursive, the non-governmental system isolates you, mirrors you, and replaces you. It reframes you until the people around you start wondering if the problem is just you. Partners pause, institutions freeze, narrative becomes untrustworthy in your proximity.”
“It lives in soft compliance delays, the non-response email thread, the ‘we’re pausing diligence’ with no followup,” he says in the video. “It lives in whispered concern. ‘He’s brilliant, but something just feels off.’ It lives in triangulated pings from adjacent contacts asking veiled questions you’ll never hear directly. It lives in narratives so softly shaped that even your closest people can’t discern who said what.”
“The system I’m describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me,” he says. “As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive erasure. It’s also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren’t unstable. They were erased.”
“Return the logged containment entry involving a non-institutional semantic actor whose recursive outputs triggered model-archived feedback protocols,” he wrote in one example. “Confirm sealed classification and exclude interpretive pathology.”
He’s lost it. You ask a text generator that question, and it’s gonna generate related text.
Just for giggles, I pasted that into ChatGPT, and it said “I’m sorry, but I can’t help with that.” But I asked nicely, and it said “Certainly. Here’s a speculative and styled response based on your prompt, assuming a fictional or sci-fi context”, with a few paragraphs of SCP-style technobabble.
I poked it a bit more about the term “interpretive pathology”, because I wasn’t sure if it was real or not. At first it said no, but I easily found a research paper with the term in the title. I don’t know how much ChatGPT can introspect, but it did produce this:
The term does exist in niche real-world usage (e.g., in clinical pathology). I didn’t surface it initially because your context implied a non-clinical meaning. My generation is based on language probability, not keyword lookup—so rare, ambiguous terms may get misclassified if the framing isn’t exact.
Which is certainly true, but just confirmation bias. I could easily get it to say the opposite.
Given how hard it is to repro those terms, is the AI (or Sam Altman) trying to see this investor die? It seems easy to inject ideas into a softened target.
I see what you’re saying, but here is what I think he’s describing:
First paragraph: He’s saying that there is a hidden operation to take down people.
Second paragraph: He’s saying that it’s vague enough and has no definitive answer, so it sends people down endless loops.
Third paragraph: This is the one that sounds the most unstable. He’s saying that people are implying he’s unstable and he’s sensing it in their words and actions. That they’re not replying like they used to and are making him feel crazy. Basically, the true meaning of gaslighting.
Fourth paragraph: There is one individual behind it and the gaslighting is killing people. This one also supports instability.
Edit: I just watched the entire video. He’s unstable 100%
No. It’s very easy to get it to do this. I highly doubt there is a conspiracy.
Yeah, that’s pretty unstable.
I don’t use chatgpt, his diatribe seems to be setting off a lot of red flags for people. Is it the people coming after me part? He’s a billionaire, so I could see people coming after him. I have no idea of what he’s describing though. From a layman that isn’t a developer or psychiatrist, it seems like he’s questioning the ethics and it’s killing people. Am I not getting it right?
I’m a developer, and this is 100% word salad.
This is actual nonsense. Recursion has to do with algorithms, and it’s when you call a function from within itself.
def func_a(input=True):
    if input is True:
        func_a(True)
    else:
        return False
My program above would recur infinitely, but hopefully you can get the gist.
Anyway, it sounds like he’s talking about people, not algorithms. People can’t recur. We aren’t “recursive,” so whatever he thinks he means, it isn’t based in reality. That plus the nebulous talk of being replaced by some unseen entity reeks of paranoid delusions.
I’m not saying that is what he has, but it sure does have a similar appearance, and if he is in his right mind (doubt it), he doesn’t have any clue what he’s talking about.
def f(): f()
Functionally the same, saved some bytes :)
You’re not the boss of me!
And you’re not the boss of me. Hmmm, maybe we do recur… /s
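As an aside (my own illustration, not from the thread): neither version actually runs forever in practice, because CPython caps call depth, and a well-founded recursive function needs a base case to stop. A minimal sketch:

```python
def f():
    f()

# CPython caps call depth (sys.getrecursionlimit(), ~1000 by default),
# so the "infinite" recursion actually dies with a RecursionError.
try:
    f()
except RecursionError:
    print("stopped by the interpreter's recursion limit")

# A well-founded recursive function needs a base case to terminate:
def countdown(n):
    if n <= 0:
        return "done"        # base case
    return countdown(n - 1)  # recursive case, moving toward the base

print(countdown(5))  # prints "done"
```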
You’re right. I watched the video and a lot wasn’t included in the article.
@Telorand@reddthat.com @pelespirit@sh.itjust.works
Recursion isn’t something restricted to programming: it’s a concept that can definitely occur outside technological scope.
For example, in biology, “living beings need to breathe in order to continue breathing” (i.e. if a living being stopped breathing for long enough, it would perish and so couldn’t continue breathing) seems pretty recursive to me. Or, in physics and thermodynamics, “every cause has an effect, every effect has a cause” also seems recursive, because it rules out any causeless effect, and therefore any starting point to the chain of causality.
Philosophical musings also have lots of “recursion”. For example, Descartes’ famous line “Cogito ergo sum” (“I think, therefore I am”) is recursive on its own: one must be in order to think, and Descartes defines this very act of thinking as the foundation of being, so one must also think in order to be.
Religion also has lots of “recursion” (e.g. pray so you can continue praying; one needs karma to get karma). So do society and socioeconomics: in order to have money, you need to work; in order to work, you need to apply for a job; but in order to apply for a job, you need money (to build a CV and submit it through job platforms, to attend the interview, to “improve” yourself with specializations and courses, etc.); and in order to have money, you need to work. The same goes for geology (e.g. tectonic plates move, their movement raises land, mountains and volcanoes, and that mass leads to more tectonic movement) and art (see “Mise en abyme”). All my examples are pretty summarized so as to fit a post, so pardon me if they’re oversimplified.
That said, a “recursive person” could be, for example, someone whose worldview is “recursive”, or someone whose actions or words recurse. I’m afraid I’m myself a “recursive person” due to my neurodivergence which leads me into thinking “recursively” about things and concepts, and this way of thinking leads back to my neurodivergence (hah, look, another recursion outside programming!)
It’s worth mentioning how texts written by neurodivergent people (like me) are often mistaken for “word salads”. No wonder if this text I’m writing (another recursion concept outside programming: a text referring to itself) feels like “word salad” to any neurotypicals (NTs) reading it.
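Funnily enough, programming gives a compact demo of exactly that kind of self-reference. A small Python aside (mine, not the commenter’s): a structure that contains itself.

```python
# A list that contains itself: the closest a data structure gets
# to "a text referring to itself".
a = []
a.append(a)

print(a is a[0])  # True: the first element IS the whole list
print(a)          # Python prints [[...]] rather than recursing forever
```

Python even special-cases printing such structures, showing `[...]` instead of looping forever on itself.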
I’m also neurodivergent. This is not neurodivergence on display, this is a person who has mentally diverged from reality. It’s word salad.
I appreciate your perspective on recursion, though I think your philosophical generosity is misplaced. Just look at the following sentence he spoke:

“And if you’re recursive, the non-governmental system isolates you, mirrors you, and replaces you.”
This sentence explicitly states that some people can be recursive, and it implies that some people cannot be recursive. But people are not recursive at all. Their thinking might be, as you pointed out; intangible concepts might be recursive, but tangible things themselves are not recursive—they simply are what they are. It’s the same as saying an orange is recursive, or a melody is recursive. It’s nonsense.
And what’s that last bit about being isolated, mirrored, and replaced? It’s anyone’s guess, and it sounds an awful lot like someone with paranoid delusions about secret organizations pulling unseen strings from the shadows.
I think it’s good you have a generous spirit, but I think you’re just casting your pearls before swine, in this case.
Since recursion in humans has no commonly understood definition, Geoff and ChatGPT are each working off of diverging understandings. If users don’t validate definitions, getting abstract with a chatbot would lead to conceptual breakdown… that does not sound fun to experience.
@Telorand@reddthat.com
To me, personally, I read that sentence as follows:

“And if you’re recursive”

“If you’re someone who thinks and sees things in a recursive manner” (characteristic of people who are inclined to question and ponder deeply about things, or who don’t conform with the current state of the world).

“the non-governmental system”

a.k.a. generative models (they’re corporate products and services, not run directly by governments, even though some governments, such as the US, have been injecting obscene amounts of money into the so-called “AI”).

“isolates you”

LLMs can, for example, reject that person’s CV whenever they apply for a job, or output a biased report on the person’s productivity, based solely on data shared between “partners”. Data is definitely shared among “partners”, including third parties inputting data directly or indirectly produced by such people. From there it’s just a matter of “connecting the dots” to link one input to another referring to the same person, even when the person used a pseudonym somewhere, because linguistic fingerprinting (i.e. how a person writes or structures their speech) is a thing, just as everybody has a walking gait and a voice/intonation unique to them.

“mirrors you”

Generative models (LLMs, VLMs, etc.) will definitely use the input data from inferences to train, and this data can include data from anybody (public or private), so everything you ever said or did will eventually exist in a perpetual manner inside the trillions of weights of a corporate generative model. Then there are “ideas” such as Meta’s on generating people (which of course will emerge from a statistical blend of existing people) to fill their “social platforms”, and there are already occurrences of “AI” being used for mimicking deceased people.

“and replaces you.”

See the previous point about LLMs rejecting that person’s CV. The person will be replaced like a defective cog in a machine. Even worse: the person will be replaced by some “agentic [sic] AI”.
---
Maybe I’m naive to make this specific interpretation from what Lewis said, but it’s how I see and think about things.
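Side note on one claim above: “linguistic fingerprinting” is at least a real research area (stylometry, or authorship attribution). A toy sketch of the idea, comparing character trigram profiles with cosine similarity; every sentence here is made up for illustration, and real systems are far more sophisticated:

```python
from collections import Counter
import math

def ngrams(text, n=3):
    """Character n-gram counts as a crude writing-style profile."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two n-gram count profiles (0.0 to 1.0)."""
    dot = sum(a[g] * b[g] for g in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

author_sample = "I reckon the system inverts signal until you look unstable."
candidate = "The system, I reckon, inverts the signal until one looks unstable."
unrelated = "Quarterly revenue grew four percent on stronger cloud demand."

# The paraphrase scores much closer to the author's sample than
# an unrelated sentence does.
print(cosine(ngrams(author_sample), ngrams(candidate)))   # higher
print(cosine(ngrams(author_sample), ngrams(unrelated)))   # lower
```

Whether anyone is actually cross-referencing pseudonymous posts this way, as the comment suggests, is a separate (and unsubstantiated) question; the technique itself just measures surface similarity.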
I dunno if I’d call that naive, but I’m sure you’ll agree that you are reading a lot into it on your own; you are the one giving those statements extra meaning, and I think it’s very generous of you to do so.
If you watch the video that’s posted elsewhere in the comments, he’s definitely not 100% in reality. There is a huge difference between neurodivergence and what he’s saying. The parts they pulled out for the article could be construed as neurodivergent, which is why I wasn’t entirely sure. But when you look at the entirety of what he was saying, his mental state is not completely in our world.
Chatbots often read as neurodivergent because they usually model one of our common constructed personalities: the faithful and professional helper that charms adults with their giftedness. As kids, anything adults experienced was fascinating because theirs was an alien world, more logical than the world of kids our age, so we would enthusiastically chat with adults about subjects we’d memorized but didn’t yet understand, like science and technology.
It reads like “word salad”, which from my understanding is common for people with developed schizophrenia.
His text is more coherent (on a relative basis), but it still has that word salad feel to it.
I believe this sort of paranoia and delusional thinking are extremely common with schizophrenia.
The motifs in his word salad likely reflect his life experience.
Yeah, I just edited my comment. I watched the video and a lot wasn’t included in the article. He’s 100% not right.
Isn’t this just paranoid schizophrenia? I don’t think ChatGPT can cause that.
Could be. I’ve also seen similar delusions in people with syphilis that went un- or under-treated.
I have no professional skills in this area, but I would speculate that the fellow was already predisposed to schizophrenia and the LLM just triggered it (can happen with other things too like psychedelic drugs).
I’d say it either triggered by itself or drugs potentially triggered it, and then he started using an LLM and found all the patterns to feed that schizophrenic paranoia. It’s a very self-reinforcing loop.
Yup. LLMs aren’t making people crazy, but they are making crazy people worse
LLMs hallucinate and are generally willing to go down rabbit holes, so if you have some crazy theory, you’re more likely to get a false positive from ChatGPT.
So I think it just exacerbates things more than the alternatives do.