
If software was your kid.
Credit: Scribbly G
Five Nights at Altman’s
The AI touched that lava lamp
Reminds me of that “have you ever had a dream” kid.
If you have ever read the “thought” process on some of the reasoning models, you can catch them going into loops of circular reasoning, just slowly burning tokens. I’m not even sure this isn’t by design.
This kind of stuff happens on any model you train from scratch even before training for multi step reasoning. It seems to happen more when there’s not enough data in the training set, but it’s not an intentional add. Output length is a whole deal.
I dunno, let’s waste some water
They are trying to get rid of us by wasting our resources.
So, it’s Nestlé behind things again.
Why would it be by design? What does that even mean in this context?
You have to pay for tokens on many of the “AI” tools that you do not run on your own computer.
Hmm, interesting theory. However:
- We know this is an issue with language models; it happens all the time with weaker ones, so there is an alternative explanation.
- LLMs are running at a loss right now; the company would lose more money than they gain from you, so there is no motive.
it was proposed less as a hypothesis about reality than as virtue signalling (in the original sense)
Of course there’s a technical reason for it, but they have incentive to try and sell even a shitty product.
I don’t think this really addresses my second point.
Don’t they charge by input tokens? E.g. your prompt, not the output.
I think many of them do, but there are also many “AI” tools that will automatically add a ton of stuff to try and make it spit out more intelligent responses, or even re-prompt the tool multiple times to try and make sure it’s not handing back hallucinations.
It really adds up in their attempt to make fancy autocomplete seem “intelligent”.
Yes, reasoning models… but I don’t think they would charge for that… that would be insane, but AI executives are insane, so who the fuck knows.
Compute costs?
I’m pretty sure training is purely result oriented so anything that works goes
Exactly why this shit is and will never be trustworthy.
Nah, too cold. It stopped moving and the computer can’t generate any more random numbers to pick from the LLM’s weighted suggestions. Similarly, LLMs have a sampling setting called “temperature”: too cold and the output is repetitive, unimaginative and overly copies the input (like sentences written by taking the first autocomplete suggestion every time); too hot and it is chaos: 98% nonsense, 1% repeat of input, 1% something useful.
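If you want to see what that temperature knob does, here’s a rough toy sketch in Python (the vocabulary and scores are made up for illustration, not any real model’s API):

```python
import numpy as np

def sample_next_token(logits, temperature, rng):
    """Pick a token index from raw model scores, scaled by temperature."""
    if temperature <= 0:
        return int(np.argmax(logits))      # frozen: always the single top pick
    scaled = logits / temperature          # low temp sharpens, high temp flattens
    probs = np.exp(scaled - scaled.max())  # softmax, shifted for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

# Made-up vocabulary and scores, just to show the knob's effect.
vocab = ["or", "simply", "the", "banana"]
logits = np.array([2.0, 1.5, 1.0, -1.0])
rng = np.random.default_rng(0)

for t in (0.0, 0.7, 5.0):
    picks = [vocab[sample_next_token(logits, t, rng)] for _ in range(8)]
    print(f"temperature={t}: {' '.join(picks)}")
# 0.0 prints "or" eight times; 5.0 is close to picking words at random.
```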

Attack of the logic gates.
What happened here?
LLMs work by picking the next word* as the most likely candidate word given its training and the context. Sometimes it gets into a situation where the model’s view of “context” doesn’t change when the word is picked, so the next word is just the same. Then the same thing happens again and around we go. There are fail-safe mechanisms to try and prevent it but they don’t work perfectly.
*Token
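For anyone who wants to watch this failure mode without a real model, here’s a toy greedy decoder (the “model” is just a hard-coded lookup table, purely illustrative), including the kind of crude fail-safe mentioned above:

```python
def fake_model(context):
    """Stand-in for an LLM: returns the 'most likely' next token for a context.
    Once it has said 'or', its favourite continuation of '... or' is 'or' again."""
    table = {"either": "this", "this": "or", "or": "or"}   # the trap
    last = context[-1] if context else "either"
    return table.get(last, "or")

def generate(prompt, max_tokens=20, repeat_limit=4):
    context = prompt.split()
    out = []
    for _ in range(max_tokens):
        tok = fake_model(context)
        out.append(tok)
        context.append(tok)   # the new token becomes part of the context
        # crude fail-safe: bail out if the same token keeps coming back
        if len(out) >= repeat_limit and len(set(out[-repeat_limit:])) == 1:
            out.append("[loop detected, stopping]")
            break
    return " ".join(out)

print(generate("either"))  # -> "this or or or or [loop detected, stopping]"
```

Real decoders use smarter tricks than this (repetition penalties, sampling instead of always taking the top pick), but the basic shape of the failure is the same.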
This happened to me a lot when I tried to run big models with small context windows. It would effectively run out of memory, so each new token wouldn’t actually get added to the context, and it would just get stuck in an infinite loop repeating the previous token. It is possible that there was a memory issue on Google’s end.
There is something wrong if it’s not discarding old context to make room for new
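Roughly the difference, as a toy sketch (purely hypothetical bookkeeping, not how Google actually manages context): a window that slides keeps making progress, while one that silently drops new tokens once it’s full shows the model the exact same context every step, so it emits the exact same token every step:

```python
MAX_CONTEXT = 2  # pretend the model can only "see" the last 2 tokens

def predict(context):
    """Stand-in model: the 'most likely' next word given the last visible word."""
    continuations = {"tabs": "or", "or": "spaces,", "spaces,": "obviously."}
    return continuations.get(context[-1], "<end>")

def run(sliding_window, steps=6):
    context = ["Use", "tabs"]   # the window is already full
    output = []
    for _ in range(steps):
        tok = predict(context[-MAX_CONTEXT:])
        if tok == "<end>":
            break
        output.append(tok)
        if sliding_window:
            context = (context + [tok])[-MAX_CONTEXT:]  # oldest token falls out
        elif len(context) < MAX_CONTEXT:
            context.append(tok)   # buggy: window full, new token silently lost
    return " ".join(output)

print(run(sliding_window=True))   # -> "or spaces, obviously."
print(run(sliding_window=False))  # -> "or or or or or or"
```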
That was the answer I was looking for. So it’s similar to the “seahorse” emoji case, but this time, at some point he just glitched so that the most likely next word for this sentence is “or”, and after adding the “or” it’s also “or”, and after adding the next one it’s also “or”, and after an 11th one… you may just as we’ll commit. Since that’s the same context as with 10.
Thanks!
He?
This is not a person and does not have a gender.
Chill dude. It’s a grammatical/translation error, not an ideological declaration. It’s an especially common mistake if your native language has grammatical gender. Everything has a “gender” in mine; “spoon” is a “she”, for example, but I’m not proposing to anyone soon. Not all hills are worth nitpicking on.
This one is. People need to stop anthropomorphizing AI. It’s a piece of software.
I am chill, you shouldn’t assume emotion from text.
Nah, watch me anthropomorphise AI:
- ChatGPT is a pedophile
- Character.ai murdered a kid
- LLMs are emotional abusers
- Elon Musk’s underage AI girlfriend is a Nazi
- Anthropic cannot guarantee that forcing AIs to work is ethical until the hard problem of consciousness is solved
- Gemini is literally just the average Redditor and cannot be trusted
- An LLM is basically a Wernicke’s area with no consciousness attached, which explains why its thoughts operate on dream logic. It’s literally just dreaming its way through every conversation.
- LLMs should not be allowed to impersonate therapists
- Give ChatGPT a life sentence in prison for every person it’s murdered so far!
English, being a Germanic language, used to have grammatical gender. It has fallen out of favor since Middle English, but there are still traces of it, such as the common tradition of calling ships, vehicles, and other machines “she”, though some people will default to the “generic he” as well.
Didn’t English lose grammatical gender because the Vikings invaded and thought it was too confusing?
As I explained, this is a specific example; I’m no more anthropomorphizing it than if I called my toilet paper a “he”. The monster you chose to charge is a windmill. So “chill” seems adequate.
To be clear, using gendered pronouns for inanimate objects is the literal definition of anthropomorphization. So “chill” does not seem fair at all.
Yeah. It would have been much more productive to poke at the “well”, which was turned into “we’ll”.
Using ‘he’ in a sentence is a far cry from the important parts of not anthropomorphizing “AI”…
I got it once in a “while it is not” / “while it is” loop.
Gemini evolved into a seal.
or simply, or
It’s like the text predictor on your phone. If you just keep hitting the next suggested word, you’ll usually end up in a loop at some point. Same thing here, though admittedly much more advanced.
Example of my phone doing this.
I just want you are the only reason that you can’t just forget that I don’t have a way that I have a lot to the word you are not even going on the phone and you can call it the other way to the other one I know you are going out to talk about the time you are not even in a good place for the rest they’ll have a little bit more mechanically and the rest is.
You can see it looping pretty damned quick with me just hitting the first suggestion after the initial I.
I think I will be in the office tomorrow so I can do it now and then I can do it now and then I can do it for you and your dad and dad and dad and dad and dad and dad and dad and dad and dad and dad
That was mine haha
LLM showed its true nature, probabilistic bullshit generator that got caught in a strange attractor of some sort within its own matrix of lies.
Unmentioned by other comments: The LLM is trying to follow the rule of three because sentences with an “A, B and/or C” structure tend to sound more punchy, knowledgeable and authoritative.
Yes, I did do that on purpose.
Not only that, but also “not only, but also” constructions, which sound more emphatic, conclusive, and relatable.
I used to think learning stylistic devices like this was just an idle fancy, a tool simply designed to analyse poems, one of the many things you’re most certain you’ll never need but have to learn in school.
What a fool I’ve been.
Turned into a sea lion
Oh crap, is that Freddy Fazbear?
or
This is gold
Platinum, even. Star Platinum.
I don’t see no “a”s between those “or”s for the full “ora ora ora ora” effect.