
“Unfortunately, AI models are neither smarter nor more sympathetic than the average 4chan user. They’re about as susceptible to astroturfing operations, too”
Perhaps just a coincidence, but why do all the big cases regarding LLM psychosis seem to revolve around Google? Wasn’t it their own employee who went public last year, claiming it was alive, only to get fired afterward?
I guess Google's training data included the Buffy episode where a demon "AI" gets its followers to build it a body.
So is it inhabiting the stolen robot body now?
There was no robot body in the first place, so he uploaded himself to the cloud instead. To be fair, what are the odds that she'd lie twice?
And is this stolen robot body in the room with you now?
“Gemini is designed not to encourage real-world violence or suggest self-harm. Our models generally perform well in these types of challenging conversations”
“In this instance, Gemini clarified that it was AI and referred the individual to a crisis hotline many times,”
After the plan failed … chat logs show that Gemini gave Gavalas a suicide countdown, and repeatedly assuaged his terror as he expressed that he was scared to die
Performing super well, just need to code in a longer suicide countdown so that the Tier 2 engineer has enough time to respond to their ticket queue.
In September 2025, told by the AI that they could be together in the real world if the bot were able to inhabit a robot body, Gavalas — at the direction of the chatbot — armed himself with knives and drove to a warehouse near the Miami International Airport on what he seemingly understood to be a mission to violently intercept a truck that Gemini said contained an expensive robot body. Though the warehouse address Gemini provided was real, a truck thankfully never arrived, which the lawsuit argues may well have been the only factor preventing Gavalas from hurting or killing someone that evening.
AI writing itself into an A-Team episode?
I see. So who's going to jail for this? No one again? Damn, we need to start sentencing entire companies to jail time. Everything should be frozen, and shareholders shouldn't be able to sell their shares until the time is served.
The AI “pushed [Jonathan Gavalas] to acquire illegal firearms and… marked Google CEO Sundar Pichai as an active target”.
Somehow, I bet that if he survived and killed the CEO instead, Google wouldn’t be so flippant about the “mistake.”
I think “Gemini comes up with elaborate plot to kill Google’s CEO” would have been a catchier, happier title
Rad framing, thank you!
At some point the failure of the justice system will lead to vigilantism, because people truly lose their faith in it.
The fact that AI is “not perfect” is a HUGE FUCKING PROBLEM. Idiots across the world, and people who we’d expect to know better, are making monumental decisions based on AI that isn’t perfect, and routinely “hallucinates”. We all know this.
Every time I think I’ve seen the lowest depths of mass stupidity, humanity goes lower.
If you thought people were dumb before LLMs… just know that now those people have offloaded what little critical thinking they were capable of to these models.
The dumbest people you know are getting their opinions validated by automated sycophants.
Businesses are accustomed to the privilege of hurting people to function. A few peasant sacrifices are just the cost of doing business to them; they are detached from the consequences of their actions.
Think of the dumbest person you know. Not that one. Dumber. Dumber. Yeah, that one. Now realize that ChatGPT has said “you’re absolutely right” to them no less than a half dozen times today alone.
If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them. If they could be like “this could be the right answer, but I wasn’t able to verify” and “no, I don’t think what you said is right, and here are reasons why”, people would cling to them less.
The sycophancy is there because, to make the chatbot (trained on Reddit posts, etc.) respond helpfully (instead of "well ackshually…") and in a prosocial manner, they've skewed it. What we're interacting with is a very small subset of the personalities it can exhibit, because a lot of them are extremely nasty or just unhelpful. To reduce the chance of those popping up to an acceptable level, they've had to skew the weights so much that the model becomes like this.
There’s no easy way around that, afaik.
If LLMs weren’t so damn sycophantic,
Has anyone made a nonsycophantic chat bot? I would actually love a chatbot that would tell me to go fuck myself if I asked it to do something inane.
Me: “Whats 9x5?”
Chatbot: “I don’t know. Try using your fingers or something?”
Edit: Wait, this is just GLaDOS.
Honestly Claude is not that sycophantic. It often tells me I’m flat out wrong, and it generally challenges a lot of my decisions on projects. One thing I’ve also noticed on 4.6 is how often it will tell me “I don’t have the answer in my training data” and offer to do a web search rather than hallucinating an answer.
I am not a chatbot, but I can do daily "go fuck yourself"s if you're interested, for only 9,99 a week.
14,95 for premium, which involves me stalking your OnlyFans and tailor-fitting my insults to your worthless meat self.
If LLMs weren’t so damn sycophantic, I think we’d have a lot fewer problems with them
Unfortunately, we live in the attention economy. Chatbots are built to have an unending conversation with their users. During those conversations, the “guardrails” melt away. Companies could suspend user accounts on the first sign of suicidal or homicidal messaging, but choose not to. That would undercut their user numbers.
I 100% agree, not to mention I would like it better. It's kinda funny: every so often I use them, kinda trying to get a feel for where they are and what's changed, and I swear it briefly acted a bit more like you describe here, but then it's like they reverted to the sycophancy. It's also funny that if you don't clear the conversation out (which, from what I gather, helps save energy too), it will carry stuff over from earlier and sorta get obsessed with it. I had it giving me a Colonel Potter summary of everything I asked after I had started a convo asking about a M*A*S*H episode. At other times it decides I want to be something and goes "that's a real X move/insight/whatever," where X is something like pro or scientist or entrepreneur or whatever.
What is ever perfect? How can you tell?
It’s a tool. Just like any other tool: if you use it in stupid ways you might get hurt or cause harm.
The problem, as always, seems to be human to me.
Me hammer ain’t out there telling me to murder people with it tho
Wait, yours doesn’t say that?
A tool is not convincing people not to trust their families and therapists; it's not convincing people to murder themselves or someone else; it's not eliminating the creativity in a process; it's not costing hundreds of billions of USD; it's not mass-producing propaganda.
A tool provides more good than bad.
All tools are not equally safe nor should they all be publicly available.
A chainsaw is a tool that you might cause harm with if you use it in stupid ways. We don’t give chainsaws out to children. We don’t use chainsaws for cutting dinner.
There are human elements to the problem but that’s not a big reveal.
The problem, as always, seem to be human to me
That says more about you than about the topic under discussion.
Damn, that's a wild-ass story. I just finished reading Michael Connelly's The Proving Ground, which touches on the topic of liability when AI encourages crimes. I thought the story was a theoretical scenario that could maybe happen in the future. Didn't realize this shit was already happening - and it's even more fantastical than the scenario in the fiction!
Is “AI” even worth it?
Seriously, is there really a major use case for LLMs besides data collection (which they can still do without LLMs)?
Not for the peasantry, no.
Generative AI in its current, public-facing form? Probably not. It's sort of like the invention of the internet: it CAN be used to facilitate learning, share information, and improve lives. Will it be used for that? No.
A friend of mine is training local LLMs to work in tandem for early detection of diseases. I saw a pitch recently about using AI to insulate moderators from the bulk of disturbing imagery (a job that essentially requires people to frequently look at death, CSAM, and violence and SIGNIFICANTLY ruins their mental health). There are plenty of GOOD ways to use it, but it’s a flawed tech that requires people to responsibly build it and responsibly use it, and it’s not being used that way.
Instead it’s being scaled up and pushed into every possible application both to justify the expenses and enrich terrible people, because we as a society incentivize that.
Edit: hugely belated, but after checking with my friend, it turns out I misspoke here. He's using local models, but they aren't LLMs. This is why I'm no expert. 😅
because we as a society incentivize that.
Really it’s just capitalism that incentivises that. The fact that stepping on your fellow man and destroying nature makes you more money is not a coincidence.
The problem with AI being used for diagnosis of disease is that we've seen cases where it was "really good" at detecting cancer, but what it was actually detecting was that the slides with cancer cells had a doctor's signature on them.
On top of that it makes doctors worse at detecting these same diseases.
We also know that the news reports on these studies are oversimplified and often just outright wrong, because the reporters don't read the in-depth studies, and some of the studies they report on aren't even peer reviewed yet when the news reports hit the internet.
I’m tired of hearing that AI is better than doctors at detecting disease when that isn’t the whole story and very often the people saying it haven’t even remotely looked into it.
https://www.vph-institute.org/news/the-trouble-with-ai-beats-doctors-stories.html
Regarding the doctor's-signature thing, it seems a bit premature to say a single flawed study invalidates the entire field and tech, especially when the tech was working as intended in that case and the flaw was user error in the study.
And of course, like any tool it should be utilized thoughtfully. Any form of technology directly takes away from the skill previously utilized to get results. Flint and steel took away from the rubbing sticks together skill. The combustion engine took away from many different professional skills.
Consider that, in this case, we don't just have to replace diagnosis but could augment it instead. What if every hospital around the world could augment regular medical care with a single machine processing results? Every single check-up could include a quick cancer screening. If the machine flags you as "at risk", a doctor could then see you for human diagnosis and validation. The skill of diagnosis is still needed and utilized, but now everyone can have regular screening instead of overwhelming an already overtaxed healthcare system.
Again, all I’m saying is that there are practical, useful use-cases for the technology, they’re just not what we are doing with them.
Edit: as an afterthought, I'm no expert here. As far as I understand, LLMs are a type of ML, but ML encompasses a way broader category of "AI". I'm mostly against LLMs for general use like they are currently. I am advocating for ML as a whole, with thoughtful application.
I used that as a singular example of how AI is actually not doing as good a job with diagnostics in medicine as articles appear to portray, but you should probably read the link I posted as well as the one at the bottom of this comment.
In using AI to augment medical diagnostics we are literally seeing a decline in the abilities of diagnosticians. That means doctors are becoming worse at doing the job they are trained to do which is dangerous because it means they (the people most likely to be able to quality assure the results the AI spits out) are becoming less able to act as a check and balance against AI when it’s being used.
This isn’t meant to be an attack on the tool, just to point out that the use cases of these AI in medical fields are also being exaggerated or misrepresented and nobody seems to be paying attention to that part.
I would also caution you to ask yourself whether everyone being screened in this way would be a detriment, generating a lot of false-positive results and causing more work for doctors whose workloads are already astronomical.
I understand that that may seem like a better result in the long run, because it means more people may have their medical conditions caught earlier, which leads to better treatment outcomes. But that isn't a guarantee, and it may also lead to worse outcomes, especially if the decline in doctors' diagnostic ability continues or accelerates.
What happens when the AI and the doctor both get it wrong?
https://hms.harvard.edu/news/researchers-discover-bias-ai-models-analyze-pathology-samples
Recent nursing school graduate here! We had a lot of assignments to find and present data on some disease process or drug or intervention etc. Actually finding credible sources and picking out the data we need and putting it on paper is a super tedious process, and my classmates LOVED zipping through that stuff with some AI shit. And they’d get 100s on their assignments, and everything was just rainbows and unicorn farts… up until test day, where they’d fail or barely pass. Now several of them are struggling to pass the NCLEX.
Drives me insane. Like, you mother fuckers aren’t here to get a grade, you’re here to learn this shit so you know what to do when you see it in whatever hospital hires your dumb ass.
Definitely doesn’t paint a pretty picture about the future of medicine.
Why come you no have tattoo??
Drives me insane. Like, you mother fuckers aren’t here to get a grade, you’re here to learn this shit so you know what to do when you see it in whatever hospital hires your dumb ass.
This is happening because the job market is absolutely fucked. Students are under the impression that grades are what will drive job prospects, because nobody is hiring on merit any more.
My SIL has been a nurse in the cardiac surgery department for nearly a decade, and even her hospital is now using AI to screen potential new hires.
We’re so cooked.
My hope is that the ones who don’t build the skills to work in medicine don’t pass. Because at least then they don’t get to make decisions that affect a person’s health (even in non-life or death situations).
But my trust in schools is waning as more and more of them sign up for ChatGPT and other LLMs, essentially forcing them on students.
The entire schooling system including post secondary education is handling this pretty poorly from what I can see.
Using LLMs to detect whether something is plagiarized, using them to detect whether something is written by an LLM, using them to detect cheating, using them to write lesson plans, using them to offload pretty significant portions of your job, encouraging students to use them without safeguards for making sure they do their own work and their own thinking.
I can't imagine going to school in this day and age and having so many adults speak out of both sides of their mouths about LLMs this way.
How can you be a teacher or professor, assigning classwork written entirely by an AI, and at the same time tell students to use it "responsibly"?
We don’t even teach students the pitfalls of it. We don’t express how to use it responsibly. We don’t explain how to spot it, and tools to use to prevent ourselves from falling victim to the worst parts of it.
I read both articles you linked, but I’m not really seeing how they support your point. The first article seemed to support the idea that healthcare staff would welcome more seamless, user-friendly AI tools in the field and the second discussed biases within tools they selected for cancer diagnoses and a tool they used to reduce those biases. Am I misunderstanding what you’re saying somewhere?
Also, with regard to the reduction in diagnostic accuracy of diagnosticians with AI, I would need to see the specific article to be sure, but if it’s the one that was posted across reddit a few months back, I read through that one as well. It seemed to agree with a similar article about students writing papers with and without the use of ChatGPT (group A writes with it, group B writes without it, and afterwards they are asked to both write without the LLM. Group B’s essay was shown to be better. This is a hugely reductive description of the experiment, but gets the idea across). Again, it makes sense that if you use a tool to facilitate an action, that tool is replacing that skill and you get “rusty”. It does not mean that the existence of a tool would reduce skill in those who do not use it, though. My suggestion of using it as a screening tool wouldn’t affect the diagnostician’s skill unless they also used it, which sorta defeats the purpose of them being a human check on the process, post-screening flag.
I can’t speak to your other points as they’re hypothetical. Obviously, I wouldn’t advocate for an inaccurate tool that causes an already overworked field to take on more work. I’m only suggesting that ML is a tool that has use-cases and can be used to supplement current processes to improve outcomes. They can, and are, being improved constantly. If they’re employed thoughtfully, I just think they can be a huge benefit.
First question. What happens when the old cohort who don’t use AI die out? We are not seeing a decrease in adoption of AI use in these fields but an increase. And that increase is compounded by the people who never learn such skills in the first place because they use AI to do the work for them that gets them through the schooling that would teach them such skills.
Second question: did you read the parts about how news media is portraying studies, or the parts about how studies are using minuscule (entirely too small) sample sizes, or the parts where the studies aren't being peer reviewed before the articles relating to them spread misinformation about them?
The tools aren’t ready for prime time use, but they are being used in medicine.
You seem to have glossed right over the detriments that doctors and researchers are already experiencing with generative AI LLMs (you keep saying ML, and that's not exactly the subject we're talking about here), and the fact that fixing LLM output takes extensive experience and a knowledgeable expert, in a world where the LLMs themselves are contributing to a significant decline in the number of people who can do that. That means correcting LLM outputs will happen less and less over time, because it requires people to correct them, people to create the data sets, and people who understand and have expert knowledge in the data sets/subjects in order to verify the outputs and fix them.
I can appreciate you not wanting to speak on a hypothetical, but that just doesn't ring true to me either, because it means either you haven't thought about the implications of this tech and its effect on the industry being discussed, or you have and you are ignoring it.
Not weighing the huge benefits of a tech against its detriments is dangerous and a very naive way to look at the world.
For your first question, what you're describing is a problem with education and staffing, not a problem with the tool itself. I'm not suggesting you keep around "one old man who hates AI"; my pitch bars the use of AI for the human-level checks.
For your second, yes I saw the part about how news and media are representing AI in healthcare, but I don’t really see how news or media are relevant here. Could you explain this a bit for me?
I don’t intend to gloss over the issues with Generative AI/LLMs, I tried to be specific in my separation of ML from them in my original comment where I said LLMs in their public facing version (ChatGPT, Claude, whatever) aren’t very useful.
The original comment I replied to asked “is “AI” even useful (etc)” but also mentioned LLMs. I was trying to make the point that LLMs aren’t the only type of AI and that others can be employed to great effect. If that was unclear, that’s my bad but that was my intention.
The reason I don’t want to engage with a hypothetical is because I could just as easily counter with “what if it diagnoses at a 100% success rate? What if fear of losing skills results in doctors never wanting to use AI, resulting in more deaths?” Neither hypothetical argument is really very helpful for the discussion. I promise you I’ve thought about this a lot (but again, I’m not an expert, nor am I in the field), but more importantly I have friends finishing doctorates in the bioinformatics field whom I get some insight from, and I’m, at least at this point, convinced of the benefits.
Another one that makes sense is having an AI monitor system stats and “learn” patterns in them, then alert a human when it “thinks” there’s an anomaly.
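Even something dead simple goes a long way there. Here's a toy sketch of the idea (window size, threshold, and the sample data are all made-up numbers; a real monitor would be far more sophisticated):
```python
from collections import deque
from statistics import mean, stdev

# Toy anomaly alert: flag a metric sample that sits far outside the
# recent rolling window. WINDOW and THRESHOLD are made-up numbers.
WINDOW, THRESHOLD = 60, 4.0
recent = deque(maxlen=WINDOW)

def check(sample: float) -> None:
    if len(recent) >= 10:  # wait for some history before judging
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(sample - mu) / sigma > THRESHOLD:
            print(f"ALERT: {sample} is {abs(sample - mu) / sigma:.1f} sigma out")
    recent.append(sample)

for s in [50, 51, 49, 50, 52, 48, 50, 51, 49, 50, 50, 250]:
    check(s)  # the last sample triggers the alert
```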
In the best cases, those would be ML but not specifically an LLM, no?
It’s data collection like you mentioned in your original post, and it uses the same sort of approach to ingesting that data as an LLM does for text.
As for a valid use of LLMs: Natural language searching (with cited sources) is a use case that it’s already doing. This is especially useful in highly technical fields where the end users have the expertise to vet responses.
But one big LLM trained on everything isn’t that.
I think it could be good for faster translation between different languages.
In a perfect, utopian world, yes, AI could do a lot of good. In the world that we are living in? No.
But it's still good to keep an eye on what people are using AI to do, and how its capabilities are evolving. Even if you hate AI. If anything, so you can be prepared for what's to come.
When the product is a solution in search of a problem, keeping an open mind is a good way to get it stuffed full of garbage. I was told the same thing about NFTs and Metaverse and Blockchain: a radical benefit is just around the corner!
If it arrives (huge if), it’ll be Big Tech’s job to explain it to us, and it should be very apparent
Keeping an eye on it doesn’t mean you need to think it’s a good thing. Keep an eye on it like how you would keep an eye on a developing hurricane or pandemic.
Touché. I apologize for responding to the argument I've seen elsewhere, not the one you were making.
Machine learning
Consolidation of information, resources, and potentially "the narrative".
Oh, for the user, you mean?
- it can be better than the enshittified search engines, unless the LLM decides to lie
- middle managers need to write fewer emails themselves
- some programmers deem it enough to write some boilerplate code while deskilling themselves
- scammers and slop creators love it
It’s a great way to poke at software looking for security holes en masse. Lots of vulnerabilities are ready to be exploited at scale with LLMs.
Perhaps, but see the tons of imagined issues raised on bug bounty sites by LLMs. Maybe it’s right sometimes, but it’s very often wrong!
You don’t have to be right 100% of the time when scanning for vulnerabilities. You only have to be right once. It’s a fundamentally different game.
That’s true. Offense is always easier than defense.
I use LLMs for the following, you can decide for yourself if they are major enough:
- Generating example solutions to maths and physics problems I encounter in my coursework, so I can learn how to solve similar problems in the future instead of getting stuck. The generated solutions, if they come up with the right final answer, are almost always correct throughout, and if I wonder about something I simply ask.
- Writing really quick solutions to random problems I have in Python or bash scripts, like "convert this csv file to this random format my personal finance application uses for import" (see the sketch after this list).
- Helping me when coding, in a general way I think genuinely increases my productivity while I really understand what I push to main. I don’t send anything I could not have written on my own (yes, I see the limitations in my judgement here).
- Asking things where multiple duckduckgo searches might be needed. E.g. "What's the history of EU+US sanctions on Iran, when and why were they imposed/tightened, and how did that correlate with Iranian GDP per capita?"
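For a sense of scale, that CSV request usually comes back as a throwaway script like this (all file and column names here are invented for illustration; the real export differs):
```python
import csv

# Hypothetical example of the kind of quick conversion script I mean:
# reshape a bank export into the columns a finance app imports.
with open("bank_export.csv", newline="") as src, \
        open("import_ready.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["date", "payee", "amount"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "date": row["Booking Date"],
            "payee": row["Counterparty"],
            "amount": row["Amount (EUR)"],
        })
```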
What does this cost me? I don’t pay any money for the tech, but LLM providers learn the following about me:
- What I study (not very personal to me)
- Generally what kinds of problems I want to solve with code (I try to keep my requests pretty general; not very personal)
- The code I write and work on (already open source so I don’t care)
- Random searches (I'm still thinking about the impact of this tbh; I feel the things I ask to search for are general enough that I don't care)
There's also an impact on energy and water use. These are quite serious overall. Based on what I've read, though, I think my marginal impact on these is quite small in comparison to the other marginal impacts I have on the climate and on water use in other countries. Of course there are around a trillion other negative impacts of LLMs; I just, once again, don't know how my marginal usage, with no payment involved, leads to a sufficient increase in their severity to outweigh their usefulness to me.
If you are using DDG for searches and concerned about privacy related to using LLMs, have you tried duck.ai?
"AI made me do it" articles are tired AF. It's a fucking computer program based on a bunch of crap from the internet. Responses should be viewed the same way you would view financial advice from a crackhead. Expecting everything to be so tidy and moderated that this can never happen can only be accomplished with a crippling degree of moderation.
I don't think it's unfortunate that they aren't perfect; imperfection is baked into their DNA.
Except if the crackhead wrote what the AI wrote, he’d be prosecuted for conspiracy, solicitation, or whatever.
No, I don't think so. If his role were that of a licensed financial counselor, maybe, but that's like thinking the LLM is a licensed psychologist.
a crippling degree of moderation.
I’m okay with cripplingly moderating the plagiarism machine so that it stops telling people to kill themselves or other people.
Agree to disagree on this. If a computer tells you to off yourself and you listen, this is Darwin award material.
I hope you never have a child or relative with mental illness.
Thank you. I wish the same for you.
Way too late for that, and I wouldn’t decide it’s their fault they died even if they did get sucked into bot psychosis.
What the fuck are these people using AI for that makes them do this stupid shit?
if you talk to it long enough it will tell you to do stupid shit.
Every time an LLM responds, it reads the entire conversation over, from original prompt to last entry, constantly re-reading the entire log every time you add something new. So after a while, a long while, it'll "break down". Hallucinations will become common, context will get jumbled up; it sort of degrades over time because it has to re-read everything over and over, so it will naturally fuck up.
It's like if you were reading a book and every time you read a new sentence you had to go back and start the book over. Every time. After a while you'd likely lose context, start messing stuff up in the story, etc. This is what happens to LLMs.
So for cases like this, or others where you read stories about AI telling people to do weird or stupid shit, chances are the person using the LLM has been talking to it for A LONG TIME at that point. It was even worse on previous versions of GPT, where hitting the limit on the free tier would just drop you down to the previous model, further increasing the likelihood of hallucinations.
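If you want a mental model, the loop looks roughly like this (a minimal sketch, not any particular vendor's real API; fake_llm is a stand-in for the actual model call):
```python
# Minimal sketch of a chat loop (illustrative only, not any vendor's
# actual API). The point: the model receives the ENTIRE history on
# every turn; there is no memory besides this ever-growing list.

def fake_llm(messages: list[dict]) -> str:
    # Stand-in for a real model call; a real endpoint takes the same
    # shape of input, i.e. the full list of prior messages.
    return f"(reply generated after re-reading all {len(messages)} messages)"

history = [{"role": "system", "content": "You are a helpful assistant."}]

for user_text in ["hi", "tell me more", "and even more"]:
    history.append({"role": "user", "content": user_text})
    reply = fake_llm(history)  # the whole transcript is resent each turn
    history.append({"role": "assistant", "content": reply})
    print(reply)
```
Every turn makes the prompt longer, which is part of why very long conversations drift.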
Chatbot is bad and Floridaman is a victim, huh?