I think what we’re seeing is similar to lactose intolerance. Most people can handle it just fine, but some people simply can’t digest it and get sick. The problem is there’s no way to determine who can handle AI and who can’t.
When I’m reading about people developing AI delusions, their experiences sound completely alien to me. I played with LLMs the same as anyone, and I never treated them as anything other than a tool that generates responses to my prompts. I never thought “wow, this thing feels so real”. Some people clearly have a predisposition to jumping over the “it’s a tool” reaction straight to “it’s a conscious thing I can connect with”. I think the next step should be developing a test that can predict how someone will react to it.
I bet it’s probably correlated with low education, as most things are
So you’re saying there’s a chance I can have cheese if I go to college?
Sign me up! Where’s the cheddar?
Unfortunately it’s now in the Dean’s pockets 😭
Cults and toxic self-help literature existed long before LLMs copied them. I don’t know if LLMs are getting people who couldn’t have been gotten by human scammers.
Scams have many different vectors, and people can be vulnerable to them depending on their mood or position in life. Testing people for LLM intolerance would be more like testing them for susceptibility to viruses.
People can be immunocompromised for various reasons, temporarily or permanently, so, as a society, public hygiene standards (and the material conditions to produce them) are a lot more valuable. Wash your hands after interacting, keep public spaces clean, that sort of stuff.
Yes, it can definitely be a temporary thing, which would make it even harder to protect people from. It’s also most likely a spectrum. If your “resistance” is at 10, you may not be at risk even at your lowest point. Other people can be at 5 when they’re doing great but risk psychosis when they’re down for some reason. I just think it’s kind of scary that people interact with it voluntarily (unlike with scammers or cults) without knowing how it will affect them. We all tried LLMs, but most of us have been lucky so far.
I have yet to see any evidence that AI is inducing problems. People with problems use it just like anyone else and others consider that use problematic.
Based