I have a lot of issues with AI in general, but frankly the biggest, most immediate one is that I reeeally hate when tech pretends to be human. Like search engines giving me a seventh grader’s essay before the actual one-word answer I was looking for. Or the uncanny valley voice at the drive-thru speaker saying “great choice!” to everything I order. Or the AI on shopping websites saying “I’d recommend this model…” Etc etc.
There’s just something so strange and uncomfortable to me about a thing that we all know is not a person pretending to be one; feels like someone telling a lie directly to my face, and I know they’re lying, and they know they’re lying, but I’m supposed to… appreciate it? For some reason?
But a lot of people I know actually prefer it. They’ll ask ChatGPT something—even something that has a simple, definitive answer that doesn’t really need further explanation—rather than just looking it up on a search engine. I’m just curious what the difference in psychology is between us. And I’m wondering if maybe it’s actually just a me problem; I mean, I hated Jeeves too, and he seemed pretty well-liked back in the day.
I think it’s part of human nature to long for some connection, to socialize, to discover new things. We humans have been fantasizing about sentient machines for ages. We have been exploring the possibilities of interacting with other sentient species for millennia. The machine trope is just one more in that category.
That’s where the allure is.
Of course the trash you are being sold isn’t truly sentient, but people see stripes and call it a tiger.
As for what makes it so palatable, for most people, if they see something they recognize and relate to, they find it easier to use. LLMs are in particular very easy to use for this reason.
And it’s ok if you don’t like the idea at all. My previous statement was just a generalization, not everyone is equally enamored with the idea of interacting with a different, sentient, and intelligent species.

I too hate this trend. A robotic voice answering the phone didn’t bother me, but now some companies have replaced those with AI systems and I find myself becoming frustrated and angry much more quickly with the latter.
It drives me up the wall when people tell me they’re using AI to write their emails now. If you thought USians were stupid already, give it a few more years of AI usage! You can just see people’s eyes glassing over when you try to discuss anything of import. We’ve seen AI-induced psychosis, but I predict we’ll see a rise in dementia rates if we don’t put an end to this.
Edited to add: after some reflection, I think what makes me so angry is the apparent assumption by whatever megacorp implements these that I, their customer, am stupid enough to be fooled or influenced positively by this. It’s condescending.
If you thought USians were stupid already, give it a few more years of AI usage! You can just see people’s eyes glassing over when you try to discuss anything of import.
Yuuuup. I’m an American, and I’m particularly scared for Gen Alpha. The number of times I’ve seen my nieces and nephews stop mid-sentence, pull out their phone, and have ChatGPT complete their thought is… Idk man. I’m a millennial, and a significant part of this is my generation’s fault, cuz we’re the “hand them an iPad so they’ll leave you alone” parents (though not me personally, because I have zero interest in bearing any crotchfruit). But damn, it’s scary. And sad.
Angela Collier is a physicist (?) with a popular science YouTube channel. She was talking about human-shaped robots recently:
There’s a lot of crossover between human-like robots and chat bots and slavery.
What a great observation! You’re so smart and talented and attractive. That’s an excellent question phrased beautifully and- what are you doing with that hammer?
People like what’s familiar. AI is probably made to be human-like so it builds rapport with users and they keep coming back.
I just found out Gemini has a thing called personal context. It claims it’s been around since 2025, but it mentions a memory change in March of this year; I think the change may be more significant than it lets on. It also seems to call it saved info. Anyway, it lets you set global instructions it should apply to your queries. Here is mine: “Please keep things discreet and don’t worry about flattery. Use context to see if I’m discussing something I ask about a lot, like a video game or various media, but don’t connect everything; just use context to flesh out my query if I don’t give you enough context.”
ChatGPT has this as well. I find the issue is that AI is constantly ‘forgetting’ or ignoring commands, so it’s not consistent.
Yeah, we shall see how long it lasts, but I can look in a configuration area that lists my globals.
The phenomenon of seeing faces in trees or clouds is known as pareidolia.
Humans see other people in inanimate objects all the time.
Yeah but see that freaks me tf out too. A few nights ago, the moon was shining through the leaves of the oak tree in my backyard in such a way that it vaguely looked like a little kid’s face, and I literally said out loud “absolutely not” and went back inside.
Attributing human traits to non-humans is called anthropomorphizing.
The discomfort OP feels is called the uncanny valley.
And to answer OP, ChatGPT is able to pull from multiple sources very quickly and add related info that you may not have thought to search for. But it’s also not always accurate, because AI isn’t great at recognizing sarcasm or satire.
And also just fully makes things up, sometimes.
It’s not actually communicating, it’s just writing convincing text. Because that’s all we can train it to do.
It’s not exactly new. There’s a reason customer service is a skill, even in a positive context.
Many people crave affirmation. LLMs compliment their intellect, boosting their ego and affirming that their question falls under critical thinking or research rather than ignorance or stupidity.
They’re the type that turn everything into a conversation (more common for elderly people). They crave human connection.
Along the same lines, some people just dislike technology. LLMs are less robotic.
They may trust information coming from a human more than what pops up online. (A large factor in the spread of disinformation)
The additional context/hand holding helps them digest information
Personally: I learn to live with it because search engines are trash. It’s faster to fact check what an LLM tells me, and it usually involves less unnecessary reading.
Basically, people don’t like researching things. LLMs make it feel more question focused rather than information focused.
We like things that are similar to ourselves. Humanity has always sought company in the darkest of nights. Anthropomorphizing things makes them less scary.
They’ll ask ChatGPT something—even something that has a simple, definitive answer that doesn’t really need further explanation—rather than just looking it up on a search engine.
To me, that’s a no-brainer. ChatGPT will give me the answer I’m looking for much quicker and more efficiently than clicking half a dozen links and wading through a crapload of adverts and SEO-weighted nonsense.
This so much! I can search on Google or Bing and get two actual results; the rest are ads. I ask Gemini or others, and sure, I may have to ask again, but I get the results much quicker than wading through two pages of ads.
My assumption for why people prefer it is that it’s more intuitive. You just ask the computer. It’s like asking someone on the street. You can use sentences and descriptive language, and you’ll get an answer that basically walks you through the steps as if a person were explaining it to you.
You’re describing the uncanny valley, where if something looks and behaves human, but isn’t, you get the creeps. It’s fun to freak yourself out by thinking, “Why are we so fundamentally afraid of something that looks like us but isn’t? What could there have been?”
I’ve heard the explanation that it’s a primal defense against treating dead bodies as alive and spending resources on them. Just imagine when we first developed empathy and tried at length to care for those we’d lost, and evolution needed us to focus on the living.
For me it’s because the internet is getting ruined by people extending simple information into excessively long articles or YouTube videos.
If I need a simple yes or no, or a numerical answer, ai is usually quicker.
Hmm. Do you dislike video games as well (except for, like, Tetris, with no NPCs)?
Most people latch on to the familiarity of other human-seeming things. It doesn’t even have to be very good; computers have been fooling people some of the time since the ’60s, just by steering the conversation to leave room for doubt. The human imagination and our fixation on ourselves do the rest.
Or the uncanny valley voice at the drive-thru speaker saying “great choice!” to everything I order.
Interesting, so that’s started happening somewhere.
I do like video games, including ones with NPCs, but the difference there is that an NPC isn’t pretending to be a person; it’s pretending to be a character in a fiction that was definitively written by a person. And even so, I very much don’t like hyper-realism in games; I much prefer stylized and/or cartoony.
And yeah, the fake person at a drive-thru thing started up where I am in California sometime last year (or at least that’s when I first noticed). The irritatingly realistic voice is bad enough, but it’s really the obsequious responses that bug me. A lot of “great choice! The orange chicken is really tasty,” like, bitch, you literally don’t have a mouth, please stop.
Or the AI on shopping websites saying “I’d recommend this model…”
We don’t have a pronoun for “this non-human unit”. LLMs are marketed as conversational, so they need to conform to the limitations of English.
One could argue that “we” or “one” would be more appropriate, but that would sound stilted in many contexts.
I’d prefer linguistic markers to distinguish between people and machines, but we haven’t gotten there yet.
I kind of enjoy them, like having a brick wall to bounce a tennis ball off of. It’s often not even about its ostensible “intelligence”/pattern-matching or whatever; sometimes it’s helpful to have a “conversational” “partner” that helps you explore ideas or thoughts without fear of judgment and that applies that improv “Yes, and…” style of engaging, even if it’s sycophantic. That’s gonna have to be on the person: to learn to push back, and to develop appropriate criticality to self-immunize against the tool’s bias toward validating you.
I’m gonna go ahead and make a controversial statement: some of us who have had it instinctually and existentially drilled into us that we’re always wrong, and that there’s little value or accuracy to our input, can actually benefit from a little bit of fluffing or building up once in a while. That is, as long as there’s a reasonably vigilant, ongoing practice of challenging it constantly (where we feel confident, or want to test different theories or orientations) and trying to find our way to a more reasonable baseline of informed or earned confidence in our own cognitive abilities, or at least our interlocutory facility.
