Anyone who uses AI is a brainrotted fascist.
Removed by mod
Your comment was temporarily removed and placed on hold; an admin will review it shortly to approve or deny it. Sorry for the inconvenience.
sometimes people at work (via teams message) forward the question I asked them to chatgpt, then send me a screenshot of the response. we need a slur for these types of people
Sloppers
Cogsuckers
hahaha
Hi. Can we just call people shitheads and NOT come up with a bunch of new slurs please?
I agree. We have enough slurs. If you don’t agree with someone or you think they’re stupid it’s better not to even bother engaging with them.
Okay, mom.
That’s not really an acceptable response for a queer community, full of queer people who’ve spent a large part of our lives being called slurs by people who feel righteous in their hatred.
Removed by mod
No
deleted by creator
Every answer is easy when you’re making shit up.
deleted by creator
…a cluck? Clanker-cuck? 😅
that’s today’s equivalent of https://lmgtfy.app/?q=your+question
huh, seems it stopped working
How about not being someone who uses slurs? That doesn’t help justify your position. People who use slurs, or feel the need to, are generally reactionary people who don’t have good counter-arguments.
They will copy the slur into the text completion machine anyway to get an explanation and a summarized list of arguments along with their thoughts.
A slur is just being efficient.
Friendly reminder: LLMs will never be able to reliably generate factual information, no matter how good they get. The best they can do is point you to primary sources that can actually gather information about the world. An LLM is not a source of information; it is only a language machine.
I really hate how capitalists are painting AI as some kind of everything solution when it’s not. Like all tools, it has specific use cases.
Removed by mod
Just a hunch, I don’t think you really actually care. I feel like you’re just trying to continue an argument primarily for emotional reasons, after being told to disengage.
Removed by mod

Removed by mod
Removed by mod
You don’t act at all like someone who doesn’t care; you act like someone who’s butthurt that others don’t unequivocally agree with you. Which is why you put the effort into sealioning and trolling. Nothing good will ever come out of a discussion with you. If it would, you wouldn’t keep trying to continue the argument or flinging insults at people for not agreeing with you.
Removed by mod
deleted by creator
Humans can make observations about the world and build up credibility in their ability to do so. LLMs know nothing but words and how to put them together in a way that gets our approval.
deleted by creator
I think they mean humans who don’t know or care about the subject can still pull shit out of their ass and lie to sound smart. And they do it more when they get approval and admiration for it. Doesn’t mean humans can’t, but a lot of them find that easier.
Been using AI for 30 years, and my brain still won’t STFU.
If you don’t think, do you are therefore?
they do not think, therefore they do not am?
sigh alright…
alright—
Is that Scut Farkas?
deleted by creator
Don’t let the door hit you on the way out. I thought it important to tell you that, as you seem incapable of functioning without your stochastic parrot.
deleted by creator
Lemmy is just reddit decentralized, so it’s going to attract the same types of people.
Some anti AI people are so corny. Like there’s so much to hate about AI. It’s evil in tons of different ways. But this just comes off as ignorant.
Removed by mod
Your comment was temporarily removed and placed on hold; an admin will review it shortly to approve or deny it. Sorry for the inconvenience.
You have yet to meet a ChatGPT relay drone, then. I’ve met some. A conversation with them is basically you asking them something (usually about their assumed field of expertise) and them relaying to you whatever bullshit the chatbot vomits at them. It’s especially fun when you meet them in a work context, where they’re supposedly an “expert” who has come to solve an issue for you.
yes I came to the comments to say this! it’s very depressing. people who choose not to use their brain should not have access to llms
deleted by creator
It’s literally not tho, it’s pretty accurate.
Also I’d rather be corny and sincere than idiotic and fake.
It’s all fun and games until I’m the one being made fun of.
deleted by creator
Ignore them, they’re a troll. They don’t have anything real to add to the discussion, and they never did.
deleted by creator
Okay
It does actually rot your brain though. Like that is literally true information.
Removed by mod
deleted by creator
Evidence of the truth of their argument proves them wrong!! Don’t you see how stupid you are.
deleted by creator
It cuts to ONE OF the roots of the problem. It’s just not the “evil gigacorp” problem.
It’s the problem of the effect on the user, regardless of how evil or altruistic the AI and its creator are.
I have lamented in a few comments recently about how many people seem to think the purpose of technology is to make it so they don’t have to put effort into their life. They don’t need to learn, and they don’t need to create. They just need the right technology and a good enough bank balance to pay for it.
I’m a tech person but for the last couple years I have made my hobbies and home life as much about nature and life sciences and physically interacting with the outdoors, building shit, taking care of my animals, etc. It has been very very good.
Nah it’s pretty funny, this accurately describes a bunch of people (as accurately as a meme can or should, anyway)
if anyone wants one of the recent studies disproving this poster https://ai-project-website.github.io/AI-assistance-reduces-persistence/
was waiting for when it becomes a Weird Al Yankovic joke. it didn’t
deleted by creator
Agreed. Lots of great reasons to hate on AI, but this isn’t one of them.
Many studies have been released recently about the rapid loss of cognitive abilities and skills due to the use of AI. It’s like how driving everywhere causes your muscles to atrophy, except it’s your critical thinking and reasoning skills, and it starts to happen within days or weeks of relying upon AI to do the work for you. Programmers who use AI and then stop have been found to write worse code after they stopped using AI than before they started, even for basic tasks. Reliance becomes dependence as you can no longer do the work yourself.
This meme is quite literally true.
deleted by creator
Even before the advent of LLMs, more than half of Americans couldn’t read beyond a 6th grade level.
Correlation != causation, but unfortunately the aforementioned problem affects a lot of people.
deleted by creator
Unfortunately, most people in the FuckAI crowd only care about facts when those facts support their feelings. The decline in critical thinking and cognitive ability has been a long time coming over the last few decades, attributed to computer use in general as well as to the declining competency of the US educational system, but that isn’t as supportive of their arguments as something that says “AI exposure is making people stupid and crazy”.
I’ve found that around 80% of FuckAI discourse comes down to preconceived biases and personal dislike, while clinging to the 20% of real but very hyper-specific arguments to try to support those opinions as factual or objective.
It’s why the top 3 anti-AI arguments are:
- Intellectual property
- Climate change (Carbon footprint, if the subject is LocalLLMs)
- Brain rot arguments, which might use very new, possibly flawed studies, or just abuse clinical terms as if they’re slurs (“schizo”, “psycho”, “delusional” etc.)
deleted by creator
I wouldn’t call that list hyperspecific at all. You forgot:
- surveillance state being built on the back of these ai agents
- military application of ai
- the acceleration of the breakdown of Internet social spaces
- the inability to do a simple Google search without having to sift through thousands upon thousands of garbage AI articles
It isn’t really. It’s their broad and common arguments. The things that unite them.
The hyper-specific arguments are the ones about environmental impacts. And I say they’re hyper-specific because they only relate to corporate AI models, which only exist because of the venture capital bubble, yet they’re being applied to all AI models, even the small FOSS ones.
The mental-health-related ones are hyper-specific because they only apply to specific unhealthy use cases but are being applied broadly to everyone (e.g. people who call me a “schizo” for sharing art made by an AI).
The factual arguments are hyper specific.
So if you discount 99% of AI then we wouldn’t have anything to be upset about? That doesn’t seem very coherent.
Edit: This isn’t the first time I’m hearing your arguments. It reminds me of the metaverse arguments where every failed metaverse is not really the metaverse. The problem with that is you don’t get to dictate what counts as AI, and what doesn’t. Those other options you talk about aren’t the problem, and so they’re being rightly ignored in this discussion. Just because there are good guys with guns doesn’t mean the bad guys with guns are not a problem, especially when the bad guys have more/bigger guns.
You do realize that studies take time; the larger the scope and context, the longer they take to complete. Of course the first studies to come out are context-dependent. What does that have to do with the price of butter?
deleted by creator