You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.
Gish gallop of shite.
A) overblown, and that argues for cleaner power, better cooling, and more efficient models
B) regulation failure
C) incorrect, they have made discoveries that humans have been unable to. All human knowledge is built off previous knowledge.
D) the enemy is both weak and strong. If it doesn’t produce anything good, then the people losing their jobs to it can’t have been producing anything good either, right?
E) small study based on one task which people are misrepresenting. The actual evidence shows it makes people smarter as they shift priorities.
F) only for vulnerable people. Better safeguards are needed for the weak minded.
G) argument against using people’s likeness not ai
H) use an open source Chinese model
I) market distortion problem, not a principled reason no one should use the technology any more than GPU shortages made all graphics work illegitimate.
J) see (H)
K) try one argument next time. Your best one; maybe then people would be more open to wasting time on it.
Thanks for posting this. I’m really frustrated with how vulnerable people on Lemmy are to propaganda. The number of upvotes on the post you responded to is just embarrassing. The post is exactly the same kind of bullshit cherry-picking I see anti-trans people do.
Yes, post-truth slop always has this bitter aftertaste. Big-ass bullet list with talking points and links, and you know the pusher has been groomed with counter-objections, etc. Exact same methodology as the alt-right pipeline.
Some good and valid input to the discussion.
I’d be interested in E) “the actual evidence”. Got a link?
Yes as I had this discussion with someone the other week.
The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis
The Impact of Artificial Intelligence (AI) on Students’ Academic Development
Artificial intelligence in education: A systematic literature review
AI tools support problem-solving skills, collaboration, and instructional quality in meaningful ways.
This seems about right. Anecdotally, I’ve never learned as much as I have since I started using AI. It’s crazy good at explaining stuff from exactly the angle you require, according to your level and learning style.
I’ve done some hardware hacking, built my own Linux distro for a project, got way better at administering my home server.
The most fun I’ve had is to try and locate the rights to an obscure science fiction short story for a podcast I want to make. This led me to contact a few editors, library archivists, and a couple of noted literature professors. Genuine fun and connections, with the AI helping me navigate mountains of information, the legal aspects and also the cultural differences between the US and UK publishing scenes.
All of this is just from the last few months. It would have taken me years pre-AI, or, more realistically, I would have given up before getting anywhere.
That’s very interesting, thanks!
A gish gallop is a rhetorical strategy, this is a list on a website. I’m sorry you failed high school debate or whatever.
A) Nope, it’s accurate. I’ll provide some more sources for you to not read.
https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about https://www.snhu.edu/about-us/newsroom/stem/ai-environmental-impact https://news.cornell.edu/stories/2025/11/roadmap-shows-environmental-impact-ai-data-center-boom
B) Yes, it is a regulation failure. We should be regulating these data centers out of existence in order to protect people from the noise and pollution.
C) LLMs can’t “make discoveries.” I think you’re trying to conflate humans and LLMs here, but your point is so muddled that I’m not sure what you’re actually trying to say. Humans are able to iterate on information and build on pre-existing foundations; LLMs produce a reasonably coherent block of text based on statistical averages of previous information. If you’re going to try to conflate the two, you’ve got to be ready for people to laugh in your face.
D) Ah yes, a highly skilled artist or craftsman getting replaced by a slop machine because it’s cheaper, despite its visible sheen of cheapness, directly reflects on the value of the artist or craftsman. You are very intelligent.
E) Nope, you’re wrong. Long term use of LLMs does not make people smarter and does impair their cognitive abilities. I’ll provide some more sources for you to not read.
https://www.polytechnique-insights.com/en/columns/neuroscience/generative-ai-the-risk-of-cognitive-atrophy/ https://arxiv.org/abs/2506.08872 https://pubmed.ncbi.nlm.nih.gov/38996021/ https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1259845/full
F) Oh, you’re just an asshole, got it. I can stop arguing with you from here because you don’t care about people, you’re just reflexively defending LLMs for some reason. “The weak minded.” Fucking eugenicist Bond villain type shit.
Try realigning your moral compass toward compassion for people, and maybe you won’t reflexively make a jackass out of yourself on the internet next time.
Yes and your comment utilised that rhetorical technique. Gish gallop describes how arguments are structured and delivered, not the medium they appear in.
A) Nobody said the environmental costs were fake; the point is that “costly and harmful” does not by itself prove “nobody should use it.”
B) “Regulate harmful data centers” is a policy position, not an argument that every use of an LLM is unethical; if the problem is siting, noise, emissions, and water stress, the target is those failures, not the existence of the tool in every context.
C) AI has already contributed to genuine new findings, whether you want to admit it or not.
D) People will pay more for better products; if the work were substandard, there would be plenty of opportunity elsewhere at companies that position themselves as “slop-free.”
E) Your evidence does not justify “long-term use impairs cognitive abilities”: one widely cited paper is still an arXiv preprint on essay-writing with 54 participants, another is an opinion/reflection article, and one of the stronger experimental papers you cited actually found AI assistance increased individual creativity while reducing diversity across outputs. Check my replies, where I quoted several more rigorous studies and meta-analyses.
F) Calling someone a eugenicist does not fix the evidentiary gap; the defensible claim is that chatbots may worsen delusions or dependency in some users who are psychologically vulnerable to it and therefore need guardrails, not that ordinary use “breaks your brain” full stop.
Spend less time smelling your own farts and more time reading the actual evidence, and you might gain a clue.