A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.



Does everything have to be a god damn culture war now?! I really don’t give a fuck how people do their work. Judge the outcome not the workflow. No one gave a damn how sloppy some developers hacked together solutions that are widely used. But suddenly it’s an issue if coding agents are used? WTF.
Stop the damn polarization for completely irrelevant things; we get polarized enough for political reasons; we don’t have to bring even more dissent into our communities and fuck each other up with in-fighting.
Culture war? Lol
Yes, the observation that software quality seems negatively impacted by ai use is not allowed to be expressed, because you don’t observe it.
The culture war part is the call to boycott a project or shit on its author because they use coding agents, as is done throughout these comments. The whole separation into “those who use AI are bad” and “those who hate AI are good” is a culture war. A needless one at that.
TIL fact-based opinions and the arguments that come from them are “culture wars”.
I also brought facts and objective reasoning, yet I get downvoted.
Yet anecdotal comments like “I tested it myself and it sucks” get upvoted; apparently simply because it fits their own worldview.
That’s not polarization to you?
It’s for sure a polarizing topic, I just don’t see how it’s a culture war. “Sub-culture war” maybe?
Ok, maybe I misuse the word. If that’s the case, sorry about that. But I hope my point comes across anyway: I really, really dislike that the community (or multiple communities, even) gets split between people who are ok with AI and people who are against AI. This is, IMO, completely unnecessary. That doesn’t mean everyone should be ok with it, but we should not judge or condemn each other over a different opinion on the matter.
If you notice a project goes downhill, it’s fine to criticize the author (or the whole project) for the degradation in quality. If there are strong indicators that AI is involved, by all means leave a snarky remark about that while complaining. But ultimately it’s the fuckup of a human.
What you’re taking issue with though is deeper than ai. It’s online discourse that is so rude and nuance-less.
In any case, this thread is full of people saying things like “that’s his right to do this but he communicated poorly about this” and getting piles of upvotes. So, yes ai is very polarizing in this corner of the Internet, but I think it’s much more at issue here that people don’t like his handling of it. I know that personally if it weren’t for that I probably would’ve thought “hmm sounds sketchy to use ai in a product thousands of people depend on” and kept scrolling. But no, he was a dick about it and is now hiding his use of ai moving forward. So the people who hate AI are extra pissed about it. Likely because they fear others will follow that lead and enshittify the software they currently enjoy.
Is flat vs. round Earth a culture war in your mind?
The way flat earthers act? Yes. They treat it as a culture war. Just like anti-vaxxers.
As I’ve said in an earlier thread, AI over-engineers code and hallucinates APIs that don’t exist. Furthermore, hallucinations themselves are a very well studied phenomenon that has proven difficult to combat. People have very legit complaints about AI that you seem determined to dismiss as nothing more than a culture war.
But those issues get caught by reviews and tests. You spotted these issues and worked around them; why do you think the author of Lutris is not able to? Neither I nor the author is saying anyone should use AI-produced results as-is (i.e. vibe code).
AI has caused plenty of headaches for developers. This isn’t some culture war shit.
Whether they can handle it or not is for each developer to decide.
As I said: judge the result, not the workflow.
This kind of seems like bad advice in general. The process to create a result is often extremely important to be aware of. For example, if possible, I would like to not consume products built with slave labor.
Depends. If you are generally careful about what products/projects you use and audit them, and you notice that the owner has horrible code hygiene, bad dependency management, etc., then sure. But why judge them for the tools they use? You can still audit the result the same way. And if you notice that code hygiene and dependencies suck, does it matter whether they suck because the author misused coding agents, because they simply didn’t give a damn, or because they are incapable of doing any better?
You’ve likely stumbled on open source repos in the past where you rolled your eyes after looking into them. At least I have. More than once. And that was long, long before we had coding agents. I’ve used software where I later saw the code and was surprised it ever worked. Hell, I’ve found old code of mine where I wondered why it ever worked and what the fuck I’d been smoking back then.
It’s ok to consider agent usage a red flag that makes you look closer at the code. But I find it unfair to dismiss someone’s work or abilities just because they use an agent, without even looking at what they (the author, ultimately) produce. And by produce I don’t mean the final binary, but their code.
I’ve tested AI myself and seen the results. I’ll judge how I see fit.
I am not talking about the result of the AI. I am talking about Lutris. If the code that ends up in the repo is fine, it doesn’t matter if it was the author, an agent, or an agent followed by a ton of cleanup by the author. If the code is shit it also doesn’t matter if it was an incompetent AI or an incompetent human. Shitty code is shitty, good code is good. The result matters.
There’s a problem with that. The vast majority of Linux users are probably more tech savvy than average, but I’d wager not all of them, or even a majority, have the skills to vet the code.
Lots of the people in the gaming space who are having Lutris suggested/recommended to them are not going in to check that code for problems. They install the flatpak and move on with their lives.
It appears (from what I’ve read, which isn’t necessarily the end-all be-all) that the people taking exception to the use of AI to code Lutris are doing so because they do decompile and vet code.
My understanding is that it’s harder to vet AI code in general, because when it hallucinates it may do so in ways that appear correct on the surface, and/or in ways that don’t even give a significant indication of what that code is attempting to do. This is the problem with vibe coding in general, from my understanding, and it becomes harder and harder even for senior engineers to check the output because of the lack of a frame of reference.
You’re asking people who don’t have the skills to ignore people who do have the skills who are sounding the alarm.
I get that this person is a single person writing code and disseminating it for free. I get that we should be thankful for free and open software. I fully understand why this person might use AI to help with coding.
I understand that they are upset about the backlash. But that was a very much foreseeable consequence of the credits they gave the AI (a choice they made), and honestly the use of AI (which might have been called out later on if they hadn’t credited it).
They shot themselves in the foot with the part of their response that was flippant and a “fuck you” to anyone who might find the use of AI concerning.
There’s also the fact that AI is something a lot of people in the Linux community at large seem to already be boycotting, and boycotting derivatives of it makes sense.
Just because you create something for free doesn’t mean people have to use it. Or that people aren’t free to boycott it.