A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it wasn’t AI that bought all the RAM, it was OpenAI. It wasn’t AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
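For context on what “removing the co-authorship” means mechanically: Claude Code appends a `Co-Authored-By` trailer to commit messages by default, and stripping it is just a line filter over the message (e.g. in a `commit-msg` hook or a history rewrite). This is a sketch for illustration only, not the author’s actual method; the commit message below is made up.

```python
# Illustration only: strip an AI co-authorship trailer from a commit
# message, the way one might in a commit-msg hook or history rewrite.
# The example message is invented, not a real Lutris commit.

def strip_coauthor_trailers(message: str) -> str:
    """Drop 'Co-Authored-By:' trailer lines from a commit message."""
    kept = [
        line for line in message.splitlines()
        if not line.lower().startswith("co-authored-by:")
    ]
    return "\n".join(kept).rstrip() + "\n"

msg = (
    "runner: fix Wine prefix detection\n"
    "\n"
    "Co-Authored-By: Claude <noreply@anthropic.com>\n"
)
print(strip_coauthor_trailers(msg))
```

Once the trailer is gone, nothing in the commit itself distinguishes generated lines from hand-written ones, which is exactly the point being argued about below.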
you can criticise them, but ultimately they are an unpaid developer making their work freely available to the benefit of us all. at least don’t harass the developer.
You make a fair point, but I feel like the trolling reaction they gave was asking for more backlash. Not responding was probably the best move.
Trolling? They gave a pretty good answer explaining their reasoning.
I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
Seems pretty obvious to me that they knew this wouldn’t go over well. It was inflammatory by design.
Yeah ok. True. I think the rest of the post has much more weight, though. But yeah, he should have swallowed that last sentence.
Getting real tired of this armchair activism, man. I get it, we all hate LLMs but it’s literally one or two burnt out guys writing this in their spare time. If people really want to do something useful at least go and review the code and then you can shit on his work for legitimate reasons if you really do find it’s bad. Stop demonizing open source devs ffs.
I mean, I get if you wanna use AI for that, it’s your project, it’s free, you’re a volunteer, etc. I’m just not sure I like the idea that they’re obscuring what AI was involved with. I imagine it was done to reduce constant arguments about it, but I’d still prefer transparency.
I tried fitting AI into my workloads just as an experiment and failed. It’ll frequently reference APIs that don’t even exist, or over-engineer the shit out of something that could be written in just a few lines of code. Often it would be a combo of the two.
Yeah I mean. It’s not like AI can think. It’s just a glorified text predictor, the same you have on your phone keyboard
I expect because it wasn’t a user, just a random passer-by throwing stones on their own personal crusade. The project only has two major contributors, who are now being harassed in the issues for the choices they make about how to run their project.
Someone might fork it and continue with pure artisanal human crafted code but such forks tend to die off in the long run.
Considering the amount of damage AI has done to well-funded projects like Windows and Amazon’s services, I agree with this entirely. It might be crucial to help fix bigger issues down the line.
I’m the opposite. It’s weird to me for someone to add an AI as a co-author. Submit it as normal.
Tbh I agree, if the code is appropriate why care if it’s generated by an LLM
If a human is reviewing the code they submit and owning the changes I don’t care if they use an LLM or not. It’s when you just throw shit at the wall and hope it sticks that’s the problem.
I’m more concerned with the admitted OpenClaw usage. That’s a hydrogen bomb heading straight for a fireworks factory.
It’s all about curation and review. If they use AI to make the whole project, it’s going to be bloated slop. If they use it to write sections that they then review, edit, and validate; then it’s all good.
I’m fairly anti-AI for most current applications, but I’m not against purpose-built tools for improving workflow. I use some of Photoshop’s generative tools for editing parts of images I’m using for training material. Sometimes it does fine, sometimes I have to clean it up, and sometimes it’s so bad it’s not worth it. I’m being very selective, and if the details are wrong it’s no good. In the end, it’s still a photo I took, and it has some necessary touchups.
Personally, I have never seen LLM-generated code that works without needing to be edited, but I imagine for routine blocks of code and very common things it probably does fine. I don’t see why a programmer needs to rewrite the same code blocks over and over again for different projects when an LLM can do that part, leaving more time for the programmer to write the more specialized parts. The programmer will still have to edit and verify the generated code, but programming is more mechanical than something like art.
However, for more specialized code, I would be concerned. It would likely not function at all without editing, and if it did function it probably wouldn’t be optimized or secure. Then again, this programmer claims to have 30 years of experience, and if that’s the case then he likely knows this and probably edits the LLM output code himself.
As I have said before, generative AI is a tool, like Photoshop. I don’t see why people should reject a tool if it can make their job easier. It won’t be able to completely replace people effectively. Businesses will try, but quality will drop off because it’s not being used by people who understand what the end result needs to be, and businesses will inevitably lose money.
It’s still made by the slop machine, the same one that could only be created by stealing every human made artwork that’s ever been published. (And this is not “just one company”, every LLM has this issue.)
Not only that, the companies building massive datacenters are taking valuable resources from people just trying to live.
If the developer isn’t able to keep up, they should look for (co-)maintainers. Not turn to the greedy megacorps.
A few years ago we were all arguing about how copyright is unfair to society and should be abolished.
Who is we? I wasn’t.
Sure, but these same companies will drag you to court and rake you over the coals if you infringe on their copyrights.
More reason to destroy copyright.
Normal people can’t afford to fight the big companies who break theirs anyway. It’s only really a tool for big businesses to use against us.
Because it’s used to benefit megacorps in practice. This situation is just more proof of that.
Copyright is what makes the GPL license enforceable.
Licenses only matter if you care about copyright. I’d much rather just appropriate whatever I want, whenever I want, for whatever I want. Copyright is capitalist nonsense and I just don’t respect notions of who “owns” what. You won’t need the GPL if you abolish the concept of intellectual property entirely.
It is offensive to me on a philosophical level to see that so many people feel that they should have control, in perpetuity, over who can see/read/experience/use something that they’ve put from their mind into the world. Doubly so when considering that their own knowledge and perspective is shaped by the works of those who came before. Software especially. It is sad that capitalism has so thoroughly warped the notion of what society should be that even self-proclaimed leftists can’t imagine a world where everything isn’t transactional in some way.
Just like how every other human artist learned how to draw by looking at examples their art teacher gave them, aka “stealing it” in your words.
LLMs are not sentient and they’re not learning.
- Ethical issue: products of the mind are what makes us humans. If we delegate art, intellectual works, creative labour, what’s left of us?
- Socio-economic issue: if we lose labour to AI, surely the value produced automatically will be redistributed to the ones who need it most? (Yeah we know the answer to this one)
- Cultural issue: AIs are appropriating intellectual works and virtually transferring their usufruct to bloody billionaires
“If” doing all the lifting here.
If we ignore the mountain of evidence saying the opposite…
I want to one day make a game, and there is no way I’m not prototyping it with LLM code. I would want to get things finalized by a real coder if I ever got the game finished, but I’ve never made real progress on learning to code, even in school.
Yeah. Call me if he starts using AI artwork.
so you draw the line at stealing artists work, but not programmers work?
Being a developer, I don’t care if someone else uses my code. Code is like a brick. By itself it has little value; the real value lies in how it is used.
If I find an optimal way to do something, my only wish is to make it available to as many people as possible, for those who come after. Tbh all programmers have been copy-pasting from each other forever. The middle step of searching Stack Overflow or GitHub for the code you want is simply removed.
Exactly. If someone has already come up with an optimal solution, why the hell would I reimplement it? My real problems are not with LLMs themselves but rather the sourcing of the training data and the power usage. If I could use an “ethically sourced” LLM locally I’d be mostly happy. Ultimately, LLMs are also only good for code specifically. For architecture, or things that require a lot of thought like data pipelines, I’ve found AI to be pretty garbage when experimenting.
Lutris is GPL-licensed, so isn’t it the opposite of stealing?
No, the LLM was trained on other code (possibly including Lutris, but also probably like billions of lines from other things)
LLMs have stolen works from more than just artists.
ALL public repositories, at a minimum, have been used as training, regardless of license, including licenses that require all derivative work to be under the same license.
so there’s more than just Lutris stolen.
So he’s a badass Robinhood pirate that steals code from corporations and gives it to the people?
I don’t support the use of AI tools in general, but I have a soft spot for long-term maintainers. These people generally don’t have enough support for this to be a full-time hobby, and when a project becomes popular the pressure is massive.
If the community won’t step up to take the burden off the maintainer, but they still want active development, what can you do? As long as the program continues to be high quality, I can’t complain about a free thing.
I don’t know what Lutris is so I guess I’ll continue to not use it.
It’s an automated tool for pulling the latest fixes to get a game running as well as it possibly can with as little fuss as possible. Basically a bunch of scripts to automatically pull mods and configuration options and such, especially for Linux compatibility.
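Concretely, those “scripts” are Lutris installer files: YAML documents that declare which runner to use, what files to fetch, the install steps, and how to launch the result. A rough sketch from memory, so the game name and paths are invented and field names may differ slightly from the real schema:

```yaml
# Hypothetical sketch of a Lutris installer script; game name,
# URL, and paths are invented for illustration.
name: Some Game
runner: wine
script:
  files:
    - setup: https://example.com/some-game-setup.exe
  installer:
    - task:
        name: create_prefix
        prefix: $GAMEDIR
    - task:
        name: wineexec
        executable: setup
        prefix: $GAMEDIR
  game:
    exe: $GAMEDIR/drive_c/Game/game.exe
    prefix: $GAMEDIR
```

The client runs the `installer` steps in order, then remembers the `game` section so the exe launches in the right Wine prefix with the right options.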
Up until recently, Lutris worked perfectly for me. Ever since around the release of Wine 11, though, I can’t get anything to even install, let alone play. This might explain my increasing frustration with the app.
Guess I’m going back to using Bottles for the odd game or app I don’t feel like trying to shoehorn into steam.
I am very much a beginner, and until now lutris was kind of my default answer for “how the hell do I get that windows exe installer to spit its entrails so I can run it through wine” (or even native engines like VCMI, Daggerfall Unity and Creatures Docking Station).
For everything that doesn’t come from Steam, obviously.
What is the more direct way? Does Bottles do that? I haven’t tried it yet.
I wonder if they’ll be able to use this AI to finally generate SSL keys for the Debian repo…
Does everything have to be a god damn culture war now?! I really don’t give a fuck how people do their work. Judge the outcome, not the workflow. No one gave a damn how sloppily some developers hacked together solutions that are widely used. But suddenly it’s an issue if coding agents are used? WTF.
Stop the damn polarization over completely irrelevant things; we get polarized enough for political reasons; we don’t have to bring even more dissent into our communities and fuck each other up with in-fighting.
Culture war? Lol
Yes, the observation that software quality seems negatively impacted by AI use is not allowed to be expressed, because you don’t observe it yourself.
The culture war part is the call to boycott a project or shit on its author because they use coding agents, as is done throughout these comments. The whole separation into “those who use AI are bad” and “those who hate AI are good” is a culture war. A needless one at that.
TIL fact-based opinions and the arguments that come from them are “culture wars”.
I also brought facts and objective reasoning, yet I get downvoted.
Yet anecdotal comments like “I tested it myself and it sucks” get upvoted, apparently simply because they fit people’s own worldview.
That’s not polarization to you?
As I’ve said in an earlier thread, AI over-engineers code and hallucinates APIs that don’t exist. Furthermore, hallucinations themselves are a very well studied phenomenon that has proven difficult to combat. People have very legit complaints about AI that you seem determined to dismiss as nothing more than a culture war.
But those issues get caught by reviews and tests. You identified these issues and worked around them; why do you think the author of Lutris is not able to? Neither I nor the author says anyone should use AI-produced results as-is (i.e. vibe code).
AI has caused plenty of headaches for developers. This isn’t some culture war shit.
That is for each developer to decide, if they can handle it or not.
As I said: judge the result, not the workflow.
As I said: judge the result, not the workflow.
I’ve tested AI myself and seen the results. I’ll judge how I see fit.
I am not talking about the result of the AI. I am talking about Lutris. If the code that ends up in the repo is fine, it doesn’t matter if it was the author, an agent, or an agent followed by a ton of cleanup by the author. If the code is shit it also doesn’t matter if it was an incompetent AI or an incompetent human. Shitty code is shitty, good code is good. The result matters.
While I may not agree that letting AI write the code is a good idea, the complaints are dumber. It’s open source. Just fork it if you have a problem with it.
Trying to hide it is shitty and immature, though. Even more reason to just fork it. They are proving they don’t have the maturity or transparency needed to run a project like that.
I hear this argument a lot about many things but how is “just fork it” the answer? It’s not like just anyone can fork any project and continue developing it. The alternative would be forking it and consider it final version or what?
But if it’s his project and he builds the software for himself, then why should he do what someone else wants?
The just fork it argument is valid as the source is open, there are several projects born from forks. Specifically for game/wine managers there are other options…
Regardless of the slop, I’m grateful to the developers and maintainers that take the time to share their work.
It’s not like just anyone can fork any project and continue developing it.
Why not? Happens all the time.
I hear this argument a lot about many things
Perhaps there’s a reason you hear it so often about so many things?
This same principle also applies to the fediverse. An instance makes policy decisions that you don’t agree with? That’s okay, you can always host your own instance and make your own policies.
Well sure, software gets forked and continued all the time, but there’s quite a stark difference between just using open source software and actively maintaining it. Not everyone is a software developer, so I still don’t see why “just fork it” is the answer. Those who have the capabilities probably already thought of it, no?
software gets forked and continued all the time
If you understand that this is true, then I don’t really understand your argument. If this happens all the time for other software, then why can’t it for Lutris? You’re just saying that people who are not software developers cannot develop software? Okay… yeah. Those people are already dependent on software developers and their choices for all of the software they use, anyway.
I’m over here on Bazzite learning that I literally can’t uninstall it without switching to a different OS. :/
That’s the one downside of using Bazzite, but Faugus Launcher might be chosen over Lutris (leading to it being removed) if it does well in the testing branch. So I’d wait and see, because you won’t be forced to use Lutris, but it will sit there menacingly in your Bazzite installation.
That’s awesome! I’m glad work is already being done to allow for alternate launchers. I stick to just Steam, so I hadn’t even used Lutris up until now, and I was surprised how much it’s baked into the OS. Trying to uninstall it just leads to its Flatpak entry in the Discover store, where it appears to be not installed. That looked like buggy behavior, and it took some research to learn what was actually going on.
The thing that will change society is the fact this dude is covering up all the Claude LLM contributions; the transparency that fosters trust in projects is fundamentally broken. He is creating a narrative that will allow others to simply use LLM-sourced code and hide it in human-created code. I don’t like that; it’s pretty disgusting in my opinion, as people should be able to see every bit of code and know who is responsible for it. Mathieu Comandon’s integrity is shattered by this serious trespass, and it is one that he shouldn’t be allowed to get away with. Put him on blast for that; otherwise, others will try to do the same thing, potentially reducing the quality of open source projects. Claude LLM usage was already rancid enough… Enough for me to blacklist the whole thing.
As a lover of open source, I plan to skill up and start contributing myself, as there is a reality of not having enough time or people power to maintain such massive projects with a big scope. I’m in the midst of learning the basics and figuring out which programming languages are best to learn (Python is going to be my first). I don’t want the infection of LLMs to spread any further in the open source community, as there is no way that will turn out to be a net positive for it.
While I fully agree with what you’re saying here, and that it should be stated, I personally believe that the only thing he’s done here is said the quiet part out loud.
Like, other major projects are stating that the main reason they don’t do a full AI ban is that it’s increasingly difficult to look at someone’s code contributions and say: yes, that’s AI versus that’s a human.
I recently made the swap from Sublime Text to Visual Studio Code because I was sick of the degradation in Sublime Text, and there weren’t any decent alternatives after the deprecation of Atom a few years back.
I was amazed to find that, out of the box, VS Code has a full-on AI-assisted coding setup, with AI-assisted auto-completion and suggestions, and even a chat box to talk with the model of choice. This setup doesn’t add any credits or attribution by default, and while it isn’t anywhere near as integrated as a Claude setup, it’s still AI-assisted writing.
The only thing the public brigades are actually doing is making contributors hide that they are using it, which increases the problem like you mentioned.
A much better solution would be people stepping up to the plate and helping these projects, but it’s far easier to complain. I firmly understand why contributors have resorted to hiding the fact they use it; there’s far too much public outcry, and not enough support, on most open source or publicly supported projects.
It wouldn’t be such a big deal if you weren’t facing immense harassment for using these tools. I don’t blame him for saying fuck it. If the code works, and has been reviewed/modified/approved by a human, then who cares.
I care. GenAI, and AI bullshit in general, is a massive ethical, environmental, and socio-economic minefield.
Simply accepting the use of LLM tools is going to send the incorrect message, as that can be mistaken for approval. It will make techbros who peddle slopware bolder. However, I don’t condone harassment (be loud and clear, but don’t harass). It does matter, because again, transparency is key, and that builds trust in open source projects. You might not care, but there are a lot of people who feel integrity in the code base matters, as you are running that shit on your machine if you install it. Hiding it is closed source behavior, and we cannot even evaluate a lot of the code they sling.
These AI people are so delusional. They contradict themselves immediately.
But I have over 30 years of programming experience
Then you don’t need AI.
In many ways, it couldn’t have been implemented in a worse way, but it wasn’t AI that bought all the RAM, it was OpenAI. It wasn’t AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
??? The common denominator is AI. By using it you are part of the problem. All mainstream AI is trained on stolen data.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services.
Then don’t?
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves.
The “tools” require large amounts of storage, RAM, electricity, water etc etc. The only tool is the end user.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not.
So they’re just an asshole, and their excuse-making is irrelevant.
This guy has been maintaining that huge project for an eternity in his free time, but entitled hypocrites like you have the audacity to call him an asshole. No one needs your recommendations. Even if you have the experience to maintain and develop a project for 16 years, and your brain is capable of keeping everything in context and typing hundreds of lines manually for the most tedious tasks, good for you; but there are different people with different brains. AI helpers with proper tooling are a good instrument in the hands of a good engineer. They are basically better autocomplete and searching tools, and they are amazing ‘rubber duck’ companions, making the coding process psychologically easier if you’re stressed, anxious, or depressed but need the job done. If you think about what you’re doing, you won’t produce slop whatever instrument you use; if not, you’ll write slop without AI too.
When the bubble pops soon, AI will have to become sustainable economically and ecologically. The same happened during the dotcom bubble.
So either help the project, or leave open source devs alone.
Nobody is beyond reproach, and nobody gets free passes, especially with the flagrant attitude they’ve shown toward concerns and criticism.
Yeah, you can be glad this guy is putting in his hours and life into a free project but still be upset by the decisions he makes. I get not harassing him but we aren’t allowed to critique people either? Especially for an app that is widely used.
If it upsets you then fork it
I don’t mind if the developer adds AI-generated code, but if they mix it with their own work without appropriate attribution in a way that it could be considered all AI-generated, it may become un-copyrightable.
I don’t think in this case it will really matter, since it’s GPL anyway, so the worst-case scenario is some private company takes the code and tries to use it without giving back. But I can see the issue with other projects, or if they wanted to use a more restrictive license.
It’s a mixed bag, I’m pretty neutral on it since it prevents copyleft licensing as much as copyright.