A user asked on the official Lutris GitHub two weeks ago, “is lutris slop now”, noting an increasing amount of “LLM generated commits”. The Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in catching up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought up all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.



Well, I’m not a code monkey, what with dyslexia and an aging brain. But if it’s anything like the tiny bit of coding I used to be able to do (back in the days of BASIC and Pascal), you don’t really have to pore over every single line. The only time that’s needed is when something is broken. Otherwise, you’re scanning to keep oversight, which is no different from reviewing a human’s code that you didn’t write.
Look at it like this: we automated the assembly of machines a long time ago. It had flaws early on that required intense supervision. On a practical level, the only difference here is how the damn things learned in the first place. Automating code generation is far more similar to that than to LLMs generating text or images, which aren’t logical by nature.
If the code used to train the models was good, what it outputs will be no worse than some high school kid in an AP class stepping into their first serious challenges. It will need review, but if the output is going to be open source to begin with, it’ll get that review even if the project maintainers slip up.
And being real, Lutris has been very smooth across the board while using the generated code so far. If he gets lazy, it could go downhill, but the same would be true if he got lazy with his own code.
Here’s another concept I’m more familiar with that does relate: writing fiction can take months, while editing fiction usually takes days, and you can still miss things (my first book has typos and errors to this day because of the aforementioned dyslexia and my not having a copy editor).
My first project back in the eighties, written in BASIC, took me three days to crank out during the summer program I was in. The professor running the program took an hour to scan and correct that code.
Maybe I’m too far behind on the various languages, but I really can’t see scanning and editing the output of an LLM being a massively harder proposition.