A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it wasn’t AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.



Because it is the better tool for the use case he is engaging with.
You’re setting up an impossible standard, one that you don’t follow yourself.
You know that social media is used to spread propaganda throughout the world, leading to hate crimes, genocides, wars, sexual exploitation, etc. You’re still using social media. There are many more ethical ways to talk to people, so why go with social media specifically?
All you’ve discovered is that there is no ethical consumption under capitalism. You can take anything that a person does and trace the supply chain to find examples of wholly immoral behavior. Unless you plan on living in a cave, you’re going to appear like a hypocrite at the very least if you start picking apart the choices of others under that lens.
I wish you had addressed the first two paragraphs I wrote, as I feel they’re a bit more relevant and tie into the developer’s chosen behavior more than his choice of an AI helper.
What is the standard?
Many platforms make active efforts to suppress such propaganda. But I concede that people do need to choose a platform that reaches the widest possible audience, especially for a project that needs broader attention.
But an LLM isn’t this. An LLM isn’t a platform; it’s a utility tool, one for creation. A previous commenter pointed out that the developer tried to pick a model that isn’t helping the military, which suggests the developer has an ethical stance. Maybe that choice was made before Anthropic began aiding the military.
I wonder whether his choice has changed, or whether it will.