So when the news circulated recently that the Lutris developer was using Claude to help write code (and the angry posts/articles appeared), I figured I’d reach out to Mathieu to hear his side of things.
We chatted a little, and he goes into some depth on how he uses it as part of his workflow, transparency in open-source projects in general, licensing and ownership of AI-written code, safety and so on. Plenty of answers from Lutris, if you’re curious about the topic. As ever, you can find the link here:
https://gardinerbryant.com/mathieu-comandon-explains-his-use-of-ai-in-lutris-development/
Also, there is enough open source code available that I would hope Anthropic doesn’t feel the need to train their models on potentially litigious code base.
The problem with this statement is twofold. Firstly, it is unrealistic to assume that leading AI companies are staying entirely above board in terms of code licensing, and with AI this widespread it becomes all the harder for developers to enforce their licenses when others inevitably violate the terms without knowing.
Secondly, even if that code is open source, licensing terms typically require attribution that an AI is unlikely to provide for every segment of code it cobbles together. When the developers whose code was taken and reused have no way of knowing who reused it, it is disingenuous to operate under a ‘take first, ask later (if found out)’ mentality.
I already read a lot of the Lutris devs’ honest feelings about AI, and their willingness to obfuscate what they’re doing with it, in the initial issues/discussions. No offense, but I’m not all that interested in watching them attempt to whitewash and downplay what happened after they’ve had time to figure out how to spin it.
The only reason they decided to obfuscate the use of Claude was that the community started flame wars and sent them death threats over it. Nobody is downplaying anything; they literally stated that they did it because managing shit-tier Issues that were all basically “why use AI” was becoming too damaging to the project.
After reading the interview (great job btw) I can see he’s using Claude Code the correct way. As someone whose contracting day job is to code review and report on the various fuck-ups companies make with AI, him stating it’s used more as a sort of rubber duck or pair programmer is, honestly, like it or not, the correct way to use these tools.
Now, him stating that he hopes Anthropic won’t feed on what he’s produced… I wouldn’t bet on it, bud. Your code base has already been utilized.
AI is becoming a very good tool in the software industry. I think people are going to have to really consider their AI stance and home in on which parts they actually find unethical, because it will be so widespread that you need to fight against those parts instead of fighting it as a whole.
For me, it’s the copyright asymmetry and the hostile integration with existing life. I don’t want to live in a world where OpenAI can train a model off all works but I can’t do the same. I don’t want OpenAI to scrape every website relentlessly while I get blocked from scraping any large website.
For power usage, I don’t care. That’s a local government issue. If they choose to let an AI data center drive up costs and water usage, then they suck and I’ll hate them for approving it. There are plenty of places to put a data center where power isn’t an issue.
For art, it’s awful: one, because it’s trained non-consensually off artists’ works, and two, because it has no intention behind its creation. I’ve come to believe that the reason we appreciate art is the human intention that goes into its creation. That’s why there is objectively bad art we resonate with more than a perfect still life: the artist has a story alongside the piece that gives it unique value AI could never truly replicate.
This is why I can accept AI usage in software development and still hate AI. If it’s built on an open-source model, it’s fine, but I don’t want to support development using these closed-source models and end up in a world where American megacorps control the tools to create software.
Cancelled my Patreon membership over this
If you’re a software engineer and you’re not using AI, there’s a high chance you will be unable to find good work in the near future.
Software engineering has always been a race to stay on top of the latest trends in technology to stay relevant in the market.
Expecting engineers to not use AI on their own pet projects is unrealistic.
As someone whose job involved VB6 just two years ago, I think we have a very different experience of software development. Sure, some companies rush to the newest thing, but for others that costs money; they expect you to just keep plodding along with the tools you have.
If you were using VB6 just two years ago, then it’s pretty clear to me that you were in a niche, and the exception rather than the rule.
My state’s lottery team still uses COBOL, my buddy is working in Fortran, and many sites I touched are running on PHP 7 (end-of-life since Nov 2022). Not sure it’s really all that niche to work on old tech.