Two weeks ago, a user asked on the official Lutris GitHub “is lutris slop now”, noting an increasing number of “LLM generated commits”. The Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It helped me tremendously in catching up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, and I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I suspected that this “issue” might come up, so I removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society; that requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.
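
For context on the mechanics: AI coding tools typically credit themselves with a Git “Co-authored-by:” trailer at the end of each commit message, so removing the credit means filtering those lines out of the messages. Here is a minimal sketch of that filtering logic, assuming (not confirmed by the source) that Claude’s trailer starts with “Co-Authored-By: Claude”; git-filter-repo’s --message-callback, for instance, hands you each commit message as bytes and rewrites history with whatever you return:

```python
# Minimal sketch: drop AI co-author trailers from a commit message.
# The exact trailer text ("Co-Authored-By: Claude ...") is an assumption;
# adjust the marker to whatever your tooling actually writes.
def strip_ai_trailer(message: bytes, marker: bytes = b"Co-Authored-By: Claude") -> bytes:
    # Keep every line that does not start with the co-author marker.
    kept = [line for line in message.split(b"\n") if not line.startswith(marker)]
    return b"\n".join(kept)
```

This only rewrites the commit messages, of course; as the reply says, good luck telling generated code from hand-written code after that.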

  • antihumanitarian@lemmy.world · 2 days ago

    If you’re honestly asking: LLMs are much better at coding than at any other skill right now. On one hand, there’s a ton of high quality open source training data that was appropriated; on the other, code is structured language, so it’s very well suited to what models “are”. Plus, code is mechanically verifiable. If you have a bunch of tests, or have the model write tests, it can check its work as it goes (see the sketch below this comment).

    Practically, the new high end models, GPT 5.4 or Claude Opus 4.6, can write better code faster than most people can type. It’s not like 2 years ago, when the code mostly wouldn’t build; now they can write hundreds or thousands of lines of code that works first try. I’m no blind supporter of AI, and it’s very emotionally complicated watching this after years of honing the craft, but for most tasks it’s simple reality that you can do more with AI than without it, whether that’s higher quality, higher volume, or integrating knowledge you don’t have.

    Professionally I don’t feel like I have a choice, if I want to stay employed in the field at least.
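
That “check its work” loop is the crux of the argument: a failing test is a machine-readable signal the model can rerun after every edit. A minimal sketch of what such a checkable unit looks like, with the function and test names invented for illustration:

```python
# Toy example (hypothetical names): the test gives a coding agent a
# pass/fail signal it can re-run after every change it makes.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

def test_slugify() -> None:
    assert slugify("Lutris Game Manager") == "lutris-game-manager"
    assert slugify("  extra   spaces  ") == "extra-spaces"

if __name__ == "__main__":
    test_slugify()  # raises AssertionError if slugify regresses
    print("ok")
```

Run under pytest or executed directly, this turns “is the code correct?” into a yes/no answer an agent can query in a loop.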

    • Venia Silente@lemmy.dbzer0.com · 1 day ago

      Professionally I don’t feel like I have a choice, if I want to stay employed in the field at least.

      On the contrary!

      I’ve seen quite a number of “AI cleanup specialist” job postings so far, and even a few consulting positions for training juniors away from using AI in development.

      (No, I have not seen any position open on training management away from using AI…)

    • Arkthos@pawb.social · 1 day ago

      Now software architecture on the other hand? Oh boy Claude Opus and the rest suck ass at that.

      My own experience has been that it works pretty well if you give it relatively isolated, discrete chunks of code, and it’s really nice for reviewing as well. Just unleash it on a whole code base, though, and you’ll end up with a massive mess.