A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it wasn’t AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

    • FauxLiving@lemmy.world · 16 hours ago

      A rational person would question why they hold beliefs such that, when confronted with evidence against those beliefs, they decide the evidence is wrong rather than the beliefs.

      It could indicate that the person’s beliefs are not built on rational grounds.

      • thedeadwalking4242@lemmy.world · 15 hours ago

        Because in my personal experience with these tools, 25% doesn’t seem quite right.

        Besides, these companies have a monetary incentive to ensure LLMs show high numbers on these tests. One of the most widely used tests (SWE-bench Verified) is itself a curated selection of problems. In real-world usage the failure rate is going to be much higher.

        A rational person trusts but verifies, and at least for me the verification doesn’t hold up to even a tiny bit of scrutiny, so having doubts is a perfectly healthy thing to do.

        Just because someone disagrees with a data source does not make them irrational. There are some extremely well-verified truths that it would be irrational to dismiss, but not all data sources / studies have had that amount of rigor applied to them. Data can tell a story, but it doesn’t always tell the whole truth. People manipulate data to their own benefit.

        People confuse the scientific method and academic research for “this one academic source says this, so it must be true” when really you need more than that.