A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:

It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.

There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way. But it was not AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.

I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.

Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.

  • P03 Locke@lemmy.dbzer0.com · 18 hours ago

    the problem is that if I have to nanny it to make sure it doesn’t make a mistake then how is it a useful product?

    When was the last time you coded something perfectly? “If I have to nanny you to make sure you don’t make a mistake, then how are you a useful employee?” See how that doesn’t make sense? There’s a reason why good development shops live on the backs of their code reviews and review practices.

    The math ain’t matching on this one.

    The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.

    There’s also something to be said about the value in being able to tell an LLM to go chew on some code and tests for 10 minutes while I go make a sandwich. I get to make my sandwich, and come back, and there’s code there. I still have to review it, point out some mistakes, and then go back and refill my drink.

    And there’s so much you can customize with personal rules. Don’t like its coding style? Write Markdown rules that reflect your own style. Have issues with it tripping over certain bugs? Write rules or memories that remind it to be more aware of those bugs. Are you explaining a complex workflow to it over and over again? Explain it once, and tell it to write the rules file for you.
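
    A minimal sketch of what such a rules file could look like. Everything below is invented for illustration (the headings, the `reload_config()` helper, the `docs/schema.md` path), and different tools read rules from different places, such as a CLAUDE.md or .cursorrules file:

```markdown
<!-- Hypothetical project rules file -->
# Project rules

## Coding style
- Use 4-space indentation and snake_case for function names.
- Prefer early returns over nested conditionals.

## Known pitfalls
- The config loader caches results; call `reload_config()` after
  editing settings in tests.

## Workflow
- Before changing the database layer, re-read `docs/schema.md` and
  list the tables you intend to touch.
```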

    All of that saves more and more time. The more rules you have for a specific project, the more knowledge it retains about how to code for that project, and the more experience you gain in communicating with an entity that can understand your ideas. You wouldn’t believe how many people can’t rubberduck and explain concepts properly to other people, much less to LLMs.

    LLMs are patient. They don’t give a shit if you keep demanding more and more tweaks and fixes, or if you have to spend a bit of time trying to explain a concept. Human developers would get tired of your demands after a while, and tell you to fuck off.

    • BorgDrone@feddit.nl · 9 hours ago

      The math is just fine. Code reviews, even audit-level thorough ones, cost far less time than doing the actual coding.

      But the problem never was typing in the actual code. The majority of coding is understanding the problem you’re trying to solve and figuring out a good solution. If you let the AI do the thinking for you, then you’re building AI slop. You can’t review your way out of it because a proper review still requires that level of understanding the problem. If you just let the AI do the typing for you, there’s very little to be gained there as the time spent typing is negligible.

      AI may be good at building simple, boilerplate-level code. But that’s what we have junior developers for, and we need junior developers because they grow into mid-level and senior developers.

      • r1veRRR@feddit.org · 7 hours ago

        This really depends on the project. For example, if you’re creating a CRUD web app for managing some kind of data, the main tough decisions involve system and data architecture. After that, most of the work is straightforward menial work. It doesn’t take a genius to validate a gajillion text fields against a specific min and max length, map them to the correct field in the API, validate again on the server, and write them to the correct database field.
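
        A minimal sketch of the kind of boilerplate validation this describes, assuming nothing about any particular framework; the field names and length limits are invented for illustration:

```python
# Hypothetical field-length validation of the sort a CRUD app repeats
# on both the client and the server before writing to the database.
# Field names and limits are made up for this example.

FIELD_LIMITS = {
    "username": (3, 32),
    "email": (5, 254),
    "bio": (0, 500),
}

def validate_fields(payload: dict) -> list[str]:
    """Return human-readable errors; an empty list means the payload is valid."""
    errors = []
    for field, (min_len, max_len) in FIELD_LIMITS.items():
        value = payload.get(field, "")
        if not isinstance(value, str):
            errors.append(f"{field}: expected a string")
        elif not (min_len <= len(value) <= max_len):
            errors.append(f"{field}: length must be between {min_len} and {max_len}")
    return errors

# The same check would run client-side and again server-side.
print(validate_fields({"username": "ab", "email": "x@y.io", "bio": ""}))
# -> ['username: length must be between 3 and 32']
```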

        I agree that AI might screw companies over in the long run, when there’s no more juniors that can become seniors. That doesn’t apply to this case at all.

      • P03 Locke@lemmy.dbzer0.com · 9 hours ago

        If you let the AI do the thinking for you, then you’re building AI slop.

        No, for major projects, you start out with a plan. I may spend upwards of 2-3 hours just drafting a plan with the LLM, figuring out options, asking questions when it’s an area I’m not deeply familiar with, and sketching out what the modules are going to look like. It’s not slop when you’re planning out what to do and what your end result is supposed to be.

        We are not the same

        People who talk this way have zero experience with actually using LLMs, especially coding models.

        • Auli@lemmy.ca · 1 hour ago

          Oh, so I didn’t vibe-code a Go program in a language I have no understanding of, because I knew what I wanted the program to do in the end? Got it, I’m a Go developer now. I didn’t just ask the AI to do something: I knew which library I wanted it to use, knew what I wanted it to interface with, and knew exactly what I wanted it to do.