• Serinus@lemmy.world
    13 hours ago

    People have trouble with the middle ground. AI is useful in coding. It’s not a full replacement. That should be fine, except you’ve got the AI techbros and CEOs on one end thinking it will replace all labor, and you’ve got the backlash on the other end that wants to constantly talk about how useless it is.

    • brianpeiris@lemmy.ca
      5 hours ago

      I suspect the problem is that there are many developers nowadays who don’t care about code quality, actual engineering, and maintenance. So the people who are complaining are right to be concerned: there is going to be a ton of slop code produced by AI-bro developers, and the developers who actually care will be left to deal with the aftermath. I’d be happy if lead developers were prepared to try things with AI and, importantly, to throw the output away when it doesn’t meet coding standards. Instead I think even lead developers and CTOs are chasing “productivity” metrics, which just translates to a ton of sloppy code.

      • Serinus@lemmy.world
        5 hours ago

        Yeah, I don’t plan to leave in two years, so I’m motivated not to say “oh fuck” later when I have to maintain the thing I built.

        Plus, you know, I don’t want people to groan when they have to work on my code.

      • MinnesotaGoddam@lemmy.world
        10 hours ago

        the times i trust LLMs: when i’m using them to look up stuff i’ve already learned but can’t remember and just need to refresh my memory. there’s no point memorizing shit i can look up and am not going to use regularly, and i’m the effective guardrail against the LLM being wrong when i’m using it that way.

        the times i don’t trust the LLMs: all the other times. if i can’t effectively verify the information myself, why am i going to an unreliable source?

        having to explain that nuance over and over gets old, so it’s just shorter and easier to say the llm is an unreliable source. which it is. when i’m not being lazy, my own output doesn’t need testing (it still gets at least 2 reviews, but the last time those reviews caught anything was years ago). the llm’s output always needs testing.