• Gawdl3y@pawb.social

    The alternative here is they don’t allow it and get a bunch of MRs sneakily using AI anyway but not disclosing it. I’d rather be aware that an MR was made with AI than not, personally, so I think this is probably the right move.

    • Encrypt-Keeper@lemmy.world

      I mean, shouldn’t somebody be reviewing these MRs anyway? I’m an infra guy, not a programmer, but doesn’t it not really matter how the code in an MR was made, as long as it’s reviewed and validated?

      • calcopiritus@lemmy.world

        The problem with that is that reviewing takes time. Valuable maintainer time.

        Curl faced this issue. Hundreds of AI-slop “security vulnerability” reports were submitted to curl. Since they’re security reports, they can’t just be ignored; the maintainers had to read every one of them, only to find out they weren’t real. A huge waste of time.

        Most of the slop was basically people typing into ChatGPT “find me a security vulnerability in a project that has a bounty for finding one” and copy-pasting whatever it said into a bug report.

        With simple MRs, at least you can just ignore the AI ones and prioritize the human ones if you don’t have enough time. But that will just lead to AI slop not being marked as such, in order to skip the low-priority AI queue.

    • tabular@lemmy.world

      If one wants to avoid software with AI code, then knowing which MRs need replacing helps. However, accepting it encourages more of it, and makes it less feasible that you could ever prune all the MRs written in part by AI. Disclosure will become worthless if it becomes the norm.

      • Attacker94@lemmy.world

        If the code is good, I don’t have an issue with it being merged even if AI was used. That being said, I bet the obvious outcome is that either people ignore the policy and nothing changes, or they comply and most reviewers focus on the non-AI group, which is how it was before AI. All in all, this decision can never hurt development, since as far as I am aware there is no requirement to review an MR.

    • Kay Ohtie@pawb.social

      I hate that this is almost certainly the most accurate answer. Maybe disclosure will shame people into not submitting, more often than shame would have stopped people sneaking it in.