• XLE@piefed.social
    2 days ago

    This seems like an ill-thought-out decision, especially in a landscape where Linux should be differentiating itself from, not following, Windows.

    The titular “slop” just means “bad AI generated code is banned” but the definition of “bad” is as vague as Google’s “don’t be evil.” Good luck enforcing it, especially in an open-source project where people’s incentives aren’t tied to a paycheck.

    The title is also inaccurate regarding Copilot (Microsoft's brand of AI tool). As a comment there mentions, it

    says yes to Copilot

    Where in the article does it say that? The only mention of Copilot is where it talks about LLM-generated code having unverifiable provenance.

    • Naich@piefed.world
      2 days ago

      Google’s “don’t be evil” was like a warrant canary. It didn’t need to be precise; it just needed to be there.

    • Avid Amoeba@lemmy.ca
      2 days ago

      They’re already enforcing it. PRs are reviewed and bad ones are rejected all the time.

        • Avid Amoeba@lemmy.ca
          2 days ago

          It’s also probably possible to teach an agent this opinion, to help with review.

          • XLE@piefed.social
            1 day ago

            So you’re advocating in favor of more AI in more steps of the process?

            • Avid Amoeba@lemmy.ca
              10 hours ago

              Not necessarily Big Tech’s AI, but “a program” that can automate this part of the PR process. I’m not interested in a program that gives pointless or bad suggestions. I’m interested in a program that can spot pattern X, about which I always say “this is bad because Y”, and print that for me. If it were easy to write a classic program to do this, I would have written it. If that’s easy with LLMs, I’d train my local Qwen or whatever to do it. Not a faceless corpo that runs this on gas turbines, poisoning the people around them and lying to me about how much it costs me.
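              A trivial version of the “classic program” described here could look like the sketch below; the rule table, messages, and helper name are all made up for illustration, and a real reviewer’s rules would be far richer than regexes:

```python
import re

# Hypothetical rule table: maps a "pattern X" regex to the reviewer's
# stock explanation "this is bad because Y".
RULES = {
    r"\bstrcpy\s*\(": "this is bad because strcpy performs no bounds checking",
    r"\bgets\s*\(": "this is bad because gets cannot limit input length",
}

def review(diff_text):
    """Return (line number, canned comment) pairs for added lines in a diff."""
    comments = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        if not line.startswith("+"):  # only flag lines the PR adds
            continue
        for pattern, reason in RULES.items():
            if re.search(pattern, line):
                comments.append((lineno, reason))
    return comments
```

              The point of the sketch is the commenter’s distinction: this is deterministic pattern-spotting that prints a human-written explanation, not a model generating suggestions.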

    • anarchiddy@lemmy.dbzer0.com
      2 days ago

      If you think “bad” is too vague, that isn’t a new problem.

      Linux has always had to reject “bad” code submissions - what’s new here is that the kernel team isn’t willing to prejudge all AI code as “bad”, even if that would be easier.