This seems like an ill-thought-out decision, especially in a landscape where Linux should be differentiating itself from, and not following, Windows.
The titular “slop” just means “bad AI generated code is banned” but the definition of “bad” is as vague as Google’s “don’t be evil.” Good luck enforcing it, especially in an open-source project where people’s incentives aren’t tied to a paycheck.
The title is also inaccurate regarding Copilot (the Microsoft-branded AI tool), as a comment there mentions:
says yes to Copilot
Where in the article does it say that? The only mention of Copilot is where it talks about LLM-generated code having unverifiable provenance.
Not necessarily Big Tech’s AI, but “a program” that can automate this part of the PR process. I’m not interested in a program that gives pointless or bad suggestions. I’m interested in a program that can spot pattern X, to which I always say “this is bad because Y,” and print that for me. If it were easy to write a classic program to do this, I would have written it. If it’s easy with LLMs, I’d train my local Qwen or whatever to do it. Not a faceless corpo that runs this on gas turbines, poisoning the people around them and lying to me about how much it costs me.
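For the “classic program” case, a minimal sketch of such a pattern-matcher is below. The rules (regexes and canned objections) are hypothetical examples for illustration, not actual kernel review policy:

```python
import re

# Hypothetical review rules: regex pattern -> canned explanation.
# These examples are illustrative, not real kernel policy.
RULES = [
    (re.compile(r"\bstrcpy\s*\("),
     "this is bad because strcpy has no bounds check; use strscpy"),
    (re.compile(r"==\s*NULL\b"),
     "this is bad because kernel style prefers '!ptr' over '== NULL'"),
]

def review(diff_text: str) -> list[str]:
    """Scan the added ('+') lines of a unified diff and return canned objections."""
    comments = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        if not line.startswith("+"):
            continue  # only review added lines
        for pattern, message in RULES:
            if pattern.search(line):
                comments.append(f"line {lineno}: {message}")
    return comments

print(review("+    strcpy(dst, src);\n-    old_line();"))
# → ["line 1: this is bad because strcpy has no bounds check; use strscpy"]
```

This is essentially what checkpatch-style scripts already do; the hard part the commenter is describing is covering patterns too fuzzy for a regex, which is where a locally trained model might fit.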
If you think “bad” is too vague, that isn’t a new problem.
Linux has always had to reject ‘bad’ code submissions; what’s new here is that the kernel team isn’t willing to prejudge all AI code as “bad”, even if that would be easier.
Google’s “don’t be evil” was like a warrant canary. It didn’t need to be precise, it just needed to be there.
They’re already enforcing it. PRs are reviewed and bad ones are rejected all the time.
I also want to say that Linus is still the one merging things into the kernel and he is ahm… opinionated?
It’s also probably possible to teach an agent this opinion to help review.
So you’re advocating in favor of more AI in more steps of the process?