this post was submitted on 22 Aug 2025
97 points (85.9% liked)
Linux
Same with human-generated code. AI bugs are not magically more creative than human bugs. If the code isn't readable or doesn't follow conventions, you reject it regardless of what generated it.
You don't need an official policy to reject a barrage of AI slop patches. If you receive too many patches to process, you change the submission process. It doesn't matter whether the patches are AI slop or not.
Spamming maintainers is obviously bad, but saying that anything AI-generated in the kernel is a problem in itself is bullshit.
I never said that.
You may think that, but preliminary controlled studies do show that more security vulns appear in code written by a programmer who used an AI assistant: https://dl.acm.org/doi/10.1145/3576915.3623157
More research is needed, of course, but I imagine that because humans are capable of more sophisticated reasoning than LLMs, an implementation derived from a human mind tends, on average, to be more robust.
I'm not categorically opposed to the use of LLMs in the kernel, but it's obviously an area where caution needs to be exercised, given that it's a kernel that millions of people use.