The alternative here is that they don't allow it and get a bunch of MRs that sneakily use AI anyway without disclosing it. I'd rather be aware that an MR was made with AI than not, personally, so I think this is probably the right move.
I mean also shouldn’t somebody be reviewing these MRs? I’m an infra guy not a programmer but doesn’t it like, not really matter how the code in the MR was made as long as it’s reviewed and validated?
The problem with that is that reviewing takes time. Valuable maintainer time.
Curl faced this issue. Hundreds of AI slop "security vulnerabilities" were submitted to curl. Since they were reported as security vulnerabilities, the maintainers couldn't just ignore them; they had to read every one of them, only to find out they weren't real. A huge waste of time.
Most of the slop was basically people typing into ChatGPT "find me a security vulnerability in a project that has a bounty for finding one" and copy-pasting whatever it said into a bug report.
With simple MRs, at least you can just ignore the AI ones and prioritize the human ones if you don't have enough time. But that will just lead to AI slop not being marked as such in order to skip the low-priority AI queue.
Oh, come the fuck on…