[-] Pika@sh.itjust.works 12 points 3 days ago* (last edited 3 days ago)

If the bug was actually legitimate and verified, I don't think it's a good idea to just wait until someone actually experiences it.

Of course, this depends on the severity of the bug as well. In the case of this article, he refused to submit anything until he had actually verified it, but he was definitely using the AI as the origin of discovery.

I would prefer those types of reports over blanket AI vulnerability reports that aren't proven. Discrediting a valid bug because it was not human-generated may lessen your workload, but at the cost of your software's security and reliability.

I agree I would throw out reports that are AI-driven and not proven, but if someone did an actual PoC and demonstrated real risk, I wouldn't care whether it was originally AI or not. I would just assign it based on severity like normal.
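The triage policy described above can be sketched in code. This is a hypothetical illustration, not any project's actual process: the `BugReport` fields, the `triage` function, and the severity labels are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    ai_assisted: bool   # AI was used as the origin of discovery
    has_poc: bool       # reporter supplied a verified proof of concept
    severity: str       # e.g. "low", "medium", "high", "critical"

def triage(report: BugReport) -> str:
    """Hypothetical policy from the comment: unproven AI-driven reports
    are discarded, while any report with a demonstrated PoC is queued
    by severity regardless of how it was originally found."""
    if report.ai_assisted and not report.has_poc:
        return "rejected"           # blanket AI report, no demonstrated risk
    if report.has_poc:
        return f"queued:{report.severity}"  # assign based on severity like normal
    return "needs-verification"     # human-filed but still unproven
```

The key point is that the branch on `has_poc` comes before any consideration of `ai_assisted` once a PoC exists, matching the commenter's position that provenance stops mattering after verification.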

[-] FauxLiving@lemmy.world 3 points 2 days ago

Letting your users get hacked just to own the AIs is certainly a strategy.

this post was submitted on 03 Apr 2026
29 points (73.0% liked)

Linux