If the lowest-paid intern gets to use AI, then it will probably help them configure it properly... the docs generally aren't bad (of the ones I've seen/used), but they're not newbie/intern-level docs.
edit: That's a lot of downvotes for suggesting AI is useful at a primarily language/grammatical problem (i.e. helping to craft security/sandbox policy DSLs from terse docs and examples)? I detect some gut-reaction insecurities in these parts.
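For reference, here is a minimal sketch of the kind of sandbox-policy DSL I mean: an AppArmor profile for a hypothetical `/usr/bin/example` (the binary, paths, and rules are made up, just to show the shape of the language):

```
# Minimal AppArmor profile sketch; the binary and paths are hypothetical.
#include <tunables/global>

/usr/bin/example {
  #include <abstractions/base>

  /etc/example/** r,        # read-only access to its own config
  /var/lib/example/** rw,   # read/write access to its own state
  deny /home/** rwx,        # explicitly no access to user home directories
  network inet stream,      # ordinary TCP sockets only
}
```

The grammar itself is simple; knowing which abstractions to include and which globs are too broad is exactly where terse docs bite a newbie.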
I barely trust natural intelligence with anything relating to security.
"Trust but verify" ... which just means doing due diligence as a professional, whether the crap^H^H^H^Hquality code and documentation is written by a human or AI.
Humans are incredibly good at saying dumb shit while making it seem like it could be the right thing, but LLMs are arguably better at it.
And you, and I, and everyone here, will fall for it... not always, but too often. We are all lazy thinkers by nature.
Perhaps.
Of course, the creators of the security modules could build tools to help them be used better. Maybe not on first release, but at least after others complain about the difficulty. AppArmor did attempt some tools, and is far better than SELinux in that regard. Still not great.
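For anyone curious, the AppArmor tooling mentioned above ships in the apparmor-utils package; a rough sketch of the workflow (the target binary is again a hypothetical example):

```
sudo aa-genprof /usr/bin/example   # watch the program run and draft an initial profile
sudo aa-complain /usr/bin/example  # complain (log-only) mode while testing
sudo aa-logprof                    # fold logged denials back into the profile
sudo aa-enforce /usr/bin/example   # switch to enforce mode once it looks right
```

It works, but you still have to understand what each suggested rule actually grants, which is where the "not newbie-level docs" problem comes back in.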