Well hopefully you can't harm your computer with userland programs. Windows is perhaps a bit messy at this, generally, but Unix-like systems have pretty good protections against non-superusers interfering with either the system itself, or other users on the system.
Having drivers run in the kernel and applications run in userland also means unintentional application errors generally won't crash your entire system. Which is pretty important.
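As a quick sketch of both protections on a typical Linux box (exact shell messages vary by distro and shell):

# A non-superuser can't touch system files; the kernel refuses the write:
$ echo test > /etc/hosts
bash: /etc/hosts: Permission denied
# And a crashing userland process only takes itself down, not the system:
$ sh -c 'kill -SEGV $$'
Segmentation fault (core dumped)
$ # the parent shell, and everything else, carries on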
Windows 7 and later have even better anti-non-superuser protections than Unix-like systems. It's taken a while for Linux to add a capabilities permission system to limit superusers, something that's been available on Windows the whole time.
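As a concrete sketch of what the Linux capability system buys you (./server is a hypothetical binary): instead of running a whole program as root, you can grant it the single root privilege it actually needs:

# Let a web server bind port 80 without being root;
# cap_net_bind_service is the only superuser power it receives:
$ sudo setcap cap_net_bind_service=+ep ./server
$ getcap ./server
./server cap_net_bind_service=ep

(the exact getcap output format depends on your libcap version)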
Er, SELinux was released nearly a decade before Windows 7, and was integrated into the mainline kernel just a few years later, even before Vista added UAC.
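And it's trivial to check on a current distro, assuming the SELinux userland tools are installed:

$ getenforce
Enforcing                # or Permissive/Disabled, depending on the distro
$ sudo setenforce 0      # switch to permissive mode at runtime
$ sestatus               # fuller report: mode, policy, mount point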
Big difference between "not available" and "often not enabled".
Windows 95 already had an equivalent of SELinux in the Policy Editor, "often not enabled". UAC is the equivalent of sudo, previously "not available".
Windows 7 also had runtime driver and executable signature testing ("not available" on Linux), virtual filesystem views for executables ("not available" on Linux), overall system auditing ("often not enabled" on Linux), an outbound per-executable firewall ("not available" on Linux), extended ACLs for the filesystem ("often not enabled" and in part "not available" on Linux)... and so on.
Now, Linux is great: it had a much more solid kernel model from the beginning, and being open source allows having a purpose-built kernel for security, flexibility, tinkerability, or whatever. But it's still lacking several security features from Windows, which are useful in a general-purpose system that lets end users run random software.
Android had to fix those shortcomings by pushing most software into a Java-style VM, while Flatpak is getting popular on Linux. Modern Windows does most of that transparently... at a hit to performance... and doesn't let you opt out, which angers tinkerers... but those are the drawbacks of security.
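Flatpak's sandboxing is easy to poke at from the command line; as a sketch, tightening a hypothetical app's permissions (org.example.App is made up):

# Show what the sandbox currently allows:
$ flatpak info --show-permissions org.example.App
# Revoke home-directory access for just this app:
$ flatpak override --user --nofilesystem=home org.example.App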
dd if=/dev/zero of=/dev/sda
is a userland program, which I would say causes harm.
/dev/sda access requires superuser/root permissions from the kernel, which means asking the kernel to lift many of the protections. On some Unix systems (macOS, for example) you can't even do that with root.
You'd need to reboot into firmware, change some flags on the boot partition, and then reboot back into the regular operating system.
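On modern macOS the flag-flipping is done from recoveryOS; roughly something like this (a sketch, not a recommendation):

# From the Terminal in Recovery, not the normal OS:
$ csrutil disable                      # turn off System Integrity Protection
$ csrutil authenticated-root disable   # allow mounting the system volume read-write
# ...then reboot back into macOS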
To install a new version of the operating system on a Mac, it creates a new snapshot of your boot hard drive, updates the system there, then reboots instructing the firmware to boot from the new snapshot. The firmware does a few checks of its own as well, and if the system fails to boot it will reboot from the old snapshot (which is only removed after successfully booting onto the new one). That's not only a better/more reliable way to upgrade the operating system, it's also the only way it can be done, because even the kernel doesn't have write access to those files.
The only drawback is you can't use your computer while the firmware checks/boots the updated system. But Apple seems to be laying the foundations for a new process where your updated operating system will boot alongside the old version (with hypervisors) in the background, be fully tested/etc., and then it should be able to switch over to the other operating system pretty much instantly. It would likely even replace the windows of running software with a screenshot, then instruct the software to save its state and relaunch to restore functionality to the screenshot windows (they already do this if a Mac's battery runs really low: closing everything cleanly before power cuts out, then restoring everything once you charge the battery).
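You can at least peek at the snapshot machinery from a normal macOS terminal (disk layout and output differ per machine):

# List APFS snapshots on the system volume:
$ diskutil apfs listSnapshots /
# Time Machine's local snapshots use the same mechanism:
$ tmutil listlocalsnapshots /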
That's interesting, I don't have much contact with Apple's ecosystem.
Sounds similar to a setup that Linux allows, with the root filesystem on btrfs: making a snapshot of it and updating, then live-switching kernels. But there is no firmware support to make the switch, so it relies on root having full access to everything.
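Something like this, assuming a btrfs root (the subvolume paths are just illustrative):

# Snapshot the root subvolume before updating:
$ sudo btrfs subvolume snapshot / /.snapshots/pre-update
# ...run the update; if it goes wrong, point the bootloader
# (or the default subvolume) back at the snapshot:
$ sudo btrfs subvolume set-default /.snapshots/pre-update   # older btrfs-progs want an ID + path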
The hypervisor approach seems like what Windows is doing, where Windows itself gets booted in a Hyper-V VM, allowing WSL2 and every other VM to run at "native" speed (since "native" itself is a VM), and in theory it should allow booting a parallel updated Windows, then just switching VMs.
On Linux there is also a feature for live-migrating VMs, which lets software keep running while it's being migrated, with just a minimal pause, so they could use something like that.
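With libvirt that's basically a one-liner; e.g. for a hypothetical guest "vm1" (the hostname is made up):

# Live-migrate a running VM to another host with only a brief pause:
$ virsh migrate --live vm1 qemu+ssh://dest-host/system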
Yes, which is literally what OP is asking about. They mention system calls, and are asking: if a userland program can do dangerous things using system calls, why is there a divide between user and kernel? "Because the kernel can then check permissions of the system call" is a great answer, but "hopefully you can't harm your computer with userland programs" is completely wrong and misguided.
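You can actually watch that permission check happen with the dd example from above: run it as a regular user and the kernel rejects the open(2) system call itself (strace output trimmed):

$ strace -e trace=openat dd if=/dev/zero of=/dev/sda
openat(AT_FDCWD, "/dev/sda", O_WRONLY|O_CREAT|O_TRUNC, 0666) = -1 EACCES (Permission denied)
dd: failed to open '/dev/sda': Permission denied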
Yeah, security is in layers and userland isn't automatically "safe", if that's what you're pointing out. So I did mention non-superusers. Separating the kernel from userland applications is also critically important to (try to) prevent non-superusers from accessing APIs and devices which only superusers (or those in particular groups) are able to reach.