142 points | submitted 5 months ago* (last edited 5 months ago) by SorteKanin@feddit.dk to c/linux@programming.dev

One big difference that I've noticed between Windows and Linux is that Windows does a much better job ensuring that the system stays responsive even under heavy load.

For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that for a time while my Rust code is compiling, I will be maxing out all my CPU cores at 100% usage.

When this happens on Windows, I've never really noticed. I can use my web browser or my code editor just fine while the code compiles, so I've never really thought about it.

However, on Linux when all my cores reach 100%, I start to notice it. It seems like every window I have open starts to lag and I get stuttering as the programs struggle to get a little bit of CPU that's left. My web browser starts lagging with whole seconds of no response and my editor behaves the same. Even my KDE Plasma desktop environment starts lagging.

I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn't seem to do a similar thing (or doesn't do it as well).

Is this an inherent problem of Linux at the moment or can I do something to improve this? I'm on Kubuntu 24.04 if it matters. Also, I don't believe it is a memory or I/O problem as my memory is sitting at around 60% usage when it happens with 0% swap usage, while my CPU sits at basically 100% on all cores. I've also tried disabling swap and it doesn't seem to make a difference.

EDIT: Tried nice -n +19, still lags my other programs.

EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kinda thing. I dunno if it's placebo but stuff feels a bit snappier now? My mouse feels more responsive. Again, dunno if it's placebo. But anyways, I tried compiling again and it still lags my other stuff.

[-] scsi@lemm.ee 59 points 5 months ago

By default the Linux kernel uses CFS, the Completely Fair Scheduler, which tries to be fair to all processes at the same time - both foreground and background - for high throughput. Abstractly, think "they never know what you intend to do", so it's sort of middle of the road as a default - every process gets a fair share of CPU time unless it's been intentionally nice'd or whatnot. People who need realtime work (the classic case is audio engineers who need near-zero latency on their hardware inputs like a MIDI sequencer, but embedded hardware uses realtime a lot too) reconfigure their system(s) to that need; for desktop-priority users there are ways to alter the CFS scheduler to help maintain desktop responsiveness.

Have a look at GitHub projects such as this one to learn how and what to tweak - not that you necessarily need to use this, but it's a good starting point for understanding how the mojo works and what you can do on your own with a few sysctl tweaks to get a better desktop experience while your Rust code is compiling in the background. https://github.com/igo95862/cfs-zen-tweaks (in this project you're looking at the set-cfs-zen-tweaks.sh file and what it's tweaking in /proc, so you can get hints on where your research should lead - most of these can be set with a sysctl)
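
As a rough illustration of the kind of knobs that script turns (example values only, and note that on kernels newer than ~5.13 most of the CFS latency knobs moved from sysctl to /sys/kernel/debug/sched/):

sudo sysctl -w kernel.sched_autogroup_enabled=1   # group tasks by session, so a 16-job build competes as one unit against your desktop apps
sudo sysctl -w kernel.sched_latency_ns=4000000    # older kernels only: a shorter scheduling period trades throughput for snappiness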

There's a lot to learn about this so I hope this gets you started down the right path on searches for more information to get the exact solution/recipe which works for you.

[-] 0x0@programming.dev 28 points 5 months ago

I'd say nice alone is a good place to start, without delving into the scheduler rabbit hole...

[-] scsi@lemm.ee 15 points 5 months ago

I would agree, and would bring awareness of ionice into the conversation for the readers - it can help control I/O priority to your block devices in the case of write-heavy workloads, possibly compiler artifacts etc.
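
For example, if the build is also hitting the disk hard, something like this combines both (note the idle I/O class only has an effect with CFQ/BFQ-style I/O schedulers):

ionice -c 3 nice -n 19 cargo build   # idle I/O class plus minimum CPU priority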

[-] SorteKanin@feddit.dk 24 points 5 months ago

"they never know what you intend to do"

I feel like if Linux wants to be a serious desktop OS contender, this needs to "just work" without having to look into all these custom solutions. If there is a desktop environment with windows and such, that obviously is intended to always stay responsive. Assuming no intentions makes more sense for a server environment.

[-] BearOfaTime@lemm.ee 19 points 5 months ago

Even for a server, the UI should always get priority, because when you gotta remote in, most likely shit's already going wrong.

[-] SirDimples@programming.dev 12 points 5 months ago

Totally agree. I've been in the situation where a remote host is 100%-ing and when I want to ssh into it to figure out why and possibly fix it, I can't because ssh is unresponsive! That leaves only one way out: hard reboot and hope I didn't lose data.

This is a fundamental issue in Linux; it needs a scheduler from this century.

[-] 1984@lemmy.today 13 points 5 months ago

100% agree. Desktop should always be a strong priority for the cpu.

[-] UnculturedSwine@lemmy.world 6 points 5 months ago

One of my biggest frustrations with Linux. You are right. If I have something that works out of the box on Windows but requires hours of research on Linux to get working correctly, it's not an incentive to learn the complexities of Linux - it's an incentive to ditch it. I'm a hobbyist when it comes to Linux but I also have work to do. I can't be constantly ducking around with the OS when I have things to build.

[-] msage@programming.dev 5 points 5 months ago

Wasn't CFS replaced in 6.6 with EEVDF?

I have 6.6 on my desktop, and I guess compilations don't freeze my media anymore, though I have little experience with it so far - needs more testing.

[-] lupec@lemm.ee 25 points 5 months ago* (last edited 5 months ago)

Responsiveness for typical everyday usage is one of the main scenarios kernels like Zen/Liquorix and their out of the box scheduler configurations are meant to improve, and in my experience they help a lot. Maybe give them a go sometime!

Edit: For added context, I remember Zen significantly improving responsiveness under heavy loads such as the one OP is experiencing back when I was experimenting with some particularly computationally intensive tasks

[-] thingsiplay@beehaw.org 11 points 5 months ago

https://github.com/zen-kernel/zen-kernel/wiki/Detailed-Feature-List

That's the reason I installed Zen too and use it as my default. While Zen is meant to improve the responsiveness of interactive usage, it comes at a price: overall throughput might decrease and it will likely use a bit more power. But if someone needs to solve OP's problem (working on the computer while it's under heavy load), then Zen is probably the right tool. Some distributions have the Zen kernel in their repository and the install process is straightforward.
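
For example, on Arch and derivatives it's just a package away (Liquorix, mentioned in the OP's edit, is the Debian/Ubuntu-flavoured build of the same patchset and has its own repo/PPA - check its site for the exact steps):

sudo pacman -S linux-zen linux-zen-headers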

[-] crispy_kilt@feddit.de 21 points 5 months ago

nice -n 5 cargo build

nice is a program that sets the CPU scheduling priority of a process. The default is 0. It goes from -20, which is max priority, to +19, which is min priority.

This way other programs will get CPU time before cargo/rustc.
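
And if the build is already running, renice does the same thing after the fact - roughly like this (the pgrep pattern is just an example):

renice -n 19 -p $(pgrep -x rustc)   # push all running rustc processes to minimum priority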

[-] arendjr@programming.dev 18 points 5 months ago

The System76 scheduler helps to tune for better desktop responsiveness under high load: https://github.com/pop-os/system76-scheduler I think if you use Pop!OS this may be set up out-of-the-box.

[-] mcmodknower@programming.dev 18 points 5 months ago

You could try using nice to give the Rust compiler a lower scheduling priority (a higher nice number).

[-] SorteKanin@feddit.dk 4 points 5 months ago

This seems too complicated if I need to do that for other programs as well.

[-] amanda@aggregatet.org 16 points 5 months ago* (last edited 5 months ago)

Lots of bad answers here. Obviously the kernel should schedule the UI to be responsive even under high load. That's doable: just prioritise interactive processes over batch jobs. That's a perfectly valid demand to have on your system.

This is one of the cases where Linux shows its history as a large shared unix system and its focus as a server OS; if the desktop is just a program like any other, who’s to say it should have more priority than Rust?

I’ve also run into this problem. I never found a solution for this, but I think one of those fancy new schedulers might work, or at least is worth a shot. I’d appreciate hearing about it if it does work for you!

Hopefully in a while there are separate desktop-oriented schedulers for the desktop distros (and ideally also better OOM handlers), but that seems to be a few years away maybe.

In the short term you may have some success in adjusting the priority of Rust with nice, an incomprehensibly named tool to adjust the priority of your processes. High numbers = low priority (the task is “nicer” to the system). You run it like this: nice -n5 cargo build.

[-] possiblylinux127@lemmy.zip 12 points 5 months ago* (last edited 5 months ago)

It really depends on your desktop. For instance, GNOME handles high CPU load very well in my experience.

I would run your compiler in a podman container with a CPU cap.

Edit: it might be related to me using Fedora
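
A rough sketch of the podman idea, assuming the official Rust image and a project in the current directory (adjust the CPU cap to taste):

podman run --rm -it --cpus=6 -v "$PWD":/src -w /src docker.io/library/rust:latest cargo build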

[-] JATtho@lemmy.world 12 points 5 months ago

"The kernel runs out of time to solve the NP-complete scheduling problem in time."

More responsiveness requires more context-switching, which then subtracts from the available total CPU bandwidth. There is a point where the task scheduler and CPUs get so overloaded that a non-RT kernel can no longer guarantee timed events.

So web browsing is basically poison for the task scheduler under high load, unless you reserve some CPU bandwidth (with cgroups, etc.) beforehand for the foreground task.

Since SMT threads also aren't real cores (roughly 0.4-0.7 of an actual core), putting 16 tasks on a 16-thread/8-core machine is only going to slow down the execution of all the other tasks sharing those cores. I usually leave one CPU thread free for "housekeeping" if I need to do something else. If I don't, some random task is going to be very pleased by not having to share a core. That "spare" CPU thread will be running literally everything else, so it may get saturated by the kernel tasks alone.

nice -n 5 is more of a suggestion: "please run this task with worse latency on a contended CPU".

(I think I should benchmark make -j15 vs. make -j16 to see what the difference is)
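
A rough sketch of that "reserve some CPU bandwidth" idea with a transient cgroup, assuming cgroup v2 and a 16-thread machine (the systemd properties are real, the numbers are just examples, and it may prompt for authentication):

systemd-run --scope -p AllowedCPUs=1-15 -p CPUWeight=20 nice -n 19 make -j15   # keep CPU 0 free for the desktop, give the build a low weight

The crude no-cgroups version of the same thing is taskset -c 1-15 nice -n 19 make -j15.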

[-] SorteKanin@feddit.dk 9 points 5 months ago

That's all fine, but as I said, Windows seems to handle this situation without a hitch. Why can Windows do it when Linux can't?

Also, it sounds like you suggest there is a tradeoff between bandwidth and responsiveness. That sounds reasonable. But shouldn't Linux then allow me to easily decide where I want that tradeoff to lie? Currently I only have workarounds. Why isn't there some setting somewhere to say "Yes, please prioritise responsiveness even if it reduces bandwidth a little bit". And that probably ought to be the default setting. I don't think a responsive UI should be questioned - that should just be a given.

[-] FizzyOrange@programming.dev 5 points 5 months ago

You're right of course. I think the issue is that Linux doesn't care about the UI. As far as it's concerned, the GUI is just another program. That's the same reason you don't have things like Ctrl-Alt-Del on Linux.

[-] JATtho@lemmy.world 5 points 5 months ago

To be fair, there should be some heuristic to boost the priority of anything that has just received hardware input (a button click, for example). The latency-insensitive jobs can be delayed indefinitely.

[-] sunzu@kbin.run 10 points 5 months ago

I face a similar issue when updating Steam games, although I think that's related to disk reads/writes.

But either way, issues like these are gonna need to be addressed before we finally hit the year of the Linux desktop lol

[-] tatterdemalion@programming.dev 10 points 5 months ago* (last edited 5 months ago)

Sounds like Kubuntu's fault to me. If they provide the desktop environment, shouldn't they be the ones making it play nice with the Linux scheduler? Linux is configurable enough to support real-time scheduling.

FWIW I run NixOS and I've never experienced lag while compiling Rust code.

[-] SorteKanin@feddit.dk 8 points 5 months ago

I have a worrying feeling that if I opened a bug for the KDE desktop about this, they'd just say it's a problem of the scheduler and that's the kernel so it's out of their hands. But maybe I should try?

[-] haui_lemmy@lemmy.giftedmc.com 11 points 5 months ago

The KDE peeps are insanely nice, so I guess you should try.

[-] cbazero@programming.dev 8 points 5 months ago

If you compile on Windows Server, the same problem happens - the server is basically gone. So there seems to be some special scheduler configuration in the Windows client OS.

[-] SorteKanin@feddit.dk 7 points 5 months ago

So I just tried using nice -n +19 and it still lags my browser and my UI. So that's not even a good workaround.

[-] odium@programming.dev 6 points 5 months ago* (last edited 5 months ago)
[-] SorteKanin@feddit.dk 4 points 5 months ago

I don't really want to limit the Rust compiler. If I leave my computer running while I take a break, I don't want it to artificially throttle the compiler. I just want user input and responsiveness of open windows to take priority over the compiler.

[-] r_deckard@lemmy.world 6 points 5 months ago

Firefox on my Raspberry Pi grinds the thing to a halt, so I created a shortcut:

systemd-run --scope -p MemoryLimit=500M -p CPUQuota=50% firefox-esr

You say it doesn't top out on memory, so you don't need the -p MemoryLimit=500M parameter. Set your compiler CPUQuota to maybe 80%, or whatever you can work out with trial and error.
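
One caveat worth double-checking: systemd's CPUQuota= is relative to a single CPU, so 100% means one full core. On a 16-thread machine, for example, "roughly 80% of the box" for the compiler would be closer to:

systemd-run --scope -p CPUQuota=1280% cargo build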

[-] RatsOffToYa@lemmy.world 5 points 5 months ago

All the comments here are great. One other suggestion I didn't see: use chrt to start the build process with the SCHED_BATCH policy. It's lower than SCHED_OTHER, which most processes run under, so the compilation processes should get bumped off the CPU in favour of virtually everything else.
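
For reference, that would look something like this (SCHED_BATCH only accepts priority 0; chrt --idle 0 would be even lower):

chrt --batch 0 cargo build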

[-] BB_C@programming.dev 4 points 5 months ago

This hasn't been my experience when no swapping is involved (not a concern for me anymore with 32GiB of physical RAM and 28GiB of zram).

And I've been Rusting since v1.0, and Linuxing for even longer.

And my setup is boring (and stable), using Arch's LTS kernel which is built with CONFIG_HZ=300. Long gone are the days of running linux-ck.
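
If you're curious what your own kernel was built with, something like this usually works (the path depends on the distro):

grep 'CONFIG_HZ=' /boot/config-$(uname -r)   # or: zgrep CONFIG_HZ= /proc/config.gz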

Although I do use the Cranelift backend day to day now, so compiles don't take too long anyway.

[-] kenkenken@sh.itjust.works 4 points 5 months ago

Linux defaults are optimized for performance and not for desktop usability.

[-] SorteKanin@feddit.dk 8 points 5 months ago* (last edited 5 months ago)

If that is the case, Linux will never be a viable desktop OS alternative.

Either that needs to change or distributions targeting the desktop need to do it. Maybe we need desktop and server variants of Linux. It kinda makes sense, as these use cases are quite different.

EDIT: I'm curious about the down votes. Do people really believe that it benefits Linux to deprioritise user experience in this way? Do you really think Linux will become an actual commonplace OS if it keeps focusing on "performance" instead of UX?
