this post was submitted on 18 Jun 2024

One big difference that I've noticed between Windows and Linux is that Windows does a much better job ensuring that the system stays responsive even under heavy load.

For instance, I often need to compile Rust code. Anyone who writes Rust knows that the Rust compiler is very good at using all your cores and all the CPU time it can get its hands on (which is good, you want it to compile as fast as possible after all). But that means that for a time while my Rust code is compiling, I will be maxing out all my CPU cores at 100% usage.

When this happens on Windows, I've never really noticed. I can use my web browser or my code editor just fine while the code compiles, so I've never really thought about it.

However, on Linux when all my cores reach 100%, I start to notice it. It seems like every window I have open starts to lag and I get stuttering as the programs struggle to get a little bit of CPU that's left. My web browser starts lagging with whole seconds of no response and my editor behaves the same. Even my KDE Plasma desktop environment starts lagging.

I suppose Windows must be doing something clever to somehow prioritize user-facing GUI applications even in the face of extreme CPU starvation, while Linux doesn't seem to do a similar thing (or doesn't do it as well).

Is this an inherent problem of Linux at the moment or can I do something to improve this? I'm on Kubuntu 24.04 if it matters. Also, I don't believe it is a memory or I/O problem as my memory is sitting at around 60% usage when it happens with 0% swap usage, while my CPU sits at basically 100% on all cores. I've also tried disabling swap and it doesn't seem to make a difference.

EDIT: Tried nice -n +19, still lags my other programs.

EDIT 2: Tried installing the Liquorix kernel, which is supposedly better for this kinda thing. I dunno if it's placebo but stuff feels a bit snappier now? My mouse feels more responsive. Again, dunno if it's placebo. But anyways, I tried compiling again and it still lags my other stuff.

[–] [email protected] 59 points 6 months ago (4 children)

The Linux kernel's default CPU scheduler, CFS, tries to be fair to all processes at once - both foreground and background - in order to maximize throughput. Abstractly, think "it never knows what you intend to do", so it stays middle of the road as a default - every runnable process gets a fair share of CPU time unless it's been intentionally nice'd or similar. People who need realtime behavior (the classic case is audio engineers who need near-zero latency on hardware inputs like a MIDI sequencer, but embedded systems use realtime a lot too) reconfigure their systems for that need; for desktop-priority users there are ways to tune the CFS scheduler to help maintain desktop responsiveness.
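As a quick sketch of the policy side of this, you can inspect what the kernel offers with `chrt` (part of util-linux, so it should be on most distros):

```shell
# List the scheduling policies this kernel supports and their
# static priority ranges (SCHED_OTHER is the CFS/EEVDF default):
chrt -m

# Show the policy and priority of the current shell:
chrt -p $$

# Launching a process under a realtime policy typically needs root
# or CAP_SYS_NICE, e.g.:
#   sudo chrt -f 50 some_audio_process
```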

Have a look at GitHub projects such as this one to learn what to tweak and how - not that you necessarily need to use it, but it's a good starting point for understanding how the mojo works and what you can do on your own with a few sysctl tweaks to get a better desktop experience while your Rust code compiles in the background: https://github.com/igo95862/cfs-zen-tweaks (in this project, look at the set-cfs-zen-tweaks.sh file and what it tweaks in /proc for hints on where your research should lead - most of these can be set with sysctl)
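To give a flavor of what that script touches (values here are hypothetical; exact knob names and availability vary by kernel version, and on newer kernels most of the sched_* knobs moved out of sysctl into debugfs), a desktop-leaning sysctl fragment might look like:

```
# /etc/sysctl.d/99-desktop-latency.conf (illustrative example only)
# Group tasks by session so one big build can't starve the desktop:
kernel.sched_autogroup_enabled = 1
# Shorter scheduling period = lower latency, at some throughput cost:
kernel.sched_latency_ns = 4000000
kernel.sched_min_granularity_ns = 400000
# Wake interactive tasks sooner after events:
kernel.sched_wakeup_granularity_ns = 500000
```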

There's a lot to learn about this so I hope this gets you started down the right path on searches for more information to get the exact solution/recipe which works for you.

[–] 0x0 28 points 6 months ago (1 children)

I'd say nice alone is a good place to start, without delving into the scheduler rabbit hole...
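For example (`cargo build` here stands in for any heavy compile job):

```shell
# Run the build at the lowest CPU priority (niceness 19) so
# interactive programs win contention for the cores:
nice -n 19 cargo build --release

# Verify what a nested command actually inherits: `nice` with no
# arguments prints the current niceness.
nice -n 19 sh -c 'nice'    # prints 19

# Lower the priority of an already-running process (PID 12345):
renice -n 19 -p 12345
```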

[–] [email protected] 15 points 6 months ago

I would agree, and would bring ionice into the conversation for the readers - it can help control I/O priority to your block devices in the case of write-heavy workloads, such as compiler artifacts.
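A combined invocation might look like this (a sketch; ionice is part of util-linux, and its effect depends on the I/O scheduler in use):

```shell
# Lowest CPU priority plus "idle" I/O class: the build only gets
# disk time when no other process wants it.
nice -n 19 ionice -c 3 cargo build --release

# Inspect the I/O scheduling class of a running process:
ionice -p "$$"
```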

[–] [email protected] 24 points 6 months ago (5 children)

"they never know what you intend to do"

I feel like if Linux wants to be a serious desktop OS contender, this needs to "just work" without having to look into all these custom solutions. If there is a desktop environment with windows and such, that obviously is intended to always stay responsive. Assuming no intentions makes more sense for a server environment.

[–] [email protected] 19 points 6 months ago (1 children)

Even for a server, the UI should always get priority, because when you gotta remote in, most likely shit's already going wrong.

[–] SirDimples 12 points 6 months ago (1 children)

Totally agree. I've been in the situation where a remote host is 100%-ing, and when I want to ssh in to figure out why and possibly fix it, I can't, because ssh is unresponsive! That leaves only one way out: hard reboot and hope I didn't lose data.

This is a fundamental issue in Linux; it needs a scheduler from this century.

[–] [email protected] 3 points 6 months ago

You should look into IPMI console access, that's usually the real 'only way out of this'

SSH looks simple, but it's a happy path with a lot of dependencies that can get in your way - is it waiting on a reverse DNS lookup of your IP? Trying to read files like your authorized keys from a saturated or failing disk? Syncing logs?

With that said i am surprised people are having responsiveness issues under full load, are you sure you weren't running out of memory and relying heavily on swapping?
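For readers who want to rule that out, a quick sketch using the standard procps tools:

```shell
# Snapshot of memory and swap; low "available" plus growing swap
# "used" points at memory pressure rather than CPU scheduling:
free -h

# Sample once per second; nonzero si/so columns mean the system is
# actively swapping pages in and out:
vmstat 1 5
```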

[–] [email protected] 13 points 6 months ago

100% agree. Desktop should always be a strong priority for the cpu.

[–] [email protected] 6 points 6 months ago

One of my biggest frustrations with Linux. You are right. If I have something that works out of the box on Windows but requires hours of research on Linux to get working correctly, it's not an incentive to learn the complexities of Linux, it's an incentive to ditch it. I'm a hobbyist when it comes to Linux, but I also have work to do. I can't be constantly mucking around with the OS when I have things to build.

[–] [email protected] 2 points 6 months ago

I see what you mean, but I feel like it's more on the distro maintainers to set niceness and prioritize the UI while under load.

[–] [email protected] 0 points 6 months ago* (last edited 6 months ago) (1 children)

What do you even mean as serious contender? I've been using Linux for almost 15 years without an issue on CPU, and I've used it almost only on very modest machines. I feel we're not getting your whole story here.

On the other hand, whenever I had to do something I/O-intensive on Windows, it would always crawl on those machines.

[–] [email protected] 4 points 6 months ago

You are getting the whole story - not sure what it is you think is missing. But I mean a serious desktop contender has to take UX seriously and have things "just work" without any custom configuration, tweaking, or hacking around. Currently, when I compile on Windows, my browser and other programs "just work", while on Linux the other stuff is choppy and laggy.

[–] msage 5 points 6 months ago

Wasn't CFS replaced in 6.6 with EEVDF?

I have 6.6 on my desktop, and I guess compilations don't freeze my media anymore, though I have little experience with it as of now; it needs more testing.

[–] agilob 0 points 6 months ago* (last edited 6 months ago)

The Linux kernel uses the CPU default scheduler, CFS,

Linux 6.6 (which recently landed in Debian) changed the scheduler to EEVDF, which has been pretty widely criticized for poor tuning. A CPU that's 100% busy means the scheduler is doing a good job. If the CPU were idle and compilation was slow, then we would look into task scheduling and the scheduling of blocking operations.