Theoretically, a load average could be as high as it likes; it's essentially just the length of the task queue, after all.
Processes having to queue to get executed is no problem at all for lots of workloads. If you're not running anything latency-sensitive, a huge load average isn't a problem.
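To put a number on that: on Linux the queue length is right there in /proc/loadavg. Here's a minimal sketch (Linux-only; note the kernel also counts tasks stuck in uninterruptible sleep toward the load average, not just runnable ones):

```python
# Minimal sketch: read the 1/5/15-minute load averages from /proc/loadavg
# and compare them against the number of CPUs. A load average above the
# CPU count just means tasks are queueing, not that anything is broken.
import os

with open("/proc/loadavg") as f:
    one, five, fifteen = map(float, f.read().split()[:3])

cpus = os.cpu_count()
print(f"load averages: {one} {five} {fifteen} across {cpus} CPUs")
if one > cpus:
    print("more runnable tasks than CPUs: fine for batch work, "
          "noticeable for latency-sensitive work")
```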
Also, it's not really a matter of parallelization. Like I mentioned, ffmpeg impacted other processes even when restricted to running in a single thread.
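For context, that restriction looks something like this (a sketch; the file names are placeholders, and `-threads 1` caps the encoder's worker threads):

```python
# Sketch: run a transcode capped to a single thread. Even pinned to one
# thread, the encoder keeps that core saturated for the whole run, which
# is what the scheduler has to work around. File names are placeholders.
import subprocess

subprocess.run(
    ["ffmpeg", "-i", "input.mkv",   # source file (placeholder name)
     "-threads", "1",               # limit the encoder to one worker thread
     "-c:v", "libx264",             # a common software encoder
     "output.mp4"],                 # destination (placeholder name)
    check=True,
)
```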
That's because most other processes do their work in small chunks that complete within microseconds or milliseconds: send a network request, parse some data, decode an image, poll a HID device, etc.
A transcode, meanwhile, can easily have a CPU running full tilt for well over a second, working on just that one thing. Most processes will show up and go "I need X amount of CPU time", while ffmpeg shows up and goes "give me all available CPU time", which is a demand the scheduler can't actually quantify.
It's like if someone showed up at a buffet and asked for all the food that no-one else is going to eat. How do you determine exactly how much that is, and thereby how much it is safe to give this person without giving away food someone else might've needed?
You don't. Without CPU headroom, it becomes very difficult for the task scheduler to maintain low system latency. It'll do a pretty good job, but inevitably some CPU time that should have gone to other stuff will go to the process asking for as much as it can get.
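You can watch that happen with a toy benchmark (a rough sketch, not a rigorous measurement): time how late a 1 ms sleep actually wakes up, first on an idle system, then with a busy loop on every core standing in for the transcode:

```python
# Sketch: measure how late a "latency-sensitive" task wakes up while a
# CPU-bound hog (a stand-in for the transcode) runs on every core. With
# no CPU headroom, the observed oversleep grows well past the idle case.
import multiprocessing as mp
import os
import time

def hog():
    # Burn CPU continuously, like a transcode asking for all available time.
    while True:
        pass

def measure(label):
    worst = 0.0
    for _ in range(200):
        t0 = time.perf_counter()
        time.sleep(0.001)                      # ask to wake up in 1 ms
        late = time.perf_counter() - t0 - 0.001
        worst = max(worst, late)
    print(f"{label}: worst oversleep {worst * 1000:.2f} ms")

if __name__ == "__main__":
    measure("idle system")
    hogs = [mp.Process(target=hog, daemon=True) for _ in range(os.cpu_count())]
    for p in hogs:
        p.start()
    measure("under full CPU load")
    for p in hogs:
        p.terminate()
```

On a loaded machine the worst-case oversleep typically grows noticeably, and that gap is exactly the latency the scheduler is trying, and partly failing, to protect.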