I've been running Linux without swap on my workstations and gaming PCs for 20 years now. If you don't hibernate and have enough RAM, swap is useless.
My memory doesn't need to be managed. I have 20 GB in my current setup and it has never been full. If anything gets swapped out in this situation, it just needlessly slows me down.
I even mount tmpfs ramdisks for my shader cache dirs, because they get recreated every time anyway, and why would I want anything temporary on disk if I have 20 GB of RAM?
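For reference, a throwaway tmpfs mount over a cache dir can be as simple as this (path, size, and uid are just examples, adjust to wherever your driver keeps its shader cache):

# Mount a 2 GiB tmpfs owned by the desktop user over the shader cache dir:
sudo mount -t tmpfs -o size=2G,uid=1000,gid=1000,mode=0700 tmpfs /home/user/.cache/mesa_shader_cache

# Or persist it across boots with an /etc/fstab line:
# tmpfs  /home/user/.cache/mesa_shader_cache  tmpfs  size=2G,uid=1000,gid=1000,mode=0700  0  0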
Not necessarily. Your memory also contains file-backed pages (i.e. the "file system cache"). These pages are typically not counted when determining "memory usage", because they can always be discarded.
It is often advantageous to keep frequently used files in the cache in favor of infrequently used memory pages.
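You can see that split on any system with the free command; the interesting part is how the columns are counted:

# 'used'       - roughly the anonymous memory your programs allocated
# 'buff/cache' - file-backed pages (the page cache), reclaimed under memory pressure
# 'available'  - an estimate of what can still be allocated without swapping
free -h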
So you think it's faster to keep a cache for files on a disk, almost like where the files already are, instead of in the 14 GB of actually free RAM that the "free" command shows? If that's your opinion, okay, but I don't agree at all. (Btw, that command also shows cache, and I think that's included.)
You are misunderstanding.
The file cache is never written out to the swap file, because the files are already on disk, like you say. The file cache is kept in memory, and the kernel may decide it's more advantageous to swap out unused anonymous memory pages to disk than to evict a file from the cache. You can use the vm.swappiness parameter to fine-tune this behavior to your liking, btw. Lower values favor keeping anonymous memory pages in memory, higher values favor file-backed pages.

To give an extreme example of where this is useful: I have a use case where I process a number of large video files (each 2-10 GiB in size). The job involves doing several sequential passes over the same file. You can bet your ass that caching them in memory speeds things up dramatically: the first pass, where it has to read the file from disk, runs at 200x speed (relative to the video's duration); the second pass at 15000x speed.
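If you want to play with it, it's a regular sysctl (the value 10 below is just an example, not a recommendation):

# Show the current value (60 is a common default):
sysctl vm.swappiness

# Lower it for the running system only:
sudo sysctl -w vm.swappiness=10

# Or make it persistent (file name is just an example):
echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf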
Even in less extreme circumstances it also helps by keeping frequently accessed files in your home directory in memory, for example your browser profile. Your browser and desktop environment would be much more sluggish if they had to reach out to disk for every file they touched.
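You can see the effect yourself by timing two reads of the same large file (the file name below is a placeholder); dropping the page cache first makes the difference obvious:

# Drop the page cache so the first read really hits the disk:
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

time cat some_large_file > /dev/null    # first read: limited by the disk
time cat some_large_file > /dev/null    # second read: served from the page cache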
And you are free to disagree, but it's not my opinion; it's the opinion of the kernel developers of just about every operating system built in the past 4 decades. So I'd say: take up the argument with them and see how far you get.
Oh, I see. I have never done anything like your example. I have converted lots of videos, but never in a way that goes back over the same file. Yeah, I can see how you would want to slow down everything else on the system by swapping it out, to get your video processed a bit faster. It's just nothing I would do. However, if I wanted to, I could just truncate a 6 GB file, add it as swap, and delete it afterwards.
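One caveat with that: swapon refuses sparse files, so a file made with truncate won't actually work; dd (or fallocate, where the filesystem supports it) is the usual way. A rough sketch, with path and size as examples:

# Create a 6 GiB file with no holes, restrict its permissions, and enable it as swap:
sudo dd if=/dev/zero of=/swapfile-temp bs=1M count=6144 status=progress
sudo chmod 600 /swapfile-temp
sudo mkswap /swapfile-temp
sudo swapon /swapfile-temp

# ... run the memory-hungry job ...

# Then tear it down again:
sudo swapoff /swapfile-temp
sudo rm /swapfile-temp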