this post was submitted on 09 Sep 2024
426 points (98.0% liked)

Science Memes


Turns out the status quo of Linux memory management somehow works pretty damn okay, nobody seems to really know why, and nobody cares.

top 30 comments
[–] [email protected] 60 points 2 months ago (1 children)

Looks like your CS degree is actually teaching you CS stuff.

If all you wanted to do was center divs for $50/h or so, a two-month bootcamp would've been more than sufficient.

[–] [email protected] 76 points 2 months ago (1 children)

Except that the degree I did this for was in electrical engineering :(

[–] [email protected] 23 points 2 months ago (1 children)
[–] [email protected] 10 points 2 months ago* (last edited 2 months ago) (3 children)

You just gave me a panic attack about trying to get Ultima Underworld II and Star Wars: TIE Fighter to run.

[–] [email protected] 5 points 2 months ago (1 children)

You have 57 minutes left to work it out...

[–] [email protected] 5 points 2 months ago

not my bedtime!

[–] [email protected] 3 points 2 months ago

I was maintaining a custom autoexec.bat just for TIE Fighter.

[–] [email protected] 2 points 2 months ago (1 children)

For me, I loved the challenge of squeezing out a few extra KB of lower memory. My autoexec.bat had four hundred lines in it.

I miss those days, honestly. There's really not much practical benefit to overclocking anymore; even broke-college-kid-level devices come with at least 8 gigs of RAM.

8.... gigs... of ram... and ALL of it treated like lower memory... Could you imagine that in the mid 90s? I'd be thinking star trek.

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago) (1 children)

I learned so much in those days about the outrageously absurd efficiency of code, and concatenation, and stupid little things. The early days of coding and even scripting through these silly difficulties shaped us in ways we can't even recognize now.

It was all about solving puzzles using primitive tools and incompatible systems just so we could play simple games. I’m reading articles now about how Gen Z doesn’t even know how to type, lol.

[–] [email protected] 1 points 2 months ago (1 children)

The curse of accessibility: if you make something so easy that anyone can use it, everyone will.

[–] [email protected] 1 points 2 months ago (1 children)
[–] [email protected] 1 points 2 months ago (1 children)

Well since we are talking about the internet, is that such a bad idea?

It's not called the 'web of lies' for nothing.

[–] [email protected] 1 points 2 months ago (1 children)

Says the guy proposing a form of logic on “the web of lies” lol

[–] [email protected] 1 points 2 months ago

... that literally makes no rhetorical sense.

Also I wasn't performing any logic, I asked a question and made a statement. There was no evaluation of anything in there. Are you a misconfigured bot?

[–] [email protected] 15 points 2 months ago

you had me at P!=NP

[–] [email protected] 13 points 2 months ago

I use/admin Linux each and every day at a professional level, and at least once a week I'm final-panel doggo.

[–] [email protected] 10 points 2 months ago (1 children)

I feel this. Fell into a similar rabbit hole when I tried to get real-time feedback on the program's own memory usage, discerning stuff like reserved versus actually-used virtual memory. Felt like black magic and was ultimately not doable within the expected time constraints without touching the kernel, I suppose. Spent too much time on that and had to move on with no other solution than to measure/compute the allocated memory of the largest payload data types.
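
For what it's worth, here's a minimal sketch of the least-magical corner of that rabbit hole: on Linux you can at least get a coarse snapshot of a process's own mapped vs. resident memory by parsing /proc/self/status (the VmSize and VmRSS fields). This is just an illustration of where those numbers live; it doesn't untangle reserved vs. actually-used memory the way I was after.

```c
/* Minimal sketch (Linux-only): print this process's own virtual vs.
 * resident memory by parsing /proc/self/status. Values are in kB.
 * This is only a coarse snapshot; it does not distinguish all the
 * reserved/committed/actually-used subtleties. */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) {
        perror("fopen /proc/self/status");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof line, f)) {
        /* VmSize: total mapped virtual address space.
         * VmRSS:  pages currently resident in physical RAM. */
        if (strncmp(line, "VmSize:", 7) == 0 ||
            strncmp(line, "VmRSS:", 6) == 0) {
            fputs(line, stdout);
        }
    }
    fclose(f);
    return 0;
}
```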

[–] [email protected] 3 points 2 months ago

I've had to explain so many times how text pages work with copy-on-write semantics.
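
If it helps anyone, here's a toy sketch of copy-on-write in general using fork(), not the text-page case specifically (text pages go further: they're mapped read-only and shared between every process running the same binary, so they normally never need copying at all). After fork(), parent and child share the same physical pages; only the pages one of them writes to get copied.

```c
/* Toy copy-on-write demo with fork(). After fork(), parent and child
 * share the same physical pages; the kernel copies a page only when
 * one of them writes to it. Text (code) pages go further: they are
 * mapped read-only and stay shared, so they are normally never copied. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    size_t size = 16 * 1024 * 1024;   /* 16 MiB of data pages */
    char *buf = malloc(size);
    if (!buf) return 1;
    memset(buf, 'A', size);           /* fault the pages in before forking */

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: this write forces the kernel to copy just the touched
         * page; all the other pages remain shared with the parent. */
        buf[0] = 'B';
        printf("child sees:  %c\n", buf[0]);
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    /* Parent still sees its own, untouched copy of that page. */
    printf("parent sees: %c\n", buf[0]);
    free(buf);
    return 0;
}
```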

[–] [email protected] 10 points 2 months ago (2 children)

Is it a common occurrence on Linux that you have to constantly mess with the settings and end up in an obscure rabbit hole? That's why I haven't given it a go.

[–] Gobbel2000 18 points 2 months ago

No, you absolutely don't need to care at all about the memory management when using Linux. This rabbit hole is really only relevant when you want to work on the Linux kernel or do some really low-level programming.

I would say the most obscure thing that is useful to know for running Linux is drive partitioning, but modern installers give you a lot of handrails in this process.

[–] [email protected] 7 points 2 months ago

No, not really. This is from the perspective of a developer/engineer, not an end user. I spent 6 months trying to make $product from $company both cheaper and more robust.

In car terms, you don't have to optimize or even be aware of the injection timings just to drive your car around.

Æcktshually, Windows or any other OS would have similar issues, because the underlying computer science problems are probably practically impossible to solve in an optimal way.

[–] [email protected] 9 points 2 months ago (1 children)

What does aligning peripheral and CPU pages mean?

[–] [email protected] 5 points 2 months ago* (last edited 2 months ago)

It's been a few years, but I'll try to remember.

Usually (*), your CPU can address pages (chunks of memory that are assigned to a program) in 4 KiB steps. So when it does memory management (shuffling memory pages around, deleting them, compressing them, swapping them to disk...), it does so in chunks of 4 KiB. Now, let's say you have a GPU that needs to store data in memory and sometimes exchange it with the CPU. But the designers knew that it would almost always use huge textures, so they simplified their design and made it able to access memory only in 2 MiB chunks. Now, each time the CPU manages a chunk of memory for the GPU, it has to make sure that chunk always lands on a multiple of 2 MiB.

If you take fragmentation into account, this leads to all kinds of funny issues. You can get gaps in your memory because you need to "skip ahead" to the next 2 MiB border, or you have a free memory area that is large enough but does not align to 2 MiB...

And it gets even funnier if you have several different devices that have several different alignment requirements. Just one of those neat real-life quirks that can make your nice, clean, theoretical results invalid.

(*): and then there are huge pages, but that is a different can of worms
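
For the curious, a tiny sketch of the arithmetic behind that "skip ahead": rounding a free address up to a device's alignment boundary is exactly what creates those gaps. The 2 MiB figure and the example address below are just made up to match the example above, not any specific real device.

```c
/* Sketch of the "skip ahead to the next boundary" arithmetic.
 * DEVICE_ALIGN and the example address are made up for illustration;
 * real devices have their own alignment requirements. */
#include <stdint.h>
#include <stdio.h>

#define DEVICE_ALIGN ((uintptr_t)2 * 1024 * 1024)   /* 2 MiB, as in the example above */

/* Round addr up to the next multiple of align (align must be a power of two). */
static uintptr_t align_up(uintptr_t addr, uintptr_t align) {
    return (addr + align - 1) & ~(align - 1);
}

int main(void) {
    uintptr_t free_start = 0x0030A000;   /* some free region, 4 KiB aligned */
    uintptr_t usable     = align_up(free_start, DEVICE_ALIGN);

    printf("free region starts at:       0x%08lx\n", (unsigned long)free_start);
    printf("device buffer must start at: 0x%08lx\n", (unsigned long)usable);
    printf("gap lost to alignment:       %lu KiB\n",
           (unsigned long)(usable - free_start) / 1024);
    return 0;
}
```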

[–] [email protected] 8 points 2 months ago (1 children)

Does anyone have this meme template?

[–] [email protected] 28 points 2 months ago* (last edited 2 months ago) (1 children)

Edit: ~~Wait... that seems to be a screenshot?!~~

Found the uncropped version

[–] [email protected] 6 points 2 months ago

Whoa thanks kind stranger.

[–] [email protected] 8 points 2 months ago (1 children)
[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

And Linus says OOM is the user's problem.

So I'll just sit back and relax.