this post was submitted on 23 Jun 2024
278 points (95.1% liked)
Technology
32-bit CPUs having difficulty accessing more than 4 GB of memory was exclusively a Windows problem.
You still had a 4 GB memory limit per process, as well as a total memory limit of 64 GB. The per-process limit especially was a problem for Java apps before AMD introduced 64-bit extensions, and a reason to use Sun servers for that.
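Both of those ceilings fall straight out of the address widths involved: 32-bit virtual addresses give each process 2^32 bytes, and PAE extends physical addressing to 36 bits. A quick sketch of the arithmetic (my illustration, not from the thread):

```python
# 32-bit virtual addresses: the per-process ceiling
per_process = 2 ** 32
# PAE widens *physical* addresses to 36 bits: the machine-wide ceiling
with_pae = 2 ** 36

print(per_process // 2 ** 30)  # 4  -> 4 GiB per process
print(with_pae // 2 ** 30)     # 64 -> 64 GiB total
```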
Yeah I acknowledged the shortcomings in a different comment.
It was a duct tape solution for sure.
Your other posts didn't support your claim that it's a Windows-only problem. Linux had it too, and some distros (Raspberry Pi OS) still have the same limitations as Windows 95.
32-bit Windows XP got PAE in 2001, two years after Linux. 64-bit Windows came out in 2005.
I’m not overly worried about a few random Linux distros that did strange things, nor Raspberry Pis. I mean, I don’t know why you’d use 32-bit on an 8 GB Pi anyway, so it shouldn’t affect anyone unless they did something REALLY strange.
For the average user, neither of those scenarios mattered, especially back when the problem was at its peak.
2 years was a long time to wait to use the extra memory that Linux could use out of the box.
I honestly don’t even remember XP having PAE, but if you NEED the validation, sure, Microsoft EVENTUALLY got it.
Except that Microsoft removed it in SP2 LOL!
And all the home-use versions of XP still maxed out at 4 GB.
They could see the memory but couldn’t use it — oh, I’d forgotten that!
Wikipedia was a fun read.
For 8 years, Linux had the same limitations as Windows. Then for 2 years it was ahead. PAE could always be turned back on with a boot switch. Going back 25 years to criticize Windows is kind of weird, but you do you.
(I run Linux on a variety of PCs, SBCs, and VMs in my house. I just get annoyed by unjustified Linux fanboyism.)
Not just for 2 years; XP removed it in SP2.
And even when it supported it, many versions wouldn’t let you use it, or would let you “see” it but not use it.
For basically the life of XP.
And as I said, it could still be enabled with a boot switch.
It's not like all distros in 1999 had PAE enabled by default. You had to find a PAE-enabled kernel.
And Linux PAE has been buggy off and on for 20 years:
"It worked for a while, but the problem came back in 2022. "
https://flaterco.com/kb/PAE_slowdown.html
Interesting! Do you have a link to a write-up about this? I don’t know anything about the Windows memory manager.
Only slightly related, but here's the compiler flag to disable an arbitrary 2GB limit on x86 programs.
Finding the reason for its existence from a credible source isn't as easy, however. If you're fine with an explanation from StackOverflow, you can infer that it's there because some programs treat pointers as signed integers and die horribly when anything above 7FFFFFFF gets returned by the allocator.
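That StackOverflow explanation is easy to demonstrate: stuff an address at or above 0x80000000 into a signed 32-bit integer and it comes out negative, so any comparison or pointer arithmetic built on it goes wrong. A hypothetical illustration, using Python's ctypes to do the reinterpretation a buggy C program would do implicitly:

```python
import ctypes

def as_signed32(address):
    # Reinterpret a 32-bit address the way code that stores
    # pointers in a signed int would see it
    return ctypes.c_int32(address).value

print(as_signed32(0x7FFFFFFF))  # 2147483647: last "safe" address, still positive
print(as_signed32(0x80000000))  # -2147483648: first high address turns negative
```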
It's a silly flag to use, as it only works when running 32-bit Windows applications on 64-bit Windows, and if you're compiling from source, you should also have the option to just build a 64-bit binary in the first place. It made a degree of sense years ago, when people still ran 32-bit Windows sometimes (usually just down to OEMs installing the wrong version on prebuilt PCs that could have supported 64-bit), if you really wanted to have only one binary, or if you consumed a precompiled third-party library and had to match its architecture.
You can also toggle it on precompiled binaries with the right tool (or a hex editor if you're insane), which was my main use case. Lots of old games never got 64-bit releases and benefit from having access to the extra RAM, especially if you're modding them. It's a great way to avoid out-of-memory crashes.
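For the curious, the bit that gets toggled is IMAGE_FILE_LARGE_ADDRESS_AWARE (0x0020) in the Characteristics field of the PE file's COFF header — that's all the dedicated tools (or the hex editor) are flipping. A minimal checker sketch, assuming a well-formed PE file:

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def large_address_aware(data: bytes) -> bool:
    """Return True if the PE image advertises large-address awareness."""
    # The DOS header stores the PE header's offset at 0x3C
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("not a PE file")
    # Characteristics is the last 2-byte field of the 20-byte COFF header,
    # which starts right after the 4-byte "PE\0\0" signature
    characteristics = struct.unpack_from("<H", data, pe_offset + 4 + 18)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
```

Writing the flag back is the same offset with the bit ORed in, though binaries with checksums or signatures may need extra care after patching.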
Intel PAE is the answer, but it still came with other issues, so 64-bit was still the better answer.
Also, the entire article comes down to simple math.
Bits are the number of (binary) digits.
So, like, a 4-digit number maxes out at 9999, but an 8-digit number maxes out at 99 999 999.
So when you double the number of digits, the maximum doesn't just double, it grows exponentially: 10^4 times bigger in this case. It only sounds small because you're saying the exponent doubles.
10^4 is WAY smaller than 10^8.
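The same arithmetic in binary gives the familiar numbers: going from 32 to 64 bits doesn't double the address space, it multiplies it by 2^32. A quick check (illustrative only):

```python
space32 = 2 ** 32  # ~4.29 billion addresses (4 GiB of bytes)
space64 = 2 ** 64  # ~18.4 quintillion addresses (16 EiB)

# Doubling the digit count *squares* the range: the ratio equals space32 itself
print(space64 // space32 == space32)  # True
```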
It was actually 3 GB, because operating systems have to reserve parts of the memory address space for other things. It's more difficult for all 32-bit operating systems to address above 4 GB; most just implemented the additional complexity much earlier, because Linux runs on large servers and stuff. Windows actually had a way to switch it on in some versions too, probably the NT kernels that were also running on servers.
A quick skim of the Wikipedia article seems like a good starting point for understanding the old problem.
https://en.m.wikipedia.org/wiki/3_GB_barrier
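In short: the 4 GiB of addresses has to cover RAM *and* the device mappings (video memory, PCI devices, firmware) that traditionally sit just below the 4 GiB mark, so the RAM shadowed by those mappings has nowhere to be addressed. The arithmetic, with an illustrative (not universal) 1 GiB of reserved space:

```python
GiB = 2 ** 30
address_space = 4 * GiB   # everything a 32-bit pointer can name
mmio_reserved = 1 * GiB   # illustrative size of device mappings near the top

print((address_space - mmio_reserved) // GiB)  # 3 -> the "3 GB barrier"
```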
Wow they just…disabled all RAM over 3 GB because some drivers had hard coded some mapped memory? Jfc
Only on consumer Windows.
Windows Server never had the problem. But wouldn't allow Creative Labs drivers to be installed either...
https://en.wikipedia.org/wiki/Physical_Address_Extension
I'm not sure what you are talking about. Linux got PAE in 1999. Windows XP got PAE in 2001.
Not really; Raspberry Pi had that same issue with its 32-bit distros.