He didn't get ripped in prison. He was always ripped; it was just revealed to the viewer for the first time in prison.
Did you actually read what I said? The only way I can make sense of your comment is that you skimmed what I wrote and started writing a reply.
To the average Windows user, Ubuntu is incredibly complicated. Once you learn how it works and how you're supposed to use it, it becomes incredibly easy. The "hard" part of Ubuntu is the paradigm shift from Windows to the Linux ecosystem.
Similarly, to the average Linux user NixOS is "hard" because it does things completely differently from other Linux distros. But once you're used to it, it just makes sense and is easy.
So the comparison is average Windows user -> Ubuntu vs average Linux user -> NixOS. Not average user -> Ubuntu vs average user -> NixOS.
Finally: NixOS documentation is IMO 100x better than Ubuntu's. Whenever I run into an issue on Ubuntu, it's easier to load up the Arch wiki and hope it applies than to find anything Ubuntu-specific that isn't ten years out of date, a massive gaping security risk, or just plain dumb. The NixOS wiki may not be perfect, but it has always been sufficient for my needs, and I run a decent amount of very niche software.
Flipping burgers is enough to pay for chemotherapy. Source: am European.
It's incredibly complicated in the same way that Ubuntu is incredibly complicated to a lifelong Windows user.
It just requires a bit of a paradigm shift, which comes with a learning curve, but IMO once you're past that point it's intuitive and even easier than other distros.
https://nixos.wiki/wiki/Linux_kernel
You can specify custom parts of the kernel config that enable the module you need, and/or add extra module packages.
If you specify a custom part of the config then yes, you'll be compiling the kernel on each kernel update, but you don't have to configure it by hand every time.
The killer feature is declarative system management. Reproducible systems are just one of the resulting properties.
- You want to just try out KDE for a week, coming from GNOME? Good luck getting rid of all the bloat when switching back on Arch.
- You want to run a program once, but not necessarily have it installed on your system? You can do that with NixOS.
- You messed something up and your system no longer boots? You can roll back to a previous generation with NixOS, no need to dig out your live USB and start messing with chroot and stuff.
- Ever find yourself asking where the configuration file for some program is so you can edit it? The answer is /etc/nixos/configuration.nix.
- Ever had to merge older configs with newer ones because the software updated? (If not, you haven't been using Arch for long.) Why would you need to do that? You declaratively specified how you want your system to behave, and NixOS figures out how to translate that into the new config.
And those are just the "killer" features I use on a day-to-day basis.
While I get your point, you're still slightly misguided.
Sometimes, for a smaller dataset, an algorithm with worse asymptotic complexity can be faster.
Some examples:
- Radix sort's complexity is linear. So why do most people still reach for e.g. quicksort? Because for relatively small datasets, the overhead of radix sort outweighs the gain from being asymptotically faster.
- One of the most common and well-known optimizations for quicksort is to switch over to insertion sort when subarray sizes drop below a certain threshold, as in the sketch after this list. This is because for small datasets (I'm talking e.g. 10 elements) insertion sort is simply faster.
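For the curious, here's a minimal sketch of that hybrid in Python (the cutoff of 10 and the helper names are made up for illustration; real implementations tune the threshold empirically and use more careful partition schemes):

```python
import random

CUTOFF = 10  # hypothetical threshold; real libraries tune this empirically

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] in place; very fast for tiny ranges."""
    for i in range(lo + 1, hi + 1):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        # Small subarrays: insertion sort's low overhead wins.
        if hi - lo + 1 <= CUTOFF:
            insertion_sort(a, lo, hi)
            return
        # Plain Lomuto partition with a random pivot.
        p = random.randint(lo, hi)
        a[p], a[hi] = a[hi], a[p]
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        # Recurse on one half, loop on the other.
        quicksort(a, lo, i - 1)
        lo = i + 1

data = [random.randint(0, 999) for _ in range(100)]
quicksort(data)
assert data == sorted(data)
```

The asymptotically "worse" algorithm handles the base case precisely because its constant-factor overhead is tiny.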
Big O notation only keeps the dominant term and drops constant factors. In some cases it is still important to consider the lower-order terms and those constants. Assume the actual running time of algorithm A is 2n log(n) + 999999999n and of algorithm B it is n^2 + 7n. Clearly, for any small-to-moderate n, B will be faster, even though B is O(n^2) and A is O(n log(n)).
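To make that concrete, here's a quick sketch plugging a few values of n into those two made-up cost functions; B stays ahead until n is on the order of a billion:

```python
from math import log2

def cost_a(n):  # hypothetical cost model: 2*n*log(n) + 999999999*n
    return 2 * n * log2(n) + 999_999_999 * n

def cost_b(n):  # hypothetical cost model: n^2 + 7*n
    return n ** 2 + 7 * n

for n in (10, 10_000, 10_000_000, 10 ** 9, 10 ** 10):
    winner = "B" if cost_b(n) < cost_a(n) else "A"
    print(f"n = {n:>14,}: algorithm {winner} wins")
# B wins on every line except the last: the O(n^2) algorithm stays
# faster until n approaches ~10^9, where A's huge linear term finally
# stops dominating.
```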
Sorting is actually a great example of why you should always consider what your data looks like before deciding which algorithm to use, which was one of the biggest takeaways from my data structures & algorithms class.
This YouTube channel also has a fairly nice three-part series on sorting algorithms: https://youtu.be/_KhZ7F-jOlI?si=7o0Ub7bn8Y9g1fDx
Oh yeah, it's actually pretty extensive and expressive. If you're interested in this sort of stuff, it's worth checking out the IR language reference a bit. Apparently you can even specify the garbage collection strategy on a per-function basis if you want to. They do, however, note the following: "Note that LLVM itself does not contain a garbage collector, this functionality is restricted to generating machine code which can interoperate with a collector provided externally" (source: https://llvm.org/docs/LangRef.html#garbage-collector-strategy-names )
If you're interested in this stuff, it's definitely fun to work through part of that language reference. It's pretty approachable. After going through the first few chapters I had some fun writing IR by hand for a few toy programs.
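If you'd rather generate IR programmatically than type it out, one option (my suggestion, not something from the language reference itself) is llvmlite, a Python library for constructing LLVM IR. A toy function might look like this:

```python
# Sketch using llvmlite (pip install llvmlite), a Python binding for
# building LLVM IR; here we just dump the textual IR for inspection.
from llvmlite import ir

module = ir.Module(name="toy")
i32 = ir.IntType(32)

# Equivalent to: define i32 @add(i32 %a, i32 %b)
fnty = ir.FunctionType(i32, (i32, i32))
func = ir.Function(module, fnty, name="add")
a, b = func.args

block = func.append_basic_block(name="entry")
builder = ir.IRBuilder(block)
builder.ret(builder.add(a, b, name="sum"))

print(module)  # prints the textual LLVM IR for the whole module
```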
LLVM is designed in a very modular way, and LLVM IR lets you specify e.g. whether memory management should be manual or garbage collected.
You could make a frontend for LLVM (i.e. design a language) that exposes those options through some compiler directives.
In general I'd heavily recommend looking into LLVM's documentation.
Look at the profile picture.
Fun fact: lots of those SD cards are actually fake. Have you tried actually putting 128 GB of data on one?
It's pretty easy for the sellers to flash scummy firmware onto it that reports a significantly larger capacity than the card actually has.
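If you want to check a card yourself, dedicated tools like f3 (Linux) or H2testw (Windows) exist for exactly this. Below is a rough Python sketch of the same idea, with the mount point as a placeholder: fill the card with distinct chunks, then read them back. Fake firmware wraps writes around or silently drops them long before the advertised capacity, so verification fails.

```python
import os
import hashlib

MOUNT_POINT = "/media/sdcard"   # placeholder: wherever the card is mounted
CHUNK = 64 * 1024 * 1024        # write in 64 MiB files

def fill_and_verify(mount_point):
    digests = []
    i = 0
    # Phase 1: fill the card with distinct, deterministic chunks.
    while True:
        data = hashlib.sha256(str(i).encode()).digest() * (CHUNK // 32)
        path = os.path.join(mount_point, f"fill_{i:05d}.bin")
        try:
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())  # force it onto the card, not the cache
        except OSError:               # disk full -> done writing
            break
        digests.append((path, hashlib.sha256(data).hexdigest()))
        i += 1
    # Phase 2: read everything back; fakes corrupt or overwrite old chunks.
    for path, expected in digests:
        with open(path, "rb") as f:
            actual = hashlib.sha256(f.read()).hexdigest()
        if actual != expected:
            print(f"MISMATCH at {path}: card is likely fake")
            return False
    print(f"verified {len(digests)} chunks OK")
    return True

fill_and_verify(MOUNT_POINT)
# Remember to delete the fill_*.bin files afterwards.
```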