I wish he had written why he's so anti-container/docker. That's a pretty unusual stance I haven't been exposed to yet.
Hi!
First I'd like to clarify that I'm not "anti-container/Docker". 😅
There is a lot of discussion on this article (with my comments!) going on over at Tildes. I don't wanna copy-paste everything from there, but I'll share the first main response I gave to someone who had very similar feedback to kick-start some discussion on those points here as well:
Some high level points on the "why":
- Reproducibility: Docker builds are not reproducible, and especially in a company with more than a handful of developers, it's nice not to have to worry about a `docker build` command in the on-boarding docs failing inexplicably (from the POV of the regular Joe developer) from one day to the next
- Cost: Docker licenses for most companies now cost $9/user/month (minimum of 5 seats required) - this is very steep for something that doesn't guarantee reproducibility and has poor performance to boot (see below)
- Performance: Docker performance on macOS (and Windows), especially storage mount performance, remains poor; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default
I think it's also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don't really apply to the latter.
What makes you say that?
My team relies on Docker because it is reproducible…
You might be interested in this article that compares Nix and Docker. It explains why Docker builds are not considered reproducible, and why Nix builds are reproducible a lot of the time.
Containerization has other advantages though (security), and you can actually use Nix's reproducible builds in combination with (Docker) containers.
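For a rough idea of what that combination looks like (a minimal sketch of my own, not from the linked article), nixpkgs ships `dockerTools`, which can assemble an image from Nix-built packages:

```nix
# image.nix -- minimal sketch: build an OCI/Docker image whose contents come
# from the Nix store, so the image is as reproducible as the Nix build that
# produced it. Package choices here are just for illustration.
{ pkgs ? import <nixpkgs> { } }:

pkgs.dockerTools.buildLayeredImage {
  name = "hello-nix";
  tag = "latest";
  contents = [ pkgs.hello ];                  # packages copied into the image
  config.Cmd = [ "${pkgs.hello}/bin/hello" ]; # default command for the container
}
```

Building it with `nix-build image.nix` produces an image tarball that `docker load < result` can import like any other image.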
That seems like an argument for maintaining a frozen repo of packages, not against containers. You can only have a truly reproducible build environment if you set up your toolchain to keep copies of every piece of external software so that you can do hermetic builds.
I think this is a misguided way to work around proper toolchain setup. Nix is pretty cool though.
I am not arguing against containers; I am arguing that Nix is more reproducible. Containers can be used with Nix and are useful in other ways.
This is essentially what Nix does. In addition, it verifies that the packages are identical to those specified in your flake.nix file.
This is essentially what Nix does, except Nix verifies the external software is the same with checksums. It also does hermetic builds.
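To make that concrete (a minimal sketch of my own, with made-up package choices, not from the thread), a flake pins its inputs, and the accompanying flake.lock records the exact revision and content hash that every machine must resolve to:

```nix
# flake.nix -- sketch of a pinned dev environment. `nix develop` gives every
# developer the same toolchain; flake.lock stores the nixpkgs revision and its
# NAR hash, so drift in the inputs is detected instead of silently used.
{
  description = "Example pinned dev shell";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nodejs_18 pkgs.git ]; # the shared toolchain
      };
    };
}
```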
Nix is indeed cool. I just see it as less practical than maintaining a toolchain for devs to use. Seems like reinventing the wheel instead of airing up the tires. I could well be absolutely wrong there - my experience is mainly enterprise software, and not every process or tool there is used because it is the best one.
There are definitely some things preventing Nix adoption. What are the reasons you see it as less practical than the alternatives?
What are alternative ways of maintaining a toolchain that achieve the same thing?
I see it as less practical mainly due to the extant tooling and age/maturity of the project.
The approach I'm most familiar with is using software like Artifactory - basically a multi-repo. Using such a tool, any package or artifact can be readily retained for future use. Then, for builds, one only needs to ensure that it is used as the package source, regardless of type (PyPI, Docker image, binary, RPM, etc.).
Alternatively, one can use individual repos for any relevant package type but that's a bit more overhead to manage.
@nickwitha_k @uthredii I’d like to think a better analogy would be that nix is like using a 3D model of a wheel instead of a compass and a straightedge to make wheels hehe 🙃
I quite like the sound of Nix every time I touch on it, but I haven't really dug in yet. You're making me really want to, though.
I’ll certainly give this a read!
Are you saying that nix will cache all the dependencies within itself/its “container,” or whatever its container replacement would be called?
Yep, sort of.
It saves each version of your dependencies to the /nix/store folder, with a checksum prefixing the program name. For example, you might have the following Firefox programs installed side by side:

- `/nix/store/cm1bdi4hp8g8ic5jxqjhzmm7gl3a6c46-firefox-108.0.1`
- `/nix/store/rfr0n62z21ymi0ljj04qw2d7fgy2ckrq-firefox-114.0.1`

Because of this you can largely avoid dependency conflicts. Program A could depend on `/nix/store/cm1bdi4hp8g8ic5jxqjhzmm7gl3a6c46-firefox-108.0.1` and program B could depend on `/nix/store/rfr0n62z21ymi0ljj04qw2d7fgy2ckrq-firefox-114.0.1`, and both programs would work, as both have their dependencies satisfied. AFAIK, using other build systems you would have to break program A or program B (or find versions of program A and program B where both dependencies are satisfied).
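As a small illustration of where those hashes come from (my own sketch, not part of the original comment), each package is described by a derivation, and the store path is computed from a hash over all of its inputs:

```nix
# hello.nix -- sketch of a derivation. The output lands at a path like
# /nix/store/<hash>-hello-example-2.12.1, where <hash> covers the source,
# the build recipe, and every dependency, so a changed input means a new path
# alongside the old one rather than overwriting it.
{ pkgs ? import <nixpkgs> { } }:

pkgs.stdenv.mkDerivation rec {
  pname = "hello-example";
  version = "2.12.1";

  src = pkgs.fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    # Placeholder hash: Nix refuses to build and prints the expected hash if
    # the downloaded source does not match this value.
    sha256 = pkgs.lib.fakeSha256;
  };
}
```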