Use new containers, that's what they're for.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
That would be ideal, per my understanding of the architecture.
So will docker then minimize the system footprint for me? If I run two mysql containers, it won't necessarily take twice the resources of a single mysql container? I'm seeing that the existing mysql process in top is using 15% of my VPS's RAM, and I don't want to spin up another one if it's going to scale linearly.
If I run two mysql containers, it won't necessarily take twice the resources of a single mysql container
It's complicated, but essentially, no.
Docker images are built in layers; each layer is a step in the build process. Layers that are identical are shared between containers, to the point of taking up the RAM of only one copy.
It should be noted, though, that docker doesn't load the whole container into memory; like on a normal linux OS, unused stuff just sits on your disk. Rather, binaries or libraries loaded by two docker containers from the same layer will only use the RAM of one instance. This is similar to how shared libraries reduce RAM usage.
Docker only gets this deduplication if you are using a layered storage driver like overlayfs or aufs, but overlayfs (overlay2) is the default.
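To sketch what "identical layers are shared" means (image contents here are made up for illustration): two images built from the same base store the base layers once on disk, and only their differing layers take extra space.

```dockerfile
# Hypothetical sketch: two Dockerfiles sharing a base layer.

# image-a/Dockerfile
FROM debian:bookworm-slim   # base layers: stored once, shared by both images
RUN apt-get update && apt-get install -y default-mysql-client   # extra layer, image-a only

# image-b/Dockerfile
FROM debian:bookworm-slim   # same base layers: reused from disk, not downloaded again
RUN apt-get update && apt-get install -y curl                   # extra layer, image-b only
```

You can see this yourself: pulling the second image skips the layers it already has, and `docker image history <image>` lists the layers an image is made of.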
Should you run more than one database container? Well I dunno how mysql scales. If there is performance benefit from having only one mysqld instance, then it's probably worth it. Like, if mysql uses up that much ram regardless of what databases you have loaded in a way that can't be deduplicated, then you'd definitely see a benefit from a single container.
What if your services need different database versions, or even software? Then different database containers is probably better.
Thank you for an excellent explanation and blogpost. I'm getting conflicting answers, even on this question, but most authoritative sources do back up what you're saying re: FS. I'm trying to wrap my head around how that works, specifically with heavy processes. I'm running on a VPS with 2 GiB of RAM, and mysql is using 15% of that.
At this point I have my primary container running. I guess I'll just have to try spinning up new ones and see how things scale.
What if your services need different database versions, or even software? Then different database containers is probably better.
This version-independence was what attracted me to docker in the first place, so if it doesn't work well this way then I may just go back to a conventional setup and deal with dependency hell like I used to - pantsseat.gif.
it won't necessarily take twice the resources of a single mysql container
It will, as far as runtime resources go.
You can (and should) just use the one MySQL container for all your applications. Set up a different database/schema for each container.
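As a minimal sketch of that setup (service names, images, and passwords are placeholders, not anything from the thread): one `db` service, with each app pointing at it and using its own schema.

```yaml
# Sketch: one shared MySQL container serving two apps.
# All names and credentials here are hypothetical placeholders.
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: change-me
    volumes:
      - dbdata:/var/lib/mysql     # persist data outside the container

  app1:
    image: example/app1           # hypothetical app image
    environment:
      DB_HOST: db                 # both apps resolve the same container by service name
      DB_NAME: app1_db            # each app gets its own schema

  app2:
    image: example/app2
    environment:
      DB_HOST: db
      DB_NAME: app2_db

volumes:
  dbdata:
```

The schemas themselves (`app1_db`, `app2_db`) would still need to be created once, e.g. via `CREATE DATABASE`, or an init script mounted into the MySQL container.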
I'm getting conflicting replies, so I'll try running separate containers (which was the point of going the docker way anyway - to avoid version dependency problems).
If it doesn't scale well I may just switch back to non-container hosting.
To elaborate a bit more, there is the MySQL resource usage and the docker overhead. If you run two containers that are the same, the docker overhead will only ding you once, but the actual MySQL process will consume its own CPU and memory inside each container.
So by running two containers you are going to be using an extra couple hundred MB of RAM (whatever MySQL's minimum memory footprint is)
AFAIK it won't, and should you still hit a bottleneck, you can limit the maximum resources a service may use.
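Compose supports per-service resource caps directly; a sketch (the numbers are arbitrary examples, not recommendations):

```yaml
# Sketch: capping a service's resources in a compose file.
services:
  db:
    image: mysql:8.0
    mem_limit: 512m   # hard RAM ceiling for this container
    cpus: 1.0         # at most one CPU's worth of time
```

With a cap like this, MySQL inside the container sees the limit instead of the whole VPS's RAM, which also stops one runaway service from starving the others.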
So from what I get reading your question, I would recommend reading more about containers, compose files, and how they work.
To your question: I assume when you talk about adding a container you are actually referring to compose files (often called 'stacks')? Containers add basically no computational overhead.
I keep my services in separate compose files. Every service that needs a db gets an extra one. This helps keep things simple and modular.
I need to upgrade a db for a service? -> I do just that and can leave everything else untouched.
Also, compose typically creates a network automatically in which all the services of that stack communicate. Separating the compose files helps isolate them a little bit with the default settings.
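A sketch of that one-db-per-stack layout (app name and credentials are placeholders): each stack bundles an app with its own database, and compose's automatic per-stack network keeps `db` resolvable only from inside that stack.

```yaml
# Sketch: a self-contained stack with its own database.
# e.g. saved as app1/docker-compose.yml; names are hypothetical.
services:
  app1:
    image: example/app1
    depends_on:
      - db
    environment:
      DB_HOST: db               # resolves via this stack's private default network

  db:
    image: mysql:8.0            # pinned per stack, so upgrading one app's db touches nothing else
    environment:
      MYSQL_ROOT_PASSWORD: change-me
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  dbdata:
```

The trade-off versus a single shared MySQL is the one discussed above: more isolation and independent versions, at the cost of one MySQL process's memory footprint per stack.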
Aren't containers the product of compose files? I.e., the compose files spin up containers. I understand the architecture, I'm just not sure how docker streamlines separate containers running the same process (e.g., mysql).
I'm getting some answers saying that it deduplicates, and others saying that it doesn't. It looks more likely that it's the former though.
A compose file is just the configuration of one or many containers. The container image is downloaded from the chosen registry and pretty much does not get touched.
A compose file 'composes' multiple containers together; that's where the name comes from.
When you run multiple databases, they run in parallel, so every database has its own processes. You can even see them on the host system by running something like top or htop. The container images themselves can get deduplicated: images that contain the same layer just reuse the already-downloaded files from that layer. A layer is nothing more than a bundle of files. For example, you can choose an 'ubuntu layer' as the base of your container image, and every image you download that uses that same layer will simply reuse those files at creation time. But that basically does not matter; we are talking about a few tens or hundreds of MB in extreme cases.
Importantly, those files are only shared statically: changing a file in one container does not affect the other. Every container has its own isolated filesystem.
I understand the architecture, I'm just not sure how docker streamlines separate containers running the same process (e.g., mysql).
Quite simple, actually: it gives every container its own environment thanks to namespacing. Every process thinks (more or less) that it is running on its own machine.
There are quite simple docker implementations with just a couple hundred lines of code.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
Fewer Letters | More Letters
---|---
HTTP | Hypertext Transfer Protocol, the Web
VPS | Virtual Private Server (opposed to shared hosting)
nginx | Popular HTTP server
2 acronyms in this thread; the most compressed thread commented on today has 7 acronyms.
[Thread #436 for this sub, first seen 18th Jan 2024, 10:55] [FAQ] [Full list] [Contact] [Source code]
Containers are very lightweight. I have no desire to build anything so I always just add another service container to my existing stacks.
That was my impression as well. But since I'm on a low-RAM VPS any overhead in RAM adds up, and I wanted to know how process deduplication works before I get into it.
This is how I do it; not saying it's the best way, but it serves me well :).
For each type of application, 1 docker-compose.yml. This will have all linked containers in 1 file, but all your different applications are separate!
Every application in its respective folder:
- /home/user/docker/app1/docker-compose.yml
- /home/user/docker/app2/docker-compose.yml
- /home/user/docker/app3/docker-compose.yml
Everything is behind an application proxy (traefik in my case) and served with a self-signed certificate.
I access all my apps through their domain name on my LAN with wireguard.
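A sketch of how one of those per-app compose files hooks into the traefik proxy via labels (the domain, service name, and network name are placeholder assumptions, not from the thread):

```yaml
# Sketch: exposing one app through a traefik reverse proxy.
# app1, app1.home.lan, and the 'proxy' network are hypothetical names.
services:
  app1:
    image: example/app1
    labels:
      - traefik.enable=true
      - traefik.http.routers.app1.rule=Host(`app1.home.lan`)   # route by domain name
      - traefik.http.routers.app1.tls=true                     # serve over TLS
    networks:
      - proxy

networks:
  proxy:
    external: true   # pre-created shared network that the traefik container also joins
```

The shared external network is what lets traefik, running in its own stack, reach each app container while the apps otherwise stay in their separate compose files.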
Yes this is what I want to do. My question is how docker manages shared processes between these apps (for example, if app1 uses mysql and app2 also uses mysql).
Does it take up the RAM of 2 mysql processes? It seems wasteful if that's the case, especially since I'm on a low-RAM VPS. I'm getting conflicting answers, so it looks like I'll have to try it out and see.
Nah, that's not how it works! I have over 10 applications and half of them have databases; that's the prime objective of containers! Less resource-intensive and easier to deploy on low-end machines. If I had to deploy 10 VMs for my 10 applications, my computer would not be able to handle it!
I have no idea how it works underneath; that's a more technical question about how container engines work. But if you searx it or ask chatGPT (if you use that kind of tool), I'm sure you will find out how it works :).
This is promising, thanks!
I would suggest having nginx as a reverse proxy (I prefer avoiding a container for it, as it's easier to manage) and then have your services in whatever medium you prefer.
Yes, that's exactly what I'm doing now, I was only unsure about how to map the remaining services - in the same docker containers, or in new ones.
Separate. That's the whole point of containerisation! Otherwise you're just doing a regular deploy with extra steps
Thank you. Yes, that makes sense. I guess it's fairly obvious in hindsight.