Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
I was like you and avoided it for a long time, sticking to dedicated, lean VMs for each thing I was running. I decided to learn it, mostly out of curiosity, and I'll be honest: I like the convenience of it a lot. Containers are easier to deploy and tend to have lower overhead than a single-purpose VM running the same software.
Around the same time I switched my VM server over to Proxmox and learned about LXC containers. Those are also pretty nifty and a nice middle ground between a full VM and a Docker container.
Currently I have a mixed environment because I like to use my homelab to learn, but most new stuff I deploy tends to go in this order: Docker > LXC > full VM.
Why wouldn't you want to use containers? I'm curious. What do you use now? Ansible? Puppet? Chef?
Currently no virtualisation at all - just my OS on bare metal with some apps installed. Remember, this is a single machine sitting in my basement running Samba and a couple of other things - there's not much to orchestrate :-)
Oh, I thought you had multiple machines.
I use Docker because each service I use requires different libraries with different versions. With containers, that doesn't matter. It also provides some rudimentary security: if an attacker gets in, they'll have to break out of the container first to get at the rest of the system. Each container can run as a different user, so even if they do get out of the container, at worst they'll be able to destroy the data that user has access to. Sure, they'll still see other stuff on the network, but I think it's better than being straight pwned.
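A minimal sketch of that kind of per-container lockdown in a compose file (the service name, image, UID, and paths here are made up for illustration, not a recommended production setup):

```yaml
services:
  webapp:                        # hypothetical service name
    image: nginx:alpine          # any image works the same way
    user: "1001:1001"            # run as an unprivileged, service-specific UID:GID
    read_only: true              # root filesystem mounted read-only
    volumes:
      - ./webapp-data:/data      # the only path this user can write to (or destroy)
```

With a different `user:` per service, a breakout from one container doesn't automatically grant write access to another service's data.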
It's quite easy to use once you get the hang of it. In most situations it's the preferred option, because you can keep each service in its own container and choose where its relevant files live, which lets you properly isolate your applications. And on single-purpose servers, it makes deploying applications and maintaining dependencies significantly easier.
At the very least, it's a great tool to add to your toolbox to use as needed.
I am running all my software services with docker. It's stupid simple to manage and I have all of my running services in one paradigm.
Learning Docker is always a big plus. It's not hard. If you're comfortable with CLI commands, it should be a breeze. Even if you're not, you should get used to it very fast.
Yes. Let me give you an example of why it's very nice: I migrated one of my machines at home from an old x86-64 laptop to an arm64 ODROID this week. I had a couple of applications running, 8 or 9 of them, all organized in a docker compose file with all persistent storage volumes mapped to plain folders in a directory. All I had to do was stop the compose setup, copy the folder structure, install Docker on the new machine and start the compose setup. There was one minor hiccup, since I'd forgotten that one of the containers was built locally, but since all the other software has arm64 images available under the same name, it just worked. Changed the host IP and done.
One of the very nice things is the portability of containers, as well as the reproducibility (within limits) of the applications, since you divide them into a stateless part (the container) and a stateful part (the volumes). Definitely give it a go!
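For anyone curious what that pattern looks like, here's a sketch (the image name and paths are examples, not the poster's actual stack): every service bind-mounts its state into a plain folder next to the compose file, so the whole stateful side is one directory tree you can copy.

```yaml
services:
  nextcloud:                             # example service; the poster ran 8 or 9 of these
    image: nextcloud:latest              # multi-arch image: the same tag resolves on x86-64 and arm64
    ports:
      - "8080:80"
    volumes:
      - ./data/nextcloud:/var/www/html   # all persistent state lives under ./data next to the compose file
```

The migration described above then boils down to `docker compose down`, copying the directory (compose file plus `./data`) to the new machine, and `docker compose up -d` there. Any image that publishes an arm64 variant under the same name comes up unchanged; only locally built images need rebuilding.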
Docker is nice for things that have complex installations and I want a very specific implementation that I don't plan to tweak very much. Otherwise, it's more hassle than it's worth. There are lots of networking issues like limited/experimental support for IPv6, and too much is hidden and preconfigured, making it difficult to make adjustments that would otherwise just be a config file change.
So it's good for products like a mail server where you want to use the exact software they use, say postfix + dovecot + roundcube + nginx + acme + MySQL + SpamAssassin + amavisd, etc. But if you want to use an existing reverse proxy and cert setup, or a different spam filter or database, it becomes a huge hassle.
If you have a homelab and aren't using containers, you are missing out A LOT! docker-compose is a beautiful thing for a homelab. <3
It's convenient. Can't hurt to get used to it, for sure, in that it's useful not to have to go through dependency hell installing things sometimes. It's based on kernel features I don't see Linus pulling out, so I think you'll only see more of it.
As someone who runs nix-only at home, I mostly use its underlying tech in the form of snaps/flatpaks, though. I use docker itself at work constantly, but at home, snaps/flatpaks tend to do the "minimize thinking about dependencies and building" bit but in a workflow more convenient for desktop applications.
As someone who is not a former sysadmin and only vaguely familiar with *nix, I've been able to turn my home NAS (bought strictly to hold photos and videos backed up from our phones) into a home media server by installing Docker, learning how the yml files work, how containers network, etc., and it's been awesome.
Yeh, I'm not a system admin in any meaning of the word, but Docker is so simple that even I got around to figuring it out, and to me it just exists to save time and prevent headaches (dependency hell).
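To give a sense of how little YAML that kind of NAS-to-media-server setup takes, a minimal compose file might look something like this (Jellyfin is just one popular choice of media server, and the host paths are placeholders you'd adjust for your NAS):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin             # official multi-arch image
    ports:
      - "8096:8096"                      # web UI
    volumes:
      - ./jellyfin-config:/config        # app state, kept next to the compose file
      - /mnt/nas/media:/media:ro         # existing NAS library, mounted read-only
    restart: unless-stopped              # come back up after reboots
```

One `docker compose up -d` later and the NAS is serving media, without installing anything but Docker on the host.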
I think it's a good tool to have on your toolbelt, so it can't hurt to look into it.
Whether you will like it or not, and whether you should move your existing stuff to it is another matter. I know us old Unix folk can be a fussy bunch about new fads (I started as a Unix admin in the late 90s myself).
Personally, I find docker a useful tool for a lot of things, but I also know when to leave the tool in the box.
Definitely not a fad. It's used all over the industry. It gives you a lot more control over the environment where your hosted apps run. There may be some overhead, but it's worth it.
Try other container technologies like LXC, or go all the way and play with FreeBSD jails. The quality of the Docker images you can find around is horrendous, given that Docker itself is built for convenience, not security. It's not something I would trust.
There's nothing wrong with OCI Images. If you're concerned about the security of Docker (which, imo, you should be) there are other container runtimes that don't have its security tradeoffs (e.g. podman).
Some people seem to hate on it, but I love Docker, it works well for what it has to do and has relatively low overhead as far as I can tell. I personally virtualize a Debian server on Proxmox for my containers just so as to keep everything even more compartmentalized, but it takes more work than it's worth to set up.
And if you don't like Docker for whatever reason, you can also try Podman which is API compatible with Docker for the most part.
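If you want to try that, the switch can be as small as a line in your shell config (assuming podman is installed; it deliberately mirrors Docker's CLI, so most existing commands and scripts keep working):

```shell
# Drop-in substitution: muscle memory and scripts keep calling "docker",
# but the commands run under podman (daemonless, and rootless by default).
alias docker=podman
```

Compose users can likewise point the compose CLI at podman's Docker-compatible socket, though that needs a bit more setup.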