this post was submitted on 15 Oct 2023
71 points (97.3% liked)

Selfhosted


For the last two years, I've been treating compose files as individual runners for individual programs.

Then I brainstormed the concept of having one singular docker-compose file that spells out every single running container on my system (every one that can use compose, anyway), where each install starts at the same root directory and volumes branch out from there.

Then I found out this is how most people use compose: one compose file, with volumes and directories branching out from wherever ./ is called.

THEN I FOUND OUT... that many people who discover this move their installations to podman, because compose apps each rely on different compose file versions, calling those versions breaks the concept of having one singular docker-compose.yml file, and podman doesn't need a version for compose files.

Is there some meta for the best way to handle these apps collectively?

top 40 comments
[–] [email protected] 37 points 1 year ago (2 children)

Multiple compose files, each in their own directory for a stack of services. Running Lemmy? It goes in ~/compose_home/lemmy, with binds for the image resizer and database as folders inside that directory. Running a website? It goes in ~/compose_home/example.com, with its static files, API, and database binds all as folders inside that. Etc. etc. Use a gateway reverse proxy (I prefer Traefik, but to each their own) and have each stack join its network to expose only what you need.
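
As a sketch of that layout (the image tag, bind paths, and the shared proxy network name are all placeholders):

```yaml
# ~/compose_home/lemmy/docker-compose.yml: one stack per directory (sketch)
services:
  lemmy:
    image: dessalines/lemmy:0.18.5   # example tag; pin whatever you actually run
    volumes:
      - ./pictrs:/pictrs                      # image-resizer bind, inside the stack dir
      - ./database:/var/lib/postgresql/data   # database bind, inside the stack dir
    networks:
      - default   # private to this stack
      - proxy     # shared network the reverse proxy (e.g. Traefik) watches

networks:
  proxy:
    external: true   # created once up front: docker network create proxy
```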

Backup is easy: snapshot the volume bind (stopping any service individually as needed). Moving a specific stack to another server is easy: just move the directory over to the new system (updating gateway info if required). Upgrading is easy: just upgrade the individual stack and off to the races.

Pulling all stacks into a single compose for the system as a whole is nuts. You lose all the flexibility and gain… nothing?

[–] [email protected] 7 points 1 year ago (1 children)

This. And I recently found out you can also use includes in compose v2.20+, so if your stack complexity demands it, you can have a small top-level docker-compose.yml with includes to smaller compose files, per service or any other criteria you want.

https://docs.docker.com/compose/multiple-compose-files/include/
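
For illustration, a minimal top-level file using include might look like this (file names are placeholders):

```yaml
# docker-compose.yml (Compose v2.20+)
include:
  - lemmy/compose.yml      # each included file is an ordinary compose file
  - website/compose.yml

services:
  traefik:                 # services can still be defined at the top level
    image: traefik:v3.0
    ports:
      - "80:80"
      - "443:443"
```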

[–] [email protected] 1 points 1 year ago

I prefer compose merge, because my "downstream" services can propagate their depends/networks to the things that depend on them upstream.

There are env variables you set in .env, so it's similar to include.

The one thing I prefer about include is that each included directory can have its own .env file, which merges with the top-level .env. With merge, it seems you're stuck with one .env file for all in-file substitutions.
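
For comparison, a minimal merge setup (file names assumed): everything passed with -f is merged in order, and variable substitution draws from the single top-level .env.

```yaml
# compose.yml: base definitions
services:
  app:
    image: myapp:${APP_TAG}   # APP_TAG comes from the one shared .env file

# compose.prod.yml: merged on top with
#   docker compose -f compose.yml -f compose.prod.yml up -d
# Later files override or extend keys from earlier ones.
```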

[–] [email protected] 3 points 1 year ago

That's what I do. I always thought I was doing it "wrong" but it just made sense to me. I can also just up/down/etc... compose files to individually pull new images, test things, disable a service, and apply config updates without affecting another container at all.

I even keep my docker config files in a separate directory, so I can back up the docker composes in a second over the network.

I started by using a single MariaDB instance with multiple databases, but now I see the benefits of moving to one database container per compose file that needs it. It makes things even more flexible, and I don't need to start up MariaDB and Redis before all of my containers.

File permission problems? Down the compose that needs it, fix it, and re-up it, without losing any uptime for other services and without ever having to kludge raw docker commands together.
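
That workflow, as a command sketch (paths and UID/GID are examples):

```shell
# Take down only the affected stack
docker compose -f ~/compose_home/lemmy/docker-compose.yml down

# Fix ownership on the offending bind mount
sudo chown -R 1000:1000 ~/compose_home/lemmy/pictrs

# Bring just that stack back up; every other stack keeps running
docker compose -f ~/compose_home/lemmy/docker-compose.yml up -d
```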

[–] [email protected] 30 points 1 year ago (2 children)

I think compose is best used somewhere in between.

I like to have separate compose files for all my service "stacks". Sometimes that's a frontend, backend, and database. Other times it's just a single container.

It's all about how you want to organize things.

[–] [email protected] 13 points 1 year ago

I do this, one compose file per application, containing everything that application needs: volumes, networks, secrets.

In single docker host land, each application even has its own folder with the compose file and any other artifacts in it.

[–] [email protected] 3 points 1 year ago

Yeah, this post had me a little worried I was doing something wrong, haha. But I do it just like that: a compose file per stack.

[–] [email protected] 16 points 1 year ago (2 children)

I've always heard the opposite advice: don't put all your containers in one compose file. If you have to update an image for one app, wouldn't you have to restart all of your apps?

[–] [email protected] 5 points 1 year ago

If by app you mean container, no. You pull the latest image and rerun docker compose. It will make only the necessary changes, in this case restarting the container to update.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

You can target a single service, or several, within a compose stack:

docker compose -f /path/to/compose.yml restart NameOfServiceInCompose

[–] [email protected] 1 points 1 year ago

whoa, I never knew that. Great tip!

[–] [email protected] 12 points 1 year ago

I use multiple compose files for simplicity

[–] [email protected] 10 points 1 year ago (1 children)

I moved from compose to using Ansible to deploy containers. The Ansible container config looks almost identical to a compose file but I can also create folders, config files, set permissions, etc.

[–] [email protected] 5 points 1 year ago (1 children)

Can you give an example playbook?

[–] [email protected] 6 points 1 year ago

Sure. Below is an example playbook that is fairly similar to how I'm deploying most of my containers.

This example creates a folder for samba data, creates a config file from a template and then runs the samba container. It even has a handler so that if I make changes to the config file template it will cycle the container for me after deploying the updated config file.

I usually structure everything as an ansible role which just splits up this sort of playbook into a folder structure instead. ChatGPT did a great job of helping me figure out where to put files and generally just sped up the process of me creating tasks to do common things like setup a cronjob, install a package, or copy files around.

- name: Run samba
  hosts: servername

  vars:
    samba_data_directory: "/home/me/docker/samba"

  tasks:
  - name: Create samba data directory
    ansible.builtin.file:
      path: "{{ samba_data_directory }}"
      state: directory
      mode: '0755'

  - name: Create samba config from a jinja template file
    ansible.builtin.template:
      src: templates/smb.conf.j2
      dest: "{{ samba_data_directory }}/smb.conf"
      mode: '0644'
    notify: Restart samba container

  - name: Run samba container
    community.docker.docker_container:
      name: samba
      image: dperson/samba
      ports:
        - 445:445
      volumes:
        - "{{ samba_data_directory }}:/etc/samba/"
        - "/home/me/samba_share:/samba_share"
      env:
        TZ: "America/Chicago"
        UID: '1000'
        GID: '1000'
        USER: "me;mysambapassword"
        WORKGROUP: "my-samba-workgroup"
      restart_policy: unless-stopped

  handlers:
  - name: Restart samba container
    community.docker.docker_container:
      name: samba
      restart: true

[–] [email protected] 10 points 1 year ago (1 children)

As others have said, I have a root docker directory and directories inside for all my stacks, like Plex. Then I run this script, which loops through them all to update everything in one command.

for n in plex-system bitwarden freshrss changedetection.io heimdall invidious paperless pihole transmission dashdot
do
    cd "/docker/$n" || continue
    docker-compose pull
    docker-compose up -d
done

echo "Removing old docker images..."
docker image prune -f
[–] [email protected] 17 points 1 year ago (3 children)

Or just use the Watchtower container to auto-update them 😉
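
For reference, a minimal Watchtower service looks roughly like this (the schedule is an example):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets it inspect and recreate containers
    environment:
      - WATCHTOWER_CLEANUP=true           # prune superseded images after updating
      - WATCHTOWER_SCHEDULE=0 0 4 * * *   # six-field cron: daily at 04:00
    restart: unless-stopped
```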

[–] [email protected] 7 points 1 year ago (2 children)

I don't like the auto-update function. I also use a script similar to the one OP uses (with a .ignore file added). I like to be in control of when (or if) updates happen. I use Watchtower as a notification service.

[–] [email protected] 1 points 1 year ago (1 children)

Exactly, when it updates, I want to initiate it to make sure everything goes as it should.

[–] [email protected] 1 points 1 year ago

Nothing of mine is that important that I couldn't recreate/roll back the container if it does happen to screw up.

[–] [email protected] 1 points 1 year ago

I scream-test myself… kidding aside, I try to pin to major versions where possible: postgres:16-alpine, for example, will generally not break between updates, and things should just chug along. It's when indie devs tag nothing other than latest, or don't adhere to semantic versioning best practices, that I keep Watchtower off and update manually once in a blue moon.

[–] d13 1 points 1 year ago

I prefer manually updating so that I can sanity-test for breaking changes.

I have a script like the one above but I don't loop through the services; I just run it for each service and then test it. I also only have it delete versions of a certain age.

[–] Zikeji 1 points 1 year ago

I use Diun to notify me when an image is updated. I also use strict versions in my compose file, that way if I have to restore to another system I don't soft brick a container due to a breaking version change.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
LXC Linux Containers
NAT Network Address Translation
Plex Brand of media server package
VPS Virtual Private Server (opposed to shared hosting)

[Thread #217 for this sub, first seen 15th Oct 2023, 20:15]

[–] [email protected] 3 points 1 year ago (2 children)

The best way is to use Podman's Systemd integration.

[–] [email protected] 2 points 1 year ago (2 children)

Doesn't systemd come with its own container thingy?

[–] [email protected] 3 points 1 year ago

You're probably thinking about systemd-nspawn. Technically yes they're containers, but not the same flavour of them. It's more like LXC than Docker: it runs init and starts a full distro, like a VM but as a container.

[–] [email protected] 0 points 1 year ago

Nope, but it integrates very well with Podman.

[–] [email protected] 2 points 1 year ago (3 children)

This is what I use whenever I make my own services or am using a simple service with only one container. But I have yet to figure out how to convert a more complicated service like Lemmy that already uses docker-compose, so I just use podman-docker and emulate docker-compose with podman. But that doesn't get me any of the benefits of systemd, and now my podman has a daemon, which defeats one of the main purposes of podman.

[–] [email protected] 4 points 1 year ago (1 children)

Just forget about podman-compose and use simple Quadlet container files with Systemd. That way it is not all in the same file, but Systemd handles all the inter-relations between the containers just fine.

Alternatively, Podman also supports Kubernetes configuration files, which is probably closer to what you have in mind, but I never tried that myself, as the above is much simpler and better integrated with existing systemd service files.
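
A minimal Quadlet unit, for illustration (image, port, and path are placeholders). Dropped into ~/.config/containers/systemd/ (or /etc/containers/systemd/ for root), it becomes a regular systemd service after systemctl daemon-reload:

```ini
# myapp.container (Quadlet, Podman 4.4+)
[Unit]
Description=My app container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=%h/myapp/data:/usr/share/nginx/html:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```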

[–] [email protected] 1 points 1 year ago (1 children)

Quadlet

Requires podman 4.4 though

[–] [email protected] 1 points 1 year ago (1 children)

No, from that version on, it is integrated in Podman, but it was available for earlier versions as a 3rd party extension as well.

But if you are not yet on Podman 4.4 or later you should really upgrade soon, that version is quite old already.

[–] [email protected] 1 points 1 year ago

you should really upgrade soon

Debian stable has podman 4.3 and 4.4 is not in stable-backports

[–] [email protected] 2 points 1 year ago (1 children)

You can use podman pods and generate the systemd file for the whole pod.
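
As a sketch (pod and image names are examples): create the pod, attach containers to it, then let Podman emit unit files for the whole group.

```shell
# Group related containers into one pod that shares a network namespace
podman pod create --name myapp -p 8080:80
podman run -d --pod myapp --name myapp-web docker.io/library/nginx:alpine
podman run -d --pod myapp --name myapp-db docker.io/library/postgres:16

# Write systemd unit files for the pod and each member container
# (--new recreates containers on start instead of reusing them)
podman generate systemd --new --files --name myapp
```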

[–] [email protected] 1 points 1 year ago

But how do I convert the docker-compose file to a pod definition? If I have to do it manually, that's a pass, because I don't want to do it again if Lemmy updates and significantly changes its docker-compose file, which it did when 0.18.0 came out.

[–] [email protected] 2 points 1 year ago (1 children)

Podman with systemd works better if you just do your podman run command with all the variables and stuff and then run podman generate systemd.

Podman compose feels like a band-aid for people coming from docker compose. If you run podman compose and then do podman generate systemd, it will just make a systemd unit that starts podman compose. In my experience, having all of the config in the actual systemd unit file makes your life easier in the long run. The fewer config files the better, I say.

[–] [email protected] 3 points 1 year ago

It's even simpler now that Quadlet is integrated in Podman 4.x or later.

[–] [email protected] 0 points 1 year ago

Have you tried portainer?

[–] [email protected] -1 points 1 year ago

You can always add a Makefile to traverse the directories.
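
For example, a small Makefile at the root of the compose directories could drive every stack with one command (stack names are placeholders; recipe lines must be tab-indented):

```make
STACKS := lemmy website pihole   # one compose stack per subdirectory

update:
	for d in $(STACKS); do \
		docker compose --project-directory $$d pull && \
		docker compose --project-directory $$d up -d; \
	done
```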

[–] [email protected] -2 points 1 year ago

I'm currently using YunoHost behind CG-NAT with a WireGuard VPS bypass, but I plan on moving to a Dockerized setup soon because YNH still uses an outdated version of Debian. What do you recommend to keep my setup as similar to YNH as possible?