this post was submitted on 09 Jul 2023
58 points (100.0% liked)

Selfhosted


Hi,

I'm using docker-compose to host all my server services (jellyfin, qbittorrent, sonarr, etc.). I've recently grouped some of them into categories and merged the individual docker-compose.yml files I had for each service into one file per category. But is there actually any reason not to keep them all together?

The reason I ask is that I've started configuring homepage and thought to myself, "wouldn't it be cool if, instead of giving the server IP each time (per configured service in homepage), I just used the service name?" (AFAIK this only works if the containers are all defined in the same file.)

top 36 comments
[–] [email protected] 38 points 1 year ago (7 children)

I have a folder that all my docker services are in. Inside it is a folder for each discrete service, and within that folder is the compose file necessary to run the service. Also in that folder are all the storage folders for that service, so it's completely portable: move the folder to any server, run it, and you're golden. I shut down all the services with a script, then I can just tar the whole docker folder, and every service and its data is backed up and portable.
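That stop-then-tar flow can be sketched roughly like this. DOCKER_ROOT and BACKUP_DIR are assumed paths; adjust them to your own layout:

```shell
#!/bin/bash
# Sketch of the backup flow described above. DOCKER_ROOT and BACKUP_DIR
# are assumed paths, not the commenter's actual ones.
DOCKER_ROOT="${DOCKER_ROOT:-$HOME/docker}"
BACKUP_DIR="${BACKUP_DIR:-$HOME/backups}"

stop_all() {
    # Bring every per-service stack down before archiving its data
    for dir in "$DOCKER_ROOT"/*/; do
        docker compose -f "${dir}docker-compose.yml" down
    done
}

backup_all() {
    # One dated tarball holds every compose file plus its persistent data
    mkdir -p "$BACKUP_DIR"
    tar -czf "$BACKUP_DIR/docker-$(date +%F).tar.gz" \
        -C "$(dirname "$DOCKER_ROOT")" "$(basename "$DOCKER_ROOT")"
}

# stop_all && backup_all    # uncomment to actually run
```

Because the compose file and the bind-mounted data live side by side, the tarball is the whole service.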

[–] [email protected] 6 points 1 year ago

This is exactly what I do and could not be happier!

[–] [email protected] 6 points 1 year ago

In case anyone cares here is my script, I use this for backups or shutting down the server.

#!/bin/bash

logger "Stopping Docker compose services"

services=(/home/user/docker/*/)    # Array of the full paths to all service subdirs

for dir in "${services[@]}"
do
    docker compose -f "${dir}docker-compose.yml" down &
done

#wait for all the background commands to finish
wait 
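For completeness, a matching start script (not in the original comment, but the mirror image of the stop loop above, assuming the same /home/user/docker layout) might look like:

```shell
#!/bin/bash
# Companion sketch: bring every stack back up after a backup or reboot.
# Assumes the same /home/user/docker/<service>/docker-compose.yml layout.
start_all() {
    for dir in /home/user/docker/*/; do
        docker compose -f "${dir}docker-compose.yml" up -d &
    done
    wait    # let all background "up" commands finish
}

# start_all    # uncomment to actually run
```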
[–] [email protected] 4 points 1 year ago

Exactly my setup and for exactly the reasons you mentioned

[–] YonatanAvhar 2 points 1 year ago (1 children)

Exactly what I do except my master folder is ~

[–] [email protected] 2 points 1 year ago

I do ~/docker, so I also have a docker-prototype folder for my sandbox/messing around with non-production stuff, and a third folder for retired docker services, so I keep the recipe and data in case I go back.

[–] [email protected] 1 points 1 year ago (1 children)
[–] [email protected] 1 points 1 year ago

To answer my own question, yes, yes it does. Should've done this ages ago...

[–] [email protected] 1 points 1 year ago

@czardestructo I like the tidiness of this.

[–] [email protected] 37 points 1 year ago (1 children)

For simplicity's sake alone, I would say no. As long as services don't share infrastructure (e.g. a database), you shouldn't mix them, so you have an easier time updating your scripts.

Another point is handling stacks. When you create containers via compose, you are not supposed to touch them individually. Collecting them all into one file, or even just into categories, muddies that concept, since you have unrelated services grouped in a single stack and would need to update/up/down all of them even if you only needed that for a single one.

Lastly, networks. Usually you'd add networks to your stacks to isolate their respective backends into closed networks, with only the exposing container (e.g. a web frontend) in the publicly available network, to increase security and avoid side effects.
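A minimal sketch of that layout (nginx and postgres are just placeholder images): only the web container joins the public-facing network, while the database sits on an internal one.

```yaml
services:
  web:
    image: nginx            # placeholder frontend image
    ports:
      - "8080:80"           # only this container is exposed
    networks:
      - frontend
      - backend
  db:
    image: postgres         # placeholder backend image
    networks:
      - backend             # not reachable from outside the stack

networks:
  frontend:
  backend:
    internal: true          # no host/internet access on this network
```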

[–] [email protected] 8 points 1 year ago (2 children)

So right now I have a single compose file with a file structure like this:

docker/
├─ compose/
│  ├─ docker-compose.yml
├─ config/
│  ├─ service1/
│  ├─ service2/

Would you in that case use a structure like the following?

docker/
├─ service1/
│  ├─ config/
│  ├─ docker-compose.yml
├─ service2/
│  ├─ config/
│  ├─ docker-compose.yml

Or a different folder structure?

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago)

The second one is exactly what I have: one folder for each service, containing its compose file and all persistent data belonging to that stack (unless it's something like your media files).

[–] einsteinx2 2 points 1 year ago

The second is exactly how I do it. Keeps everything separate so easy to move individual services to another host if needed. Easy to restart a single service without taking them all down. Keeps everything neat and organized (IMO).

[–] [email protected] 15 points 1 year ago* (last edited 1 year ago)

No, keep them ungrouped, migration to a new server is much easier, otherwise you need to migrate everything everywhere all at once

You can have the same effect (connect to the named container) if you create a docker network and place everything on the same network
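A sketch of that approach: "homelab" is an assumed network name, created once on the host and then marked external in each compose file, so containers from different files can resolve each other by name.

```yaml
# First, once on the host:  docker network create homelab
services:
  jellyfin:
    image: jellyfin/jellyfin
    networks:
      - homelab

networks:
  homelab:
    external: true    # created outside compose, shared by all stacks
```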

[–] [email protected] 12 points 1 year ago

No, no you should not. I haven't used homepage, but you probably just need to attach the services to the same network, or map the ports on the host and use the host IP.

[–] troy 10 points 1 year ago

You probably want to keep services with different life cycles in separate docker compose files so you can shut down/restart/reconfigure them separately. If containers depend on each other, then combining them into one compose file makes sense.

That said, experimenting is part of the fun, nothing wrong with testing it out and seeing if you like it.
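The "containers that depend on each other" case can be encoded directly in compose; a hedged sketch with placeholder images:

```yaml
services:
  app:
    image: ghcr.io/example/app    # hypothetical application image
    depends_on:
      - db                        # db is started before app
  db:
    image: postgres
```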

[–] [email protected] 8 points 1 year ago (2 children)

I would not. Create an external network and just add those to the compose files.

[–] [email protected] 4 points 1 year ago (2 children)

Bingo. Or just bite the bullet and dive into Kubernetes

[–] [email protected] 3 points 1 year ago

Overkill for home use

[–] [email protected] 1 points 1 year ago

Back when I used to use Docker this is what I was doing. If you use a reverse proxy that is Docker-aware (eg Traefik), it can still connect to the services by name and expose them out as subdomains or subpaths based on the names.

But I graduated to Kubernetes a long time ago.
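For anyone curious what that Traefik setup looks like, a rough sketch (the hostname is an assumption; Traefik discovers containers via labels read from the Docker socket):

```yaml
services:
  traefik:
    image: traefik
    command:
      - --providers.docker=true
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # lets Traefik see containers
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.lan`)"
```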

[–] [email protected] 0 points 1 year ago (1 children)

@midas @bronzing

Out of curiosity why not? this is what I have been doing forever.

[–] [email protected] 2 points 1 year ago

So there's a million ways to do things, and what works for you works for you. For me, putting all services in a single compose file only has downsides:

  • Difficult to search. I guess searching for a name and then editing the file works, but doesn't it become a mess fairly quickly? I sometimes struggle with even a regular yaml file lmao
  • ^also missing an overview of exactly what you're running. docker ps -a or ctop or whatever works, but with an ls -la I clearly see what's on my system
  • How do you update containers? I use docker compose pull to update the images, which are tagged with latest.
  • I use volume mounts for config and data. A config dir inside the container is mounted like ./config:/app/data/config, which works pretty neatly if each compose file is in its own named directory
  • Turning services on/off is just going into the directory and running docker compose up -d or down. In a huge compose file, don't you have to comment the service out or something?
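The per-directory update workflow from the list above can be sketched as a small loop (the ~/docker layout is an assumption, not the commenter's stated setup):

```shell
#!/bin/bash
# Pull the newest :latest images and restart each stack in place.
DOCKER_ROOT="${DOCKER_ROOT:-$HOME/docker}"

update_stack() {
    local dir="$1"
    docker compose -f "${dir}docker-compose.yml" pull
    docker compose -f "${dir}docker-compose.yml" up -d
}

for dir in "$DOCKER_ROOT"/*/; do
    echo "would update: $dir"    # swap echo for: update_stack "$dir"
done
```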
[–] [email protected] 5 points 1 year ago

You can use an external network if you wish to refer to them all by a name. Just make sure all the containers you wish to refer to are in it.

[–] [email protected] 4 points 1 year ago

A compose file is meant for the different components of a single service, but you're allowed to experiment with whatever you want.

[–] [email protected] 4 points 1 year ago

I personally don't. It is just messier. I only group things that belong together, like a webserver+database, torrentclient+vpn and so on.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

I'll be the opposite of everyone, I guess: I have all my services in one compose file and have never had an issue with it. Why? I have no exposed ports and everything is accessed through a reverse proxy, and, the big one, it's easy to just run docker compose and have them all come up or down.

[–] [email protected] 1 points 1 year ago

Same for me. It all mostly started from the desire to have a single MariaDB and PostgreSQL container holding all the databases. Not sure if I could achieve the same result with different compose files; perhaps I can, but I've never had the need.

I actually find my setup super comfortable to use

[–] [email protected] 2 points 1 year ago

I was thinking about that just today. I have something like 30+ services running on a single compose file and maintenance is slowly becoming hard. I'll probably move to multiple compose files.

[–] [email protected] 2 points 1 year ago

I have multiple files but a single stack. I use an alias to call compose like this:

docker compose -f file1.yaml -f file2.yaml

Etc.
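That alias might look like this (the file names are taken from the comment above; everything else is an assumption):

```shell
# Merge several compose files into one logical stack behind a short alias.
alias dc='docker compose -f file1.yaml -f file2.yaml'

# Then, e.g.:
# dc up -d    # start everything defined across both files
# dc down     # stop it all
```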

[–] [email protected] 2 points 1 year ago

I’ve thought about going that route, but ultimately decided to adopt something like portainer.io. My thought process was that some projects within each category may have overlapping dependencies, so I’d end up with multiple entries for a particular dependency in the same file, which I didn’t like.

I don’t expose services to the internet from my home lab, so I generally just add host entries manually to each of my computers so that I don’t have to type in the IP and port.

[–] [email protected] 1 points 1 year ago (1 children)

I go so far the other way with this, personally. I actually have a separate LXC for each docker container, and a lot of the time I use docker run instead of docker-compose.

I've still not had anyone explain to me why compose is better than a single command line.

[–] [email protected] 3 points 1 year ago

I always thought the compose file is great for maintenance. You can always save the docker run commands elsewhere, so at the end of the day it's more of an orchestration choice.
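To make the comparison concrete, here's the same (assumed) container both ways; the compose version just makes the run flags declarative and easy to keep in version control:

```yaml
# One-off:
#   docker run -d --name jellyfin -p 8096:8096 -v ./config:/config jellyfin/jellyfin
# Declarative equivalent:
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config
```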
