What is Docker? (lemmy.world)
submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]
 

Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop. I am very curious to host other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.

Searching for this on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do, and why would I need it?

Also, what other services might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and am concluding that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

[–] [email protected] 75 points 1 day ago (5 children)

A program isn't just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.
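
To make that concrete, here's roughly what "shipping the whole machine" looks like in practice. A minimal sketch, not a recommendation: `jellyfin/jellyfin` is the community-published image, and the host path is a placeholder you'd adapt.

```sh
# One command pulls the app together with every library, config file,
# and runtime it was built against, then starts it. The host only
# needs Docker itself; everything else ships inside the image.
docker run -d \
  --name jellyfin \
  -p 8096:8096 \
  -v /srv/media:/media \
  jellyfin/jellyfin
```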

[–] [email protected] 3 points 18 hours ago (3 children)

But why can I "just install a program" on my Windows machine or on my phone and have it be that easy?

[–] [email protected] 7 points 15 hours ago (1 children)

You might notice that your Windows installation is something like 30 gigabytes, and there is a huge folder in the system directory called WinSxS. Microsoft bends over backwards to provide you with basically every version of every shared library ever, resulting in a system that can run programs compiled decades ago just fine.

In Linux-land usually we just recompile all of the software from source. Sometimes it breaks because glibc changed something, and sometimes (extremely rarely) it breaks because the kernel broke something; Linus considers breaking the userspace API one of the biggest no-nos in kernel development.

Even so, depending on what you're doing you can have a really old binary run on your Linux computer if the conditions are right. Windows just makes that surface area of "conditions being right" much larger.

As for your phone, every app that gets built and run for it must target a specific API version, and the amount of stuff an app is allowed to do is much more constrained. Android and iOS both provide backward compatibility for those APIs in a similar way to Windows, but the story is much less chaotic than on Linux, Windows, or even macOS, simply because a phone app isn't allowed to do that much by comparison.

[–] [email protected] 2 points 15 hours ago

In Linux-land usually we just recompile all of the software from source

That's just incorrect. Apart from three guys with nothing better to do, no one in "Linux-land" does that.

[–] [email protected] 1 points 15 hours ago

Caveat: I am not a programmer, just an enthusiast. Windows programs typically package all of their dependency libraries up with each individual program in the form of DLLs (dynamic link libraries). If two programs both require the same dependency, they each just keep a local copy in their own directory.

[–] [email protected] 1 points 15 hours ago* (last edited 15 hours ago)

In the case of phones, there's far less variety of operating systems and libraries.

A typical Android app is (eventually) Java with some bundled dependencies, and it ties into known system endpoints (for stuff like notifications and rendering graphics).

For Windows, installers are usually responsible for getting the dependencies, which is why some of them are enormous (and most installers of that size are web installers, so the initial download just looks smaller).

Docker is more aimed at developers and server deployment; you don't usually use Docker for desktop applications. That's the area where you want to avoid inconsistencies between environments, especially the ones that are hard to debug.

[–] [email protected] 8 points 1 day ago (2 children)

Docker is not a virtual machine; it's a fancy wrapper around chroot.

[–] [email protected] 4 points 16 hours ago

I'm aware of that, but OP requested "explain like I'm stupid" so I omitted that detail.

[–] [email protected] 10 points 1 day ago (2 children)

No, chroot is kind of its own thing

It is just a kernel namespace

[–] [email protected] 2 points 21 hours ago

Yes, technically chroot and jails are wrappers around kernel namespaces / cgroups, and so is Docker.

But containers were born in a post-chroot era as an attempt to make the same functionality much more user-friendly, focused on bundling cgroups and namespaces into a single superset, where chroot on its own is only namespaces. This is super visible in early Docker, where you could not individually dial those settings. It's still a useful way to explain containers in general, in the sense that comparing two similar things helps you define both of them.

Also, cgroups have evolved alongside containers and work rather differently now than they did 18 years ago when they were invented, back when this differentiation mattered more. We're at the point where differentiating between VMs and containers is getting really hard, since both more and more often rely on the same kernel features developed in recent years on top of cgroups.
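
To make the "dialing those settings" point concrete, here's a hedged sketch of how modern Docker exposes the underlying cgroup knobs individually, per container (the values are arbitrary examples):

```sh
# Each flag below maps to a cgroup control. Early Docker bundled these;
# modern Docker lets you set them one by one:
#   --memory      -> cgroup memory limit
#   --cpus        -> cgroup CPU quota
#   --pids-limit  -> cgroup process-count limit
docker run -d --memory=512m --cpus=1.5 --pids-limit=200 nginx:alpine
```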

[–] [email protected] 1 points 23 hours ago

a chroot is different, but it’s an easy way to get an idea of what docker is:

it also contains all the libraries and binaries that reference each other, such that when you run commands inside it, they resolve against the structure of the chroot rather than the host

this is far more relevant to a basic understanding of what docker does than explaining kernel namespaces. once you have the concept of “shipping around applications including dependencies”, then you can delve into isolation and other kinds of virtualisation
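
a rough sketch of the analogy, assuming a Debian-based host with the debootstrap package installed (the path is a placeholder):

```sh
# build a minimal root filesystem with its own binaries and libraries...
sudo debootstrap stable /srv/rootfs
# ...then enter it: commands in this shell now resolve against
# /srv/rootfs instead of the host. a docker image ships a similar
# self-contained tree, plus isolation that plain chroot doesn't give you.
sudo chroot /srv/rootfs /bin/bash
```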

[–] [email protected] 3 points 1 day ago* (last edited 20 hours ago) (7 children)

Isn't all of this a complete waste of computer resources?

I've never used Docker, but I want to set up an Immich server, and Docker is the only official way to install it. And I'm a bit afraid.

Edit: thanks for downvoting an honest question. Wtf.

[–] [email protected] 2 points 8 hours ago

It's not. Imagine Immich required library X to be at version Y, but another service on the server required it at version Z. That would be a PITA to maintain. Not to mention that getting a service to run at all can be difficult, because your system differs from the one it was developed on in a multitude of ways; it might simply not work because it makes assumptions about where certain files live or which APIs are available.

Docker eliminates all of those issues because it provides a reproducible environment: if it runs on one system, it runs on another. There's a lot of value in that. I'm not sure which resource you think is being wasted, but Docker is almost seamless, with so little overhead that you won't feel it even on a Raspberry Pi Zero.
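
To see the version-conflict point in action, here's a small illustration (the image tags are just examples): the same runtime at two different versions, side by side on one host, each carrying its own complete library stack.

```sh
# Two versions coexist without conflict, because each container
# ships its own libraries instead of sharing the host's.
docker run --rm python:3.8-slim  python --version   # -> Python 3.8.x
docker run --rm python:3.12-slim python --version   # -> Python 3.12.x
```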

[–] [email protected] 22 points 1 day ago* (last edited 1 day ago)

If these were actual VMs, it would be a huge waste of resources; avoiding that is really the point of containers. It's functionally similar to running a separate VM for every application, except you're not actually virtualizing an entire system like you are with a VM. Containers are very lightweight. So much so that if you have 10 apps that all require database backends, it's common practice to just run 10 separate database containers.
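
As a hedged sketch of that practice (the container names and password are placeholders):

```sh
# One isolated Postgres per app; an idle instance only costs a modest
# amount of RAM, so this scales fine even on small hardware.
docker run -d --name app1-db -e POSTGRES_PASSWORD=changeme postgres:16
docker run -d --name app2-db -e POSTGRES_PASSWORD=changeme postgres:16
docker stats --no-stream app1-db app2-db   # compare actual usage
```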

[–] [email protected] 7 points 21 hours ago

The main "wasted" resources here is storage space and maybe a bit of RAM, actual runtime overhead is very limited. It turns out, storage and RAM are some of the cheapest resources on a machine, and you probably won't notice the extra storage or RAM usage.

VMs are heavy, Docker containers are very light. You get most of the benefits of a VM with containers, without paying as high of a resource cost.

[–] [email protected] 11 points 1 day ago

Docker has very little overhead

[–] [email protected] 14 points 1 day ago

On the contrary. It relies on the premise of segregating binaries, config, and data, and since a container only runs one app, it only needs a bare-minimum version of its environment. Most container systems also deduplicate commonly required binaries, so containers are usually very small and efficient. A traditional system's libraries, by contrast, can balloon to dozens of gigabytes, of which each piece of software only uses a fraction at a time. Containers can easily be made headless and barebones, cutting the fat and leaving only the most essential libraries, which lets them fit on very tiny, underpowered hardware without losing functionality or performance.
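
You can check the deduplication claim yourself; a quick sketch (the image name is just an example):

```sh
# Images built on the same base share its layers; Docker stores each
# layer once on disk no matter how many images reference it.
docker pull python:3.12-slim
docker history python:3.12-slim   # the stack of layers in the image
docker system df                  # disk usage, shared layers counted once
```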

Don't be afraid of it, it's like Lego but for software.

[–] [email protected] 6 points 1 day ago

No, because Docker is not actually a VM.

[–] [email protected] 5 points 1 day ago (1 children)

I've had Immich running in a VM as a snap distribution for almost a year now, and the experience has been leaps and bounds easier than maintaining my own Immich Docker container. There were so many breaking changes over the years I ran it that way that it was just a headache. This snap version has been 100% hands-off; it just works.

https://snapcraft.io/immich-distribution
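
For reference, a minimal sketch of that setup, assuming the snap name matches the page linked above:

```sh
# Install the community Immich snap; snapd keeps it running and
# auto-refreshes it in the background with no manual steps.
sudo snap install immich-distribution
snap services immich-distribution   # confirm the services are active
```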

[–] [email protected] 1 points 1 day ago (2 children)

Interesting idea (snap over docker).

I wonder, does using snap still give you the benefit of not having to maintain specific versions of 3rd party software?

[–] [email protected] 5 points 1 day ago

I don't know too much about snap (I literally haven't had to touch my Immich setup), but as far as I remember from when I set it up, that was snap's whole thing: it maintains and updates itself with minimal administrative oversight.

[–] Colloidal 3 points 23 hours ago (1 children)

Snap is like Flatpak: it will store and maintain as many versions of dependencies as your applications need, so it gives you that benefit by automating the work for you. The multiple versions still exist on disk if your apps depend on different versions.

[–] [email protected] 4 points 20 hours ago (1 children)

Thanks.

Now to see if there's a Flatpak, because fuck snap.

[–] [email protected] 2 points 1 day ago

Beat me to it.