What is Docker? (lemmy.world)
submitted 23 hours ago* (last edited 22 hours ago) by [email protected] to c/[email protected]
 

Hi! I'm new to self-hosting. Currently I am running a Jellyfin server on an old laptop. I am very curious to host other things in the future, like Immich or other services. I see a lot of mention of a program called Docker.

Searching for this on the internet, I am still not very clear on what it does.

Could someone explain this to me like I'm stupid? What does it do, and why would I need it?

Also, what are other services that might be interesting to self-host in the future?

Many thanks!

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and have concluded that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

[–] [email protected] 2 points 1 hour ago

It's the platform that runs all of your services in containers. This means they are separated from your system.

Also, what are other services that might be interesting to self-host in the future?

Nextcloud, the Arr stack, your future apps, etc.
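
Since you already run Jellyfin, here's roughly what that looks like as a container (just a sketch; the host paths are placeholders you'd swap for wherever your config and media actually live):

```
# Run the jellyfin/jellyfin image from Docker Hub; the host paths on the
# left of each colon are placeholders for your own config and media dirs.
docker run -d \
  --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media \
  --restart unless-stopped \
  jellyfin/jellyfin
```

Everything the service needs lives inside the container; only the directories you mount with -v touch your actual system.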

[–] [email protected] 1 points 1 hour ago

Now compare Docker vs LXC vs Chroot vs Jails and the performance and security differences. I feel a lot of people here are biased without knowing the differences (pros and cons).

[–] CodeBlooded 9 points 3 hours ago

Docker enables you to create instances of an operating system running within a “container” which doesn’t access the host computer unless it is explicitly requested. This is done using a Dockerfile, which is a file that describes in detail all of the settings and parameters for said instance of the operating system. This might be packages to install ahead of time, or commands to create users, compile code, execute code, and more.

This instance of an operating system, usually a “server,” is great because you can throw the server away at any time and rebuild it with practically zero effort. It will be just like new. There are many reasons to want to do that; who doesn’t love a fresh install with the bare necessities?

On the surface (and the rabbit hole is deep!), Docker enables you to create an easily repeated formula for building a server so that you don’t get emotionally attached to a server.
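
A minimal sketch of what such a Dockerfile can look like (the package and the app.py script here are purely illustrative, not any particular project):

```
# Start from a known base image
FROM debian:bookworm-slim

# Install packages ahead of time
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 && \
    rm -rf /var/lib/apt/lists/*

# Create an unprivileged user and copy the application in
RUN useradd --create-home app
COPY app.py /home/app/app.py
USER app

# Command that runs when the container starts
CMD ["python3", "/home/app/app.py"]
```

Build it with `docker build -t myapp .` and you get the same server every single time, no matter whose machine it's built on.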

[–] [email protected] 9 points 8 hours ago (1 children)
[–] [email protected] 2 points 3 hours ago

+1 for Techworld with Nana

[–] [email protected] 9 points 9 hours ago* (last edited 9 hours ago) (1 children)

This thread:

Jails make Docker look like Windows 11 with Copilot.

[–] [email protected] 3 points 4 hours ago

Those are apples and oranges.

[–] [email protected] 16 points 13 hours ago

EDIT: Wow! Thanks for all the detailed and super quick replies! I've been reading all the comments here and have concluded that (even though I am currently running only one service) it might be interesting to start using Docker to run all (future) services separately on the server!

This is pretty much what I've started doing. Containers have the wonderful benefit that if you don't like it, you just delete it. If you install on bare metal (at least in Linux) you can end up with a lot of extra packages getting installed and configured that could affect your system in the future. With containers, all those specific extras are bundled together and removed at the same time without having any effect on your base system, so you're always at your clean OS install.

I will also add one irritation with Docker containers: if you create something in a container that isn't kept in a shared volume, it gets destroyed when the container is recreated. The container you use keeps the maintainer's setup. For instance, I do occasional encoding of videos in a HandBrake container, but I can't save any profiles I make within that container, because they get wiped the next time the container is recreated; they're part of the container, not on any shared volume.
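
The workaround would be to mount the directory those profiles live in as a volume so they survive recreation. A sketch, where both the image name and the /config path are assumptions you'd check against whatever HandBrake image you actually use:

```
# Keep the container's config directory on the host so presets/profiles
# persist when the container is recreated. Image name and the /config
# path are placeholders.
docker run -d \
  --name handbrake \
  -v /srv/handbrake/config:/config \
  some-handbrake-image
```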

[–] [email protected] 9 points 13 hours ago

I don't think I really understood Docker until I watched this video, which takes you through building up a Docker-like container system from scratch. It's very understandable and easy to follow if you have a basic understanding of Linux operating systems. I recommend it to anyone I know working with Docker:

https://youtu.be/8fi7uSYlOdc

Alternative Invidious link: https://yewtu.be/watch?v=8fi7uSYlOdc

[–] [email protected] 70 points 20 hours ago (20 children)

A program isn't just a program: in order to work properly, the context in which it runs — system libraries, configuration files, other programs it might need to help it such as databases or web servers, etc. — needs to be correct. Getting that stuff figured out well enough that end users can easily get it working on random different Linux distributions with arbitrary other software installed is hard, so developers eventually resorted to getting it working on their one (virtual) machine and then just (virtually) shipping that whole machine.

[–] [email protected] 3 points 11 hours ago (3 children)

But why can I "just install a program" on my Windows machine or on my phone, and it's that easy?

[–] [email protected] 5 points 8 hours ago (1 children)

You might notice that your Windows installation is like 30 gigabytes and there is a huge folder somewhere in the system path called WinSxS. Microsoft bends over backwards to provide you with basically all the versions of all the shared libs ever, resulting in a system that can run programs compiled decades ago just fine.

In Linux-land usually we just recompile all of the software from source. Sometimes it breaks because glibc changed something, or sometimes (extremely rarely) because the kernel broke something. Linus considers breaking the userspace API one of the biggest no-nos in kernel development.

Even so, depending on what you're doing you can have a really old binary run on your Linux computer if the conditions are right. Windows just makes that surface area of "conditions being right" much larger.

As for your phone, all the apps that get built and run for it must target some specific API version (the amount of stuff you're allowed to do is much more constrained). Android and iOS both basically provide compatibility for that stuff in a similar way to Windows, but the story is much less chaotic than on Linux and Windows (and even macOS), because a phone app isn't allowed to do that much by comparison.

[–] [email protected] 1 points 8 hours ago

In Linux-land usually we just recompile all of the software from source

That's just incorrect. Apart from 3 guys who have nothing better to do, no one in "Linux-land" does that.

[–] [email protected] 1 points 8 hours ago

Caveat: I am not a programmer, just an enthusiast. Windows programs typically package all of the dependency libraries up with each individual program in the form of DLLs (dynamic link libraries). If two programs both require the same dependency, they each just have a local copy in their own directory.

[–] [email protected] 1 points 8 hours ago* (last edited 8 hours ago)

In the case of phones, there's less of a myriad of operating systems and libraries.

A typical Android app is (eventually) Java with some bundled dependencies and ties into known system endpoints (for stuff like notifications and rendering graphics).

For Windows, these installers are usually responsible for getting the dependencies, which is why some installers are enormous (and many installers of that size are web installers, so they only look smaller).

Docker is aimed more at developers and server deployment; you don't usually use Docker for desktop applications. That is the area where you want to avoid inconsistencies between environments, especially the ones that are hard to debug.

[–] [email protected] 8 points 18 hours ago (2 children)

Docker is not a virtual machine; it's a fancy wrapper around chroot.

[–] [email protected] 4 points 9 hours ago

I'm aware of that, but OP requested "explain like I'm stupid" so I omitted that detail.

[–] [email protected] 10 points 17 hours ago (2 children)

No, chroot is kind of its own thing

It is just a kernel namespace

[–] [email protected] 2 points 14 hours ago

Yes, technically chroot and jails are wrappers around kernel namespaces / cgroups, and so is Docker.

But containers were born in a post-chroot era as an attempt at making the same functionality much more user-friendly, focused more on bundling cgroups and namespaces into a single superset, where chroot on its own is only namespaces. This is super visible in early Docker, where you could not individually dial those settings. It's still a useful way to explain containers in general, in the sense that comparing two similar things helps you define both of them.

Also, cgroups have evolved alongside containers and work rather differently now compared to 18 years ago, when cgroups were invented and this differentiation mattered more. We're at the point where differentiating between VMs and containers is getting really hard, since both more and more often rely on the same kernel features developed in recent years on top of cgroups.
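
As a concrete illustration of that bundling: the resource limits you pass to docker run end up as cgroup settings, while the isolation (own PID space, own mounts, own network) comes from namespaces. A sketch, with arbitrary limits:

```
# --memory, --cpus and --pids-limit are enforced via cgroups;
# the separate process/mount/network view comes from namespaces.
docker run --rm \
  --memory 256m \
  --cpus 0.5 \
  --pids-limit 100 \
  alpine sh -c 'cat /proc/1/cgroup'
```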

[–] [email protected] 74 points 22 hours ago (1 children)

Please don't call yourself stupid. The common internet slang for that is ELI5 or "explain [it] like I'm 5 [years old]".

I'll also try to explain it:

Docker is a way to run a program on your machine, but in a way that the developer of the program can control.
It's called containerization, and the developer can make a package (or container) with an operating system and all the software they need, and ship that directly to you.

You then need software like Docker (or Podman, etc.) to run this container.

Another advantage of containerization is that all changes stay inside the container except for directories you explicitly want to add to the container (called volumes).
This way the software can't destroy your system and you can't accidentally destroy the software inside the container.
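
In practice a volume is just a host directory (or a named volume) mapped into the container, for example (the paths and names here are placeholders):

```
# Whatever the service writes to /data inside the container actually
# lands in /srv/myservice/data on the host and outlives the container.
docker run -d \
  --name myservice \
  -v /srv/myservice/data:/data \
  some-image
```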

[–] [email protected] 18 points 22 hours ago (1 children)

It's basically like a tiny virtual machine running locally.

[–] [email protected] 28 points 18 hours ago (1 children)

I know it's ELI5, but this is a common misconception and will lead you astray. They do not have the same level of isolation, and they have very different purposes.

For example, containers are disposable cattle. You don't back up containers. You back up volumes and configuration, but not containers.

Containers share the kernel with the host, so your container needs to be compatible with the host (though most dependencies are packaged with images).

For self hosting maybe the difference doesn't matter much, but there is a difference.
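
For example, a common way to back up a named volume is to mount it into a throwaway container and tar it out to the host (a sketch; "mydata" is a placeholder volume name):

```
# Archive the contents of the named volume "mydata" into the current
# host directory, using a disposable Alpine container.
docker run --rm \
  -v mydata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/mydata-backup.tar.gz -C /data .
```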

[–] [email protected] 10 points 14 hours ago (1 children)

A million times this. A major difference between the way most VMs are run and most containers are run is:

VMs write to their own internal disk; containers should be immutable and not be able to write to their internal filesystem.

You can have 100 identical containers running and, if you are using your filesystem correctly, only one copy of that container image is on your hard drive. You can have two nearly identical containers running, and then only a small amount of the second container image (another layer) takes up extra disk space.

Similarly, containers and VMs use memory and CPU allocations differently, and they run with extremely different security and networking scopes, but that requires even more explanation and is less relevant to self-hosting unless you are trying to learn this to eventually get a job in it.
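
You can see the layer sharing yourself (assuming Docker is installed; jellyfin/jellyfin is just an example, use whatever image you already have pulled):

```
# Each line is one layer of the image; containers started from it share
# these layers and only add a thin writable layer of their own.
docker history jellyfin/jellyfin

# Shows how much disk space images, containers and volumes use,
# and how much of it is shared or reclaimable.
docker system df
```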

[–] [email protected] 1 points 5 hours ago (1 children)

containers should be immutable and not be able to write to their internal filesystem

This doesn't jibe with my understanding. Containers cannot write to the image. The image is immutable. However, a running container can write to its filesystem, but those changes are ephemeral and will disappear if the container stops.

[–] [email protected] 1 points 4 hours ago

This is why I said containers should be immutable. It's bad practice to write to the inside of the container, and better practice to treat them as immutable. You can go as far as actively preventing them from writing to themselves when you build them or in certain container runtimes, but this is not usually how they work by default.

Also, a container that is stopped and restarted will not lose its internal changes in most runtimes. The container needs to be deleted and recreated from the image for that to happen.
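
If you do want to enforce it, Docker can run a container with a read-only root filesystem (a sketch; the tmpfs is there because many programs still expect a writable /tmp):

```
# Root filesystem is read-only; only /tmp (a tmpfs) and any mounted
# volumes are writable, so nothing accumulates inside the container.
docker run --rm --read-only --tmpfs /tmp \
  alpine sh -c 'touch /etc/test || echo "read-only, as expected"'
```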

[–] [email protected] 9 points 18 hours ago

Docker is a set of tools that make it easier to work with some features of the Linux kernel. These kernel features allow several degrees of separating different processes from each other. For example, by default each Docker container you run will see its own file system, unable to interact with (read: mess with) the original file system on the host or with other Docker containers. Each Docker container is in the end a single executable with all its dependencies bundled in an archive file, plus some Docker-related metadata.
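
You can see that separation directly: the root filesystem inside a container comes from the image, not from the host (assuming Docker is installed and can pull the alpine image):

```
# The listing shows Alpine's root filesystem, not your host's.
docker run --rm alpine ls /
```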

[–] [email protected] 27 points 23 hours ago

It’s a container service. Containers are similar to virtual machines but less separate from the host system. Docker excels in creating reproducible self contained environments for your applications. It’s not the simplest solution out there but once you understand the basics it is a very powerful tool for system reliability.

[–] [email protected] 7 points 18 hours ago (5 children)

I've never posted on Lemmy before. I tried to ask this question of the greater community but I had to pick a community and didn't know which one. This shows up as lemmy.world but that wasn't an option.

Anyway, what I wanted to know is: why do people self-host? What is the advantage/cost? Sorry if I'm hijacking. Maybe someone could just post a link or something.

[–] [email protected] 11 points 14 hours ago

People are talking about privacy, but the big reason is that it gives you, the owner, control over everything, quickly and without ads or other unneeded stuff. We are so used to apps being optimized for revenue and not being interoperable with other services that it's easy to forget the single biggest advantage of computers, which is that programs and apps can work together quickly and quietly and in the background. Companies provide products; self-hosting provides tools.

[–] [email protected] 18 points 17 hours ago* (last edited 17 hours ago)

It usually comes down to privacy and independence from big tech, but there are a ton of other reasons you might want to do it. Here are some more:

  • preservation - no longer have to care if Google kills another service
  • cost - over time, Jellyfin could be cheaper than a Netflix sub
  • speed - copying data on your network is faster than to the internet
  • hobby - DIY is fun for a lot of people

For me, it's a mix of several reasons.

[–] [email protected] 10 points 17 hours ago (1 children)

Anyway, what I wanted to know is: why do people self-host?

Wow. That's a whole separate thread on its own. I self-host a lot of my services because I am a staunch privacy advocate, and I really have a problem with corporations using my data to further bolster their profit margins without giving me due compensation. I also self-host because I love to tinker and learn. The learning aspect is something I really get into. At my age it is good to keep the brain active, and so I self-host, create bonsai, garden, etc. I've always been into technology, from the early days of thumbing through Pop Sci and Pop Mech magazines, which evolved into thumbing through Byte mags.

[–] [email protected] 4 points 14 hours ago

Anyway, what I wanted to know is: why do people self-host?

For the warm and fuzzy feeling I get when I know all my documents, notes, calendars, contacts, passwords, movies/shows/music, videos, pictures and much more are stored safely in my basement and belong to me.

Nobody is training their AI on them, nobody is trying to use them for targeted ads, nobody is selling them. They're just for me.
