Docker

I noticed docker compose is now telling me I can set COMPOSE_BAKE=true for "better performance".

Does anyone have any experience with this? Is it worth it? I get suspicious when a program tells me "just use this, it has better performance" and yet it isn't the default.
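
For context: COMPOSE_BAKE=true just makes docker compose delegate image builds to buildx bake instead of its own build path, so it only affects builds, not runtime. A low-commitment way to try it (a sketch; nothing here is destructive):

COMPOSE_BAKE=true docker compose build   # opt in for a single build and compare timings

echo 'COMPOSE_BAKE=true' >> .env         # persist it per-project once you're happy; compose reads .env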

submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/docker

I want to be sure the torrent traffic of my transmission docker instance goes through my VPN.

I have different interfaces with different VLANs on the host, and I want to be sure the container created with docker compose uses only a specific interface. The interface on the correct VLAN has IP 192.168.90.92.

I have tested host connectivity with curl --interface ethX https://api.ipify.org/ and it works fine, in that each interface reports a different public IP.

I have tried with the following on the docker compose file:

ports:
  - 9091:9091                       # Web UI port
  - 192.168.90.92:51413:51413       # Torrent port (TCP)
  - 192.168.90.92:51413:51413/udp   # Torrent port (UDP)

However, the traffic is still coming from the default gateway.

Any idea?

Thanks!
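
For reference, publishing a port on a host IP (as above) only controls where inbound connections are accepted; outbound traffic still follows the host's routing table, which is why it leaves via the default gateway. One way people pin a container to a specific interface is a macvlan network, sketched below; the parent interface, subnet, gateway, and container address are all assumptions to adapt:

networks:
  vlan90:
    driver: macvlan
    driver_opts:
      parent: eth1                      # assumption: the host NIC/VLAN interface on 192.168.90.0/24
    ipam:
      config:
        - subnet: 192.168.90.0/24
          gateway: 192.168.90.1         # assumption: the VLAN's router

services:
  transmission:
    # ...existing transmission settings...
    networks:
      vlan90:
        ipv4_address: 192.168.90.93     # assumption: a free address on that VLAN

With macvlan the container gets its own address on the VLAN, so all of its traffic, not just the published ports, uses that interface.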


Over the past week I've been dealing with the Kinsing malware, which got in via Docker on my VPS. I've been reading up on it, and I've come to realize I'd been thinking about Docker all wrong in the way I was using it.

I enjoy using Portainer, so that's a must for me. I know Docker lets you secure access to the Docker socket via contexts: docker context create vps --docker "host=ssh://user@vps".

I would like to use this method via Portainer (running locally) to connect to Docker on the remote host over SSH. Does anyone know of a way to do this? I've been looking around and haven't found much.
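
As far as I know, Portainer talks to remote environments over the Docker API (TCP or its agent) rather than SSH contexts, so one workaround is to forward the remote socket over SSH yourself and register the forwarded endpoint. A sketch, assuming key-based SSH auth is already working:

# OpenSSH can forward a local TCP port to a remote unix socket
ssh -nNT -L 127.0.0.1:2375:/var/run/docker.sock user@vps &

# then in Portainer, add an environment via the Docker API endpoint tcp://127.0.0.1:2375

If Portainer itself runs in a container, it needs to be able to reach the tunnel on the host; and keeping the tunnel alive across reboots (autossh, a systemd unit) is the part you'd still need to script.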


I recently asked the best way to run my Lemmy bot on my Synology NAS and most people suggested Docker.

I'm currently trying to get it running on my machine in Docker before transferring it over there, but am running into trouble.

Currently, to run locally, I navigate to the folder and type npm start. That executes tsx src/main.ts.

The first thing main.ts does is check argv for a third argument, dev; if it's present, it loads .env.development, otherwise .env, which contain the environment variables. It puts those variables into a local variable that I then pass around the bot. I'm definitely not tied to this approach if there's a better-practice way of doing it.

The opening lines of main.ts:

import { config } from 'dotenv';

let path: string;

const env = process.argv[2];
if (env && env === 'dev') {
    path = '.env.development';
} else {
    path = '.env';
}

config({
    override: true,
    path
});

const {
    ENVIROMENT_VARIABLE_1
} = process.env as Record<string, string>;

Ideally, I would like a way to create a Docker image and then run it with either the .env.development variables or the .env ones... maybe even a completely separate set I decide to create after the fact.

Right now, I can't even run it. When I type docker-compose up I get npm start: not found.

My Dockerfile

FROM node:22
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# copy sources with ownership set for the unprivileged user before switching to it
COPY --chown=node:node . .
USER node
# exec form; the original CMD "npm start" made the shell look for a single
# binary literally named "npm start", hence "npm start: not found"
CMD ["npm", "start"]

My compose.yaml

services:
  node:
    build: .
    image: an-image-name:latest
    environment:
      # list-style entries must not have spaces around "="
      - ENVIROMENT_VARIABLE_1=${ENVIROMENT_VARIABLE_1}

I assume the current problem is something to do with where stuff is being copied to and what the workdir is, but I don't know precisely how to address it.

And once that's resolved, I have even less idea how to go about passing through the environment variables.

Any help would be much appreciated.
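
On the env file question: compose can inject either file at run time with env_file plus variable interpolation, so the image stays environment-agnostic and dotenv isn't even needed inside the container. A sketch; ENV_FILE is a made-up variable name here:

services:
  node:
    build: .
    image: an-image-name:latest
    env_file:
      - ${ENV_FILE:-.env}   # defaults to .env when ENV_FILE is unset

Then ENV_FILE=.env.development docker compose up runs the dev variables, plain docker compose up runs the defaults, and any other file you create later works the same way.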


Hi guys, I have no problem running Docker containers via the CLI, but I thought it would be nice to try Docker Desktop on my Ubuntu machine. As soon as I start Docker Desktop, it sits on "starting Docker engine" indefinitely until my drive is full; the .docker folder is then about 70 GB. I read somewhere that this is the virtual disk being created for the Desktop VM, and that I could change its size in the settings, but those are blocked until the engine finishes starting (which it never does). Has anyone else experienced this?
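
For anyone hitting the same wall: on Linux the Desktop engine runs inside a VM whose settings live in a JSON file under ~/.docker/desktop, which can be edited while Desktop is stopped. Treat the exact file name and key as assumptions about current builds:

# stop Docker Desktop (the documented way on Linux)
systemctl --user stop docker-desktop

# assumption: lower the VM disk cap by editing "diskSizeMiB" in this file
nano ~/.docker/desktop/settings.json

systemctl --user start docker-desktop

That at least sidesteps the chicken-and-egg of the settings UI being blocked while the engine is starting.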

submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/docker

I installed Ollama to use AI locally on my computer, and now I want to use Open WebUI. That needs to run in Docker, so I did that, and it hosts a page that is the GUI for Open WebUI. It's working, but I have this problem: https://github.com/open-webui/open-webui/discussions/4376

So I pasted this command as they suggest:

docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main

But it returned this error: docker: Error response from daemon: Conflict. The container name "/open-webui" is already in use by container "1cbc8ac3b80f2a6921778964f94eff32541a4540ee6ab5d3335427a0fc8366a8". You have to remove (or rename) that container to be able to reuse that name. See 'docker run --help'.

Can anyone help me with this?
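
That error just means the first docker run already created a container named open-webui; the new command can't reuse the name until the old container is gone. A sketch of the usual fix (the data lives in the named volume open-webui, which docker rm does not delete):

# remove the old container, then rerun the docker run command above
docker rm -f open-webui

# or keep the old one around under a different name instead
docker rename open-webui open-webui-old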

submitted 7 months ago* (last edited 7 months ago) by [email protected] to c/docker

I'm experimenting with an i2p and librewolf container setup in Docker Compose. However, the i2p web console front end (127.0.0.1:7657) becomes inaccessible if the container is restarted. This can be remedied by removing the directories that get created by the volume mappings in the compose file, but that is obviously not ideal. Does anyone have experience with this problem? I've seen hints from people online suggesting that the data in those directories somehow gets corrupted; I have not yet investigated that further.

version: "3.5"
services:
  i2p_router:
    image: geti2p/i2p:latest
    environment:
    - JVM_XMX=256m
    volumes:
    # bind mounts to host directories; the named volumes declared at the
    # bottom of the file are never actually used by either service
    - ./i2phome:/i2p/.i2p
    - ./i2ptorrents:/i2psnark
    ports:
    - 4444:4444
    - 6668:6668
    - 7657:7657
    - 9001:12345
    - 9002:12345/udp

  libre_wolf:
    image: linuxserver/librewolf
    ports:
    - 9300:3000
    - 9301:3001

# declared but unused (see the bind mounts above)
volumes:
  i2phome:
  i2ptorrents:
# declared but not attached to either service
networks:
  frontend:
    driver: bridge
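
If the intent was to actually use the named volumes declared at the bottom (which Docker manages itself and which sidestep host-directory permission drift between restarts), the service mappings would change like this; a sketch, not a confirmed fix for the console issue:

  i2p_router:
    volumes:
    - i2phome:/i2p/.i2p
    - i2ptorrents:/i2psnark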

I have a docker compose file with a bind volume. It basically mounts /media/user/drive/media to the container's /mnt.

It works as expected when /media/user/drive/ is mounted and its media folder has the files I want the container to see.

However, as it's a network drive, the container usually tries to start before the drive is mounted, and it would throw an error that /media/user/drive/media doesn't exist. So, while the drive was not mounted, I created an empty folder called media in /media/user/drive, so that the container at least starts with /mnt empty until the network drive gets mounted and all the files appear at /media/user/drive/media.

To my surprise, when the drive gets mounted, the container still sees /mnt as empty, even though ls /media/user/drive/media on the host lists the drive contents correctly.

How would I go about getting the drive files inside the docker container when it automatically starts?
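
The likely culprit is mount propagation: a bind mount defaults to private propagation, so a filesystem the host mounts under the source after the container has started never becomes visible inside it. Compose's long volume syntax can opt into slave propagation; a sketch of that one change:

    volumes:
      - type: bind
        source: /media/user/drive/media
        target: /mnt
        bind:
          propagation: rslave   # mounts appearing under the source on the host propagate into the container

Note that rslave needs the host side of the path to be on a shared mount, which is the systemd default on most distros.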


I am hoping that you awesome people can help me with something I've noticed in my Plex logs. Quick notes on my set up:

Mini PC running Ubuntu 22.04, with Portainer, Plex, the arrs, and Calibre all running in Docker. All of these except Plex use a bridge network that I created in Portainer. The PC is connected to the router by ethernet cable, and I have set up a static IP in the router settings. I have also added the static IP info to the network settings in Ubuntu.

The following text is repeated over and over in the Plex Media Server log, about 6 seconds apart. My playback is mostly fine, but I have been experiencing buffering. Regardless, this can't be right!

n.b. I did post elsewhere but I feel that this is not necessarily Plex related and you can likely help with this more technical question.

DEBUG - NetworkInterface: received Netlink message len=88, type=RTM_DELADDR, flags=0x0
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - NetworkInterface: Netlink address message family=2, index=3, flags=0x0
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - Network change.
Aug 22, 2024 18:51:49.636 [139016208919352] DEBUG - NetworkInterface: Notified of network changed (force=0)
Aug 22, 2024 18:51:49.637 [139016208919352] DEBUG - Network change notification but nothing changed.
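
Those RTM_DELADDR netlink messages mean the kernel is telling Plex an IP address was removed from some interface, and Docker creating and destroying bridge/veth interfaces is a common source of that churn. To see what is actually flapping, it may help to watch netlink from the host while the log entries appear:

# prints a line each time an address is added or removed on any interface
ip monitor address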


I have readarr and all the other arrs working in Ubuntu with Docker and Portainer. I followed the TRaSH guides and LinuxServer.io guides to get me this far. I want to expand my book library, so I have added calibre.

After having calibre import my book library, I went to readarr to delete the root, and re-add it with the new path to the calibre library. I am having problems with the Calibre Settings on the Add Root page.

The calibre server is listening at 172.18.0.2, port 8081, HTTP. I have created a user account on the calibre "sharing over the net" page. In readarr, I have set the Calibre Host to 172.18.0.2 and the Calibre Port to 8081. When I click save, I get the error Unknown exception: Http request timed out.

Most of the guides I have found are 3 or 4 years old. One guide set the Calibre Host to calibre; that doesn't work. Setting the host to the IP of my server doesn't work either.

Can anyone help? I don't know if I have a permissions or firewall problem, or if I am just doing something wrong. The calibre logs are not showing any issues. I have copied the .yaml files used below.

services:
  calibre:
    image: lscr.io/linuxserver/calibre:latest
    container_name: calibre
    security_opt:
      - seccomp:unconfined
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - CLI_ARGS= #optional
    volumes:
      - /data/calibre:/config
      - /data/Media/calibre:/library
      - /data/Media/books:/upload
    ports:
      - 8080:8080
      - 8081:8081
    restart: unless-stopped

services:
  readarr:
    image: lscr.io/linuxserver/readarr:develop
    container_name: readarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - /data/readarr:/config
      - /data/Media/calibre:/library
      - /data/Media/downloads:/downloads
    ports:
      - 8787:8787
    restart: unless-stopped
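
One thing that stands out: these are two separate compose files, so each project gets its own default bridge network, and readarr may simply have no route to calibre's 172.18.0.2 (an address that can also change whenever the container is recreated). A sketch of joining both to one shared network so the Calibre Host can be the stable name calibre (the network name arrstack is my invention):

# once on the host:
#   docker network create arrstack
# then in BOTH compose files:
services:
  calibre:              # and likewise readarr in the other file
    # ...existing settings...
    networks:
      - arrstack

networks:
  arrstack:
    external: true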

Hi everyone !

Intro

It's been a long ride since I started my first docker container three years ago. I've learned a lot along the way: building my own custom image with a Dockerfile, loading my own configuration files into the container, getting along with docker-compose, traefik, and YAML syntax... and on and on!

However, while tinkering with vaultwarden's config and changing over to PostgreSQL, there's something that's really bugging me...

Questions


  • How do you/devs choose which database to use for your/their application? Are there any specific things to take into account before choosing one over another?

  • Does consistency in database containers make sense? I mean, changing all my containers to ONLY postgres (or MariaDB, whatever)?

  • Does it make sense to update the database image regularly? Or is the application bound to a specific version and will break after any update?

  • Can I switch from one to another even if the devs chose to use e.g. MariaDB? Or is it baked/hardcoded into the application image, so that switching to another database requires extra programming skills?

Maybe not directly related to databases but that one is also bugging me for some time now:

  • What's redis's role in all of this? I can't for the life of me understand what it does and how it's linked between the application and the database. I know it's supposed to give faster access to resources, but if I remember correctly, while playing around with Nextcloud, the redis container logs were dead silent; it seemed very "useless" or inactive from my perspective. I'm always wondering: "Hmm, redis... what are you doing here?" (see the sketch below)
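
To make the redis question concrete, here's a minimal sketch of the usual shape (image names and credentials are placeholders): the application keeps durable state in the database and uses redis for ephemeral fast-path data such as sessions, caches, and locks, which is why a lightly loaded redis can sit there with dead-silent logs.

services:
  app:
    image: example/app:latest                              # hypothetical application image
    environment:
      - DATABASE_URL=postgres://app:secret@db:5432/app     # durable state
      - REDIS_URL=redis://cache:6379                       # sessions / cache / locks
  db:
    image: postgres:16
    environment:
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=app
    volumes:
      - dbdata:/var/lib/postgresql/data
  cache:
    image: redis:7        # deliberately no volume: losing the cache only means a cold start

volumes:
  dbdata: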

Thanks :)

submitted 8 months ago* (last edited 8 months ago) by [email protected] to c/docker

Edit - marking as solved.

  • Remote path: /home/seedit4me
  • local path: /data

This is now working; I don't know why it wasn't before.

I have followed the docs and have the recommended folder structures for my Plex and arrs setup.

sonarr has a volume set as /data, which gives it access to e.g. /data/usenet/downloads. This is working fine with SABnzbd.

I am using a seedbox for torrents. Looking at ruTorrent on the seedbox, I can see that the local download folder there is set to: /home/seedit4me/torrents/rtorrent

sonarr is reporting "No files found are eligible for import in:

  • /home/seedit4me/torrents/rtorrent/Completed/tv-sonarr/filename.mkv

I have set a remote path in the download clients page in sonarr as follows:

  • Host - ****.seedit4.me
  • Remote path: /home/seedit4me
  • local path: /data

I have ftp'd the mkv file to the actual folder structure:

  • /data/torrents/rtorrent/Completed/tv-sonaar/filename.mkv

The permissions on this file are:

  • -rw-rw-r--

the folder permissions are:

  • drwxrwxr-x 2 myacct myacct 4096 Aug 2 11:41 .
  • drwxrwxr-x 3 myacct myacct

My uid is 1000 (my acct), and the same for gid. I have set these as the PUID and PGID env variables in sonarr.

The log file in sonarr is reporting: |Error|DownloadedEpisodesImportService|Import failed, path does not exist or is not accessible by Sonarr: /home/(removed)/torrents/rtorrent/Completed/tv-sonarr/filename.mkv

Seeing this, I tried mapping /home/(removed)/ to /data/ but that doesn't work either.

Can anyone guide me on what I am doing wrong? I feel like I've checked everything, so I can't understand the issue at all.
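
One quick check that can rule out the mapping itself (assuming the container is named sonarr): the remote path mapping only rewrites the path reported by ruTorrent, and the rewritten result must exist inside the container, so list it from in there:

# /home/seedit4me/... should rewrite to /data/..., and this path must be visible in-container
docker exec sonarr ls -l /data/torrents/rtorrent/Completed/tv-sonarr/

If that ls fails while the same path works on the host, the problem is the container's volume rather than the mapping.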

submitted 8 months ago* (last edited 8 months ago) by alexdeathway to c/docker

I am working on this django docker project template with this certbot setup. Dockerfile:

FROM certbot/certbot:v1.27.0

COPY certify-init.sh /opt/
RUN chmod +x /opt/certify-init.sh

ENTRYPOINT ["/opt/certify-init.sh"]

The entrypoint script:

#!/bin/sh

set -e

echo "Getting certificate..."

certbot certonly \
    --webroot \
    --webroot-path "/vol/www/" \
    -d "$DOMAIN" \
    --email "$EMAIL" \
    --rsa-key-size 4096 \
    --agree-tos \
    --noninteractive

# note: with `set -e` above, a certbot failure aborts the script before this
# check runs, so this block is effectively unreachable
if [ $? -ne 0 ]; then
    echo "Certbot encountered an error. Exiting."
    exit 1
fi

# enable HTTPS in nginx if the certificate was obtained
if [ -f "/etc/letsencrypt/live/${DOMAIN}/fullchain.pem" ]; then
    echo "SSL cert exists, enabling HTTPS..."
    envsubst '${DOMAIN}' < /etc/nginx/nginx.prod.conf > /etc/nginx/conf.d/default.conf
    echo "Reloading Nginx configuration..."
    nginx -s reload
else
    echo "Certbot unable to get SSL cert, serving HTTP only..."
fi


echo "Setting up auto-renewal..."
apk add --no-cache dcron
echo "0 12 * * * /usr/bin/certbot renew --quiet" | crontab -
crond -b

The problem with this setup is that certbot exits after the initial run of getting the certificate, and at renewal time it requires manual intervention.

Now there are two options:

  1. Set restart: unless-stopped in the docker compose file so it keeps restarting the container, with the cron job renewing the certificate when required.

  2. Set a cron job on the host machine to restart the container.

Are there any other/more options to tackle this situation?
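
A third pattern, used by the common nginx + certbot compose setups, keeps the certbot container alive with a renew loop instead of cron, so nothing ever needs restarting. A sketch for the compose file (note the $$ so compose doesn't interpolate; certbot renew is a no-op until certificates approach expiry):

  certbot:
    build: ./certbot
    restart: unless-stopped
    entrypoint: /bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done'

You would still need something (a shared volume plus a reload loop, or a signal) to make nginx pick up the renewed certificate.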


cross-posted from: https://lazysoci.al/post/15099881

A surprise Docker update!


I want to run Tomcat in Docker and hot-deploy my Java stuff, as was requested here 11 years ago: https://stackoverflow.com/questions/31246526/how-to-hot-deploy-java-ee-applications-in-docker-containers and as was recently implemented by IntelliJ.

Any directions?
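
One low-tech direction, sketched below with assumed paths: bind-mount the build output into Tomcat's webapps directory. The official image's default server.xml has autoDeploy enabled, so replacing the .war redeploys the app without touching the container:

services:
  tomcat:
    image: tomcat:10.1
    ports:
      - 8080:8080
    volumes:
      # assumption: you copy each freshly built .war into ./deploy on the host;
      # Tomcat's autoDeploy notices the changed file and redeploys
      - ./deploy:/usr/local/tomcat/webapps

For class-level hot swap rather than whole-war redeploys, you'd still want the IDE tooling mentioned above (or something like JRebel) on top.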


I am running Fedora Server with Docker installed, and it has a folder that connects to my NAS via SMB. I will have all of my Docker files (and Compose configs) stored on my NAS, since it has a lot more storage. I am worried that Docker will glitch out and cause a mess, since my NAS comes up about 2 minutes later than my server after a reboot. Is there something I can do to make sure Docker connects to the SMB share safely?
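
On a systemd distro like Fedora, one approach is to make the Docker service wait for the mount instead of racing it. A sketch, assuming the share is mounted at /mnt/nas via fstab with _netdev:

# sudo systemctl edit docker.service, then add this drop-in:
[Unit]
RequiresMountsFor=/mnt/nas

systemd then orders docker.service after the mount unit (and fails it cleanly if the mount fails) rather than letting containers start against an empty directory.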


I have a NAS where I tested to see if some apps could run on it without a server. They overloaded the CPU, so I am now wanting to move them over to a more powerful workstation. I'm used to Compose files/configs, but it seems that my NAS uses plain Docker. Is there a way to extract the configs/long terminal setup commands? I have Portainer installed, if that makes it easier.
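
Everything a container was started with is recorded in its config, so the data is recoverable even without compose files. docker inspect shows the raw form, and there are third-party generators that turn it back into compose YAML; the names below are from the docker-autocompose project, so verify them (and the tool, since it needs your socket) before use:

# raw record of how a container was created (my-container is a placeholder)
docker inspect my-container

# third-party: emits a compose file for the named container(s)
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
    ghcr.io/red5d/docker-autocompose my-container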


cross-posted from: https://lazysoci.al/post/14373858

cross-posted from: https://lazysoci.al/post/14373856

Docker got updated.


cross-posted from: https://lazysoci.al/post/14279205

I built my first image locally and now I'm dancing around my desk to myself in satisfaction. I was anxious AF and so that meant I had a million extra questions along the way and everyone helped me. I'm truly grateful. Thanks for teaching me/holding my hand. I can't put into words my gratitude, but truly, thank you so so much.


cross-posted from: https://lazysoci.al/post/14145485

There's a service that I want to use; however, for reasons, it no longer has any builds available. Consequently, I am thinking of building it myself. How does one go about doing that, and afterwards, how do I get it up on Docker Hub? Can I just create an account and upload?
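
The short answer is yes: you build from the project's Dockerfile and push under your own Docker Hub namespace; a free account is enough for public images. A sketch with placeholder names:

# build from the project's source, tagged as <your-hub-username>/<repo>:<tag>
docker build -t yourname/the-service:1.0 .

# authenticate, then upload; the Hub repository is created on first push
docker login
docker push yourname/the-service:1.0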
