Update in-docker-compose container with new env_file - docker

I have the following docker-compose configuration:
version: '2'
services:
  nginx:
    image: 'nginx:latest'
    expose:
      - '80'
      - '8080'
    container_name: nginx
    ports:
      - '80:80'
      - '8080:8080'
    volumes:
      - '/home/ubuntu/nginx.conf:/etc/nginx/nginx.conf'
    networks:
      - default
    restart: always
  inmates:
    image: 'xxx/inmates:mysql'
    container_name: 'inmates'
    expose:
      - '3000'
    env_file: './inmates.env'
    volumes:
      - inmates_documents_images:/data
      - inmates_logs:/logs.log
    networks:
      - default
    restart: always
  we19:
    image: 'xxx/we19:dev'
    container_name: 'we19'
    expose:
      - '3000'
    env_file: './we19.env'
    volumes:
      - we19_logs:/logs.log
    networks:
      - default
    restart: always
  desktop:
    image: 'xxx/desktop:dev'
    container_name: 'desktop'
    expose:
      - '3000'
    env_file: './desktop.env'
    volumes:
      - desktop_logs:/logs.log
    networks:
      - default
    restart: always
volumes:
  inmates_documents_images:
  inmates_logs:
  desktop_logs:
  we19_logs:
Assume I ran docker-compose up -d --build.
Now the four containers (services) are running.
Now I want to update the ./desktop.env file with new content. Is there any way to recreate only the desktop container with the new env file, or is a full docker-compose restart necessary?
Basically, I'm trying to restart only the desktop container with the new env file while keeping the other three containers up and running, without restarting them.

Extract from docker-compose up --help
[...]
If there are existing containers for a service, and the service's configuration or image was changed after the container's creation, docker-compose up picks up the changes by stopping and recreating the containers (preserving mounted volumes). To prevent Compose from picking up changes, use the --no-recreate flag.
[...]
Usage: up [options] [--scale SERVICE=NUM...] [SERVICE...]
[...]
The following command should do the trick in your case.
docker-compose up -d desktop
If not, see the documentation for other options you can use to meet your exact requirement (e.g. --force-recreate, --renew-anon-volumes, ...)
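For reference, a minimal sketch of the full workflow (container and file names taken from the compose file above):

# 1. Edit the env file
nano desktop.env
# 2. Recreate only the desktop service; Compose should detect the env_file
#    change and leave nginx, inmates and we19 untouched
docker-compose up -d desktop
# 3. If the change is not picked up, force the recreation
docker-compose up -d --force-recreate desktop
# 4. Verify the new environment inside the recreated container
docker exec desktop env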

Related

Trying to connect all my docker containers to a separate MariaDB container

So, I've set up several container apps that use MariaDB as their db backend, using docker-compose.
Containers are set up as needed, and therefore MariaDB gets installed each time in every container that uses the db.
For example, I have some containers (PHPMyAdmin, NGiNX-PM, etc.) that use MariaDB, and they, in turn, have a version of it installed within their container. I also have a separate container (MariaDB) that I would rather share among the other containerized apps, so that I'd only have to maintain one version of the db.
I've searched for a solution, but no luck. Needless to say, I'm a noob at Docker.
The only thing I can come up with is that all the apps would need to be defined in the same docker-compose.yaml file to use the same db? That would make for a very long file if I had many containers running, and I'd prefer to have a directory per app with all of the app's contents in that one location.
I'm sure there is a way, I just haven't been able to figure it out.
This is the setup I've tried, but I'm unable to get it to work:
(/docker/apps/mariadb/mariadb.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # MariaDB (docker-compose -f mariadb.yml up -d)                                             #
  #############################################################################################
  mariadb:
    image: jsurf/rpi-mariadb:latest
    restart: unless-stopped
    environment:
      - TZ=${TIMEZONE}
      - MYSQL_DATABASE=dockerApps
      - MYSQL_USER=root
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - MYSQL_PASSWORD=${MYSQL_PASSWORD}
    volumes:
      - $HOME/docker/apps/mariadb/db:/var/lib/mysql
    expose:
      - '3306'
    networks:
      - NET
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                  #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    depends_on:
      - mariadb
(/docker/apps/phpmyadmin/phpmyadmin.yml)
version: "3.9"
networks:
NET:
external: true
services:
#############################################################################################
# phpMyAdmin (docker-compose up -d -OR- docker-compose -f phpmyadmin.yml up -d) #
#############################################################################################
phpmyadmin:
image: phpmyadmin:latest
container_name: phpMyAdmin
restart: unless-stopped
environment:
PMA_HOST: mariadb
PMA_USER: root
PMA_PASSWORD: ${MYSQL_PASSWORD}
volumes:
# Must add ServerName directive to end of file "ServerName 127.0.0.1"
- $HOME/docker/apps/phpmyadmin/apache2.conf:/etc/apache2/apache2.conf
ports:
- '8004:80'
networks:
- NET
Any help in this matter is greatly appreciated.
OK, so after some more reading and testing, I've found the answer to my issue. I was assuming that depends_on was supposed to connect the containers somehow. Not true!
I found that external_links is the correct way of connecting them.
So, my final docker-compose file looks like this:
(/docker/apps/nginxpm/nginxpm.yml)
version: '3.9'
networks:
  NET:
    external: true
services:
  #############################################################################################
  # NGiNX Proxy Manager (docker-compose -f nginxpm.yml up -d)                                  #
  #############################################################################################
  nginxpm:
    container_name: NGiNX_Proxy_Manager
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - NET
    external_links:
      - mariadb
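One detail worth noting with this setup: both files declare NET as an external network, so the network has to exist before either stack is brought up. A minimal sketch of the startup order (names taken from the files above):

docker network create NET
docker-compose -f mariadb.yml up -d
docker-compose -f nginxpm.yml up -d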

New containers accessing volume on preexisting container

I have a 'master' container that should already be running when all the others start.
In it I have a conf/ directory that this service monitors, applying the relevant changes.
How can I have each new container drop a file into this directory?
Real scenario:
Given my docker-compose.yml below, I want each service (portainer, whoami, apache) to drop a .yml file into the "./traefik/conf/:/etc/traefik/conf/" path mapping of the traefik service.
docker-compose.yml
version: "3.5"
services:
traefik:
image: traefik
env_file: ./traefik/env
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/conf/:/etc/traefik/conf/
- ./traefik/traefik.yml:/etc/traefik/traefik.yml
portainer:
image: portainer/portainer
depends_on: [traefik]
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
whoami:
image: containous/whoami
depends_on: [traefik]
portainer.traefik.yml
http:
  routers:
    portainer:
      entryPoints: [http]
      middlewares: [redirect-to-http]
      service: portainer-preauth#docker
      rule: Host(`portainer.docker.mydomain`)
whoami.traefik.yml
http:
  routers:
    whoami:
      entryPoints: [http]
      middlewares: [redirect-to-http]
      service: whoami-preauth#docker
      rule: Host(`whoami.docker.mydomain`)
Where are the files portainer.traefik.yml and whoami.traefik.yml located? If they are on the host machine, you can directly copy them to ./traefik/conf/. – Shashank V
The thing is, I can't have all the files in traefik/conf.
That would require manually dropping a file there every time I create a new image.
I believe every service should be responsible for its own files.
Also, when traefik starts and finds files of other services that haven't started yet, it logs lots of errors.
To avoid this behavior, I would like to put the file there only when the container is started.
Below is the project file structure.
You can use a volume across all services. Just define it in your docker-compose.yml and assign it to each service:
version: "3.5"
services:
traefik:
image: traefik
env_file: ./traefik/env
restart: unless-stopped
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ./traefik/conf/:/etc/traefik/conf/
- ./traefik/traefik.yml:/etc/traefik/traefik.yml
- foo:/path/to/share/
portainer:
image: portainer/portainer
depends_on: [traefik]
command: --no-auth -H unix:///var/run/docker.sock
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- foo:/another/path/to/share/
whoami:
image: containous/whoami
depends_on: [traefik]
volumes:
- foo:/and/another/path/
volumes:
foo:
driver: local
This is the closest equivalent to the --volumes-from feature of "plain" Docker.
Your master container would then have to use the same volume. If that container doesn't run within the same Docker Compose context, you have to define the volume as external beforehand.
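A minimal sketch of that external variant (keeping the volume name foo from above): create the volume once on the host,

docker volume create foo

then declare it as external in each compose file that should share it:

volumes:
  foo:
    external: true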

Docker compose up does not restart on reboot

I have successfully created Docker containers, and they work when started with:
sudo docker-compose up -d
The yml is as follows:
services:
  nginx:
    build: ./nginx
    restart: always
    ports:
      - "80:80"
    volumes:
      - ./static:/static
    links:
      - node:node
  node:
    build: ./node
    restart: always
    ports:
      - "8080:8080"
    volumes:
      - ./node:/usr/src/app
      - /usr/src/app/node_modules
Am I supposed to create a service for this? Reading the documentation, I thought that the containers would restart on reboot if restart was set to always.
FYI: the yml is inside a projects directory in the home of the base user, ubuntu.
I tried checking for existing solutions but could not find anything appropriate. Thanks.
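One common cause, offered as a hedged note: restart: always only re-launches containers when the Docker daemon itself starts at boot, so on a systemd-based Ubuntu host it is worth checking that the daemon is enabled:

sudo systemctl enable docker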

How to run Docker container in its own network

Today I switched from "Docker Toolbox" to "Docker for Mac", because Docker now finally has write access to my User directory (which didn't work with "Docker Toolbox") - Yay!
But this change also means that all containers now run on localhost and not under Docker's own IP as before (e.g. 192.168.99.100).
Since my localhost already listens on various ports by default (80, 443, ...), and I don't want to keep appending newly created, non-conflicting ports to my local dev domains (e.g. example.dev:8443), I wonder how to run my containers as before.
I read about network configs and tried a lot of things (creating a new host network, exposing ports with an IP in front of them, ...), but didn't get it working.
What kind of config do I need to run my app container with the IP 192.168.99.100? That's my docker-compose.yml so far.
version: '2'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - mysql
      - redis
      - memcached
    ports:
      - 80:80
      - 443:443
      - 22:22
      - 3000:3000
      - 3001:3001
    volumes:
      - ./app/:/app/
      - /tmp/debug/:/tmp/debug/
      - ./:/docker/
    volumes_from:
      - storage
    # cap and privileged needed for slowlog
    cap_add:
      - SYS_PTRACE
    privileged: true
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  mysql:
    build:
      context: docker/mysql/
      dockerfile: MariaDB-10
    ports:
      - 3306:3306
    volumes_from:
      - storage
    volumes:
      - ./data/mysql:/var/lib/mysql
      - /tmp/debug/:/tmp/debug/
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  redis:
    build: docker/redis/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  memcached:
    build: docker/memcached/
    volumes_from:
      - storage
    env_file:
      - etc/environment.yml
      - etc/environment.development.yml
  storage:
    build: docker/storage/
    volumes:
      - /storage
You need to declare "networks:" for each of your services, e.g.:
version: '2'
services:
  app:
    image: xxxx:xxx
    ports:
      - "80:80"
    networks:
      - my-network
  mysql:
    image: xxxx:xxx
    networks:
      - my-network
networks:
  my-network:
    driver: bridge
Then, from your app's configuration, you can use "mysql" as the hostname of the database server.
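For instance, in the application's environment files referenced above, the database host would simply be the service name (the variable names here are assumptions for illustration):

DB_HOST=mysql
DB_PORT=3306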
You can define a network in your compose file and then add any services to that network.
https://docs.docker.com/compose/networking/
But I would suggest you just use different host ports now that you are running natively, e.g. 8080:80.
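Applied to the app service from the question, that remapping might look like the following sketch (the host-side ports are arbitrary choices):

    ports:
      - 8080:80
      - 8443:443
      - 2222:22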

Increasing docker volume size using docker-compose

I'm working on deploying a Flask web app with Docker (using docker-compose). Users upload files which can be particularly big (>1GB). My question is -- do I have to be worried about the container running out of space? I've read that containers have a default max size of 10GB, and I will definitely exceed that quickly. If I create a volume in the "flask-app/uploads" directory where all the files are stored, does that solve my problem or is the volume just another container with the same size limitations? Is there any way I can just store everything that gets uploaded to "flask-app/uploads" to the host machine so nothing get written to the container?
Here is my docker-compose.yml file for reference:
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/flask-app/static
    - /usr/src/flask-app/uploads (??)
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
data:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  command: "true"
postgres:
  restart: always
  image: postgres:latest
  volumes_from:
    - data
  ports:
    - "5432:5432"
Yes, this is common practice: use a host volume (a bind mount), so the uploaded files are written to the host filesystem instead of the container's writable layer. Change the volume line to - ./uploads:/usr/src/flask-app/uploads.
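Applied to the web service above, a minimal sketch (the host-side ./uploads path is an assumption; point it wherever the uploads should live on the host):

web:
  restart: always
  build: ./web
  expose:
    - "8000"
  volumes:
    - /usr/src/flask-app/static
    - ./uploads:/usr/src/flask-app/uploads   # bind mount: data lives on the host, not in the container
  env_file: .env
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app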
