Conflict between two docker containers in different directories - docker

Maybe somebody has had this specific problem... There are two web applications in different directories, both in Docker containers, on Linux (CentOS). When I start the first application (docker-compose up -d), everything works fine. If I then launch the second application from another directory, the containers of the first application go down. Why? The container names are different, and the ports published by Docker are also different.
First app's docker-compose.yml:
services:
  web:
    container_name: myapp-nginx
    image: nginx:latest
    ports:
      - "8000:80"
      - "443:443"
    volumes:
      - ./:/myapp
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - php
  php:
    build: .
    container_name: myapp-php-fpm
    image: php:7.4-fpm
    volumes:
      - ./:/myapp
      - ./logs:/myapp/logs
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - mysql:db
  mysql:
    image: mariadb:latest
    container_name: myapp-mysql
    volumes:
      - /opt/myapp/data:/var/lib/mysql
    env_file:
      - mysql.env
    restart: unless-stopped
    ports:
      - "3306:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: myapp-phpmyadmin
    environment:
      - MAX_EXECUTION_TIME=600
      - UPLOAD_LIMIT=800M
      - PMA_HOST=localhost
      - PMA_PORT=3306
      - PMA_ARBITRARY=1
    ports:
      - "80:80"
    links:
      - mysql:db
Second app's docker-compose.yml:
version: '3'
services:
  web:
    container_name: client-nginx
    image: nginx:latest
    ports:
      - "20203:81"
    volumes:
      - ./:/myapp_client
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - php
  php:
    build: .
    container_name: client-php-fpm
    image: php:7.4-fpm
    volumes:
      - ./:/myapp_client
      - ./logs:/myapp_client/logs
      - ./php-fpm/php-ini-overrides.ini:/etc/php/7.4/fpm/conf.d/99-overrides.ini
    links:
      - mysql:db
  mysql:
    image: mariadb:latest
    container_name: client-mysql
    volumes:
      - /opt/myapp_client/data:/var/lib/mysql
    env_file:
      - mysql.env
    restart: unless-stopped
    ports:
      - "20202:3307"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    container_name: client-phpmyadmin
    environment:
      - MAX_EXECUTION_TIME=600
      - UPLOAD_LIMIT=800M
      - PMA_HOST=localhost
      - PMA_PORT=3307
      - PMA_ARBITRARY=1
    ports:
      - "20204:82"
    links:
      - mysql:db
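One thing worth checking, though it is not confirmed by anything in the question: docker-compose derives its default project name from the directory name, so if the two directories happen to have the same basename, both stacks are treated as the same project and the second docker-compose up recreates (and effectively removes) the first project's containers. A minimal sketch, assuming that is the case, is to pin distinct project names in an .env file next to each docker-compose.yml (the values myapp and myapp_client are placeholders):

# .env in the first app's directory
COMPOSE_PROJECT_NAME=myapp

# .env in the second app's directory
COMPOSE_PROJECT_NAME=myapp_client

The same effect can be had per invocation with docker-compose -p <project-name> up -d.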

Related

Adguard Home docker compose config and db missing

I'm trying to run AdGuard Home with Docker Compose. I have created plenty of other containers with Docker Compose, but this one does not create any files in the mapped folder.
I tried to reproduce the docker command from the official instructions, but every time I recreate the container I end up at the setup page and all settings are gone.
Any ideas?
This is my compose file:
version: "3"
volumes:
  homematic_data:
    external: true
networks:
  homematic:
services:
  samba:
    image: dperson/samba
    container_name: samba
    restart: always
    ports:
      - "137:137/udp"
      - "138:138/udp"
      - "139:139/tcp"
      - "445:445/tcp"
    healthcheck:
      disable: true
    environment:
      - TZ='Europe/Berlin'
      - WORKGROUP=workgroup
      - RECYCLE=false
      - USER1=pi;PASSWORD;1000
      - SHARE1=homematic_docker;/shares/homematic_docker;yes;no;yes;pi;pi
    volumes:
      - /home/pi:/shares/homematic_docker
    networks:
      - homematic
  promtail:
    image: grafana/promtail:latest
    container_name: promtail
    volumes:
      - /var/log:/var/log
      - ./promtail:/etc/promtail
    restart: unless-stopped
    command: -config.file=/etc/promtail/promtail-config.yml
    networks:
      - homematic
  node-exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
      - /:/host:ro,rslave
    command:
      - '--path.rootfs=/host'
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
    ports:
      - 9100:9100
    networks:
      - homematic
    restart: always
  ###################### portainer
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./portainer:/data
    ports:
      - 9000:9000
  adguard:
    image: adguard/adguardhome
    container_name: adguard
    restart: unless-stopped
    ports:
      - 53:53/tcp
      - 53:53/udp
      - 67:67/udp
      - 69:68/udp
      - 80:80/tcp
      - 443:443/tcp
      - 443:443/udp
      - 3000:3000/tcp
      - 853:853/tcp
      - 784:784/udp
      - 853:853/udp
      - 8853:8853/udp
      - 5443:5443/tcp
      - 5443:5443/udp
    # environment:
    #   - TZ=Europe/Berlin
    volumes:
      - /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work\
      - /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf\
    # network_mode: host
  raspberrymatic:
    image: ghcr.io/jens-maus/raspberrymatic:3.67.10.20230117-27abde9
    container_name: homematic
    hostname: homematic-raspi
    privileged: true
    restart: unless-stopped
    stop_grace_period: 30s
    volumes:
      - homematic_data:/usr/local:rw
      - /lib/modules:/lib/modules:ro
      - /run/udev/control:/run/udev/control
    ports:
      - "8080:80"
      - "2001:2001"
      - "2010:2010"
      - "9292:9292"
      - "8181:8181"
    networks:
      - homematic
Within the folder /opt/adguardhome/work inside the container I see a data folder with a database, and after I finished the setup the conf folder inside the container also contains a YAML file.
Unfortunately, I had copied the trailing backslashes of the docker command into the volume mappings; that was why I didn't get any data on the host. Thank you Mike!
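For reference, the two volume lines without the stray backslashes (paths copied from the compose file above):

    volumes:
      - /home/pi/homematicDocker/adguard/work:/opt/adguardhome/work
      - /home/pi/homematicDocker/adguard/conf:/opt/adguardhome/conf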

Implementing HSTS on Docker containers

I have a VM in which an application runs from a docker-compose file. How do I implement HSTS on it?
version: "3"
services:
  drone-server:
    container_name: drone_server
    image: drone/drone:2.4
    env_file:
      - /opt/drone/drone-server.env
    volumes:
      - /var/lib/drone:/data
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:80"
    restart: always
  drone-agent:
    container_name: drone_agent
    image: drone/drone-runner-docker
    env_file:
      - /opt/drone/drone-agent.env
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
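No answer is recorded here, but HSTS is normally set by whatever terminates TLS in front of these containers rather than by the drone containers themselves. A minimal sketch, assuming a reverse proxy such as nginx sits in front and serves HTTPS (the hostname and certificate paths below are hypothetical, not part of the original setup):

server {
    listen 443 ssl;
    server_name drone.example.com;                              # hypothetical hostname
    ssl_certificate     /etc/nginx/certs/drone.example.com.crt; # hypothetical cert paths
    ssl_certificate_key /etc/nginx/certs/drone.example.com.key;

    # HSTS: tell browsers to use HTTPS only, for one year, including subdomains
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the published drone-server port
    }
}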

Hadoop namenode not recognizing datanodes in docker

I'm new to Hadoop and learning to start a Hadoop system using Docker, but I have been stuck at one point for weeks and finally have to ask here. I could launch the containers individually without any problems, and the namenode always recognized the running datanodes. However, when I tried to set up a docker-compose.yml file to launch all the containers at once, I ran into multiple problems, one of which is that the namenode never recognizes the running datanodes. Do you have any suggestions to help me fix it?
Here's my docker-compose file:
version: "3"
services:
  base:
    image: hpcnube-base-image
  hpcnube-namenode:
    image: hpcnube-namenode-image
    depends_on:
      - base
    hostname: hpcnube-namenode
    container_name: hpcnube-namenode
    networks:
      - dockerfiles1_hpcnube-net
    ports:
      - "9870:9870"
  hpcnube-resourcemanager:
    image: hpcnube-resourcemanager-image
    container_name: hpcnube-resourcemanager
    depends_on:
      - hpcnube-namenode
      - hpcnube-dnnm2
      - hpcnube-dnnm1
    hostname: hpcnube-resourcemanager
    networks:
      - dockerfiles1_hpcnube-net
    ports:
      - "8088:8088"
  hpcnube-dnnm1:
    image: hpcnube-dnnm-image
    container_name: hpcnube-dnnm1
    hostname: hpcnube-dnnm1
    depends_on:
      - base
      - hpcnube-namenode
    networks:
      - dockerfiles1_hpcnube-net
    #command: "/opt/bd/start-daemons.sh"
  hpcnube-dnnm2:
    #build: ./DataNode-NodeManager
    image: hpcnube-dnnm-image
    container_name: hpcnube-dnnm2
    hostname: hpcnube-dnnm2
    depends_on:
      - base
      - hpcnube-namenode
    networks:
      - dockerfiles1_hpcnube-net
    #command: "/opt/bd/start-daemons.sh"
  hpcnube-checkpoint:
    image: hpcnube-checkpointnode-image
    hostname: hpcnube-checkpointnode
    depends_on:
      - base
      - hpcnube-namenode
      - hpcnube-resourcemanager
      - hpcnube-dnnm2
      - hpcnube-dnnm1
    networks:
      - dockerfiles1_hpcnube-net
  hpcnube-timeline:
    image: hpcnube-timelineserver-image
    hostname: hpcnube-timelineserver
    depends_on:
      - base
      - hpcnube-namenode
      - hpcnube-resourcemanager
      - hpcnube-dnnm2
      - hpcnube-dnnm1
      - hpcnube-checkpoint
    networks:
      - dockerfiles1_hpcnube-net
  hpcnube-frontend:
    build: ./FrontEnd
    hostname: hpcnube-frontend
    depends_on:
      - base
      - hpcnube-namenode
      - hpcnube-resourcemanager
      - hpcnube-dnnm2
      - hpcnube-dnnm1
      - hpcnube-checkpoint
      - hpcnube-timeline
    hostname: hpcnube-frontend
    networks:
      - dockerfiles1_hpcnube-net
    ports:
      - "2345:22"
networks:
  dockerfiles1_hpcnube-net:
    driver: bridge

ERROR: In file './docker-compose.yml', service name True must be a quoted string, i.e. 'True'

My docker-compose.yml is shown below. When I run docker-compose up I get the following error:
ERROR: In file './docker-compose.yml', the service name True must be a quoted string, i.e. 'True'.
version: '3'
services:
  db:
    restart: always
    image: postgres:9.6-alpine
    container_name: pleroma_postgres
    networks:
      - pleroma
    volumes:
      - ./postgres:/var/lib/postgresql/data
  web:
    build: .
    image: pleroma
    container_name: pleroma_web
    restart: always
    environment:
      - VIRTUAL_HOST=<myplaceholderhost>
      - VIRTUAL_PORT=4000
      - LETSENCRYPT_HOST=<myplaceholderhost>
      - LETENCRYPT_EMAIL=<myplaceholderemail>
    expose:
      - "4000"
    volumes:
      - ./uploads:/pleroma/uploads
    depends_on:
      - db
  nginx:
    image: jwilder/nginx-proxy
    container_name: nginx
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /apps/docker-articles/nginx/vhost.d:/etc/nginx/vhost.d
      - /apps/docker-articles/nginx/certs:/etc/nginx/certs:ro
      - /apps/docker-articles/nginx/html:/usr/share/nginx/html
    restart: always
    ports:
      - "80:80"
      - "443:443"
    labels:
      com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy: "true"
    networks:
      - pleroma
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion:v1.5
    container_name: letsencrypt
    volumes_from:
      - nginx
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /apps/docker-articles/nginx/vhost.d:/etc/nginx/vhost.d
      - /apps/docker/articles/nginx/certs:/etc/nginx/certs:rw
      - /apps/docker-articles/nginx/html:/usr/share/nginx/html
networks:
  pleroma:
My docker version is
Docker version 18.06.1-ce, build e68fc7a
My docker compose version is
docker-compose version 1.23.1, build b02f1306
Running CoreOS version 1911.3.0
I ended up resolving this issue by modifying the nginx and letsencrypt portions of my docker-compose.yml file to be as follows.
nginx:
  image: jwilder/nginx-proxy
  container_name: nginx
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /apps/docker-articles/nginx/vhost.d:/etc/nginx/vhost.d
    - /apps/docker-articles/nginx/certs:/etc/nginx/certs:ro
    - /apps/docker-articles/nginx/html:/usr/share/nginx/html
  restart: always
  ports:
    - "80:80"
    - "443:443"
  labels:
    - "NGINX_PROXY_CONTAINER=true"
  networks:
    - pleroma
letsencrypt:
  image: jrcs/letsencrypt-nginx-proxy-companion:v1.5
  container_name: letsencrypt
  environment:
    - NGINX_PROXY_CONTAINER=true
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /apps/docker-articles/nginx/vhost.d:/etc/nginx/vhost.d
    - /apps/docker/articles/nginx/certs:/etc/nginx/certs:rw
    - /apps/docker-articles/nginx/html:/usr/share/nginx/html
It seems "volumes_from" is deprecated in Compose file version 3. I had also forgotten the quotes around my label and needed to set the environment within the letsencrypt service.
In a CentOS environment, your .yml file directory must be /usr/local/bin.

How to deal with multiple services inside a docker-compose.yml?

I have been using a microservices architecture to develop my software, and I run my services with Docker Compose. My problem is that whenever a new service is created I have to add it to docker-compose.yml; by now the file has 200+ lines, and I have around 17 services that are all related to each other.
So my question is: how do I manage docker-compose.yml so that it stays easy to maintain and clean?
My docker-compose.yml:
version: '2'
services:
  mongo:
    container_name: mongodb
    image: mongo:3.4.7
    volumes:
      - ./mongo/data:/data/db
    ports:
      - 54321:27017
    networks:
      - zensorium_backend
    restart: always
    command: mongod --smallfiles
  golang_oauth:
    container_name: golang_oauth
    build: .
    volumes:
      - ./oauth:/go/src/oauth
    working_dir: /go/src/oauth
    ports:
      - 8080:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_account:
    container_name: golang_account
    build: .
    volumes:
      - ./account:/go/src/account
    working_dir: /go/src/account
    ports:
      - 8081:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_client:
    container_name: golang_client
    build: .
    volumes:
      - ./client:/go/src/client
    working_dir: /go/src/client
    ports:
      - 8082:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_mail:
    container_name: golang_mail
    build: .
    volumes:
      - ./mail:/go/src/mail
    working_dir: /go/src/mail
    expose:
      - 8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_user:
    container_name: golang_user
    build: .
    volumes:
      - ./user:/go/src/user
    working_dir: /go/src/user
    ports:
      - 8083:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_gateway2:
    container_name: golang_gateway2
    build: .
    volumes:
      - ./gateway2:/go/src/gateway
    working_dir: /go/src/gateway
    ports:
      - 8084:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_measurement:
    container_name: golang_measurement
    build: .
    volumes:
      - ./measurement:/go/src/measurement
    working_dir: /go/src/measurement
    ports:
      - 8085:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_app:
    container_name: golang_app
    build: .
    volumes:
      - ./app:/go/src/app
    working_dir: /go/src/app
    ports:
      - 8086:8082
    depends_on:
      - mongo
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_logging:
    container_name: golang_logging
    build: .
    volumes:
      - ./logging:/go/src/logging
    working_dir: /go/src/logging
    ports:
      - 8087:8082
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_notify:
    container_name: golang_notify
    build: .
    volumes:
      - ./notify:/go/src/notify
    working_dir: /go/src/notify
    ports:
      - 8088:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  golang_routine:
    container_name: golang_routine
    build: .
    volumes:
      - ./routine:/go/src/routine
    working_dir: /go/src/routine
    ports:
      - 8089:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
  angular_cli:
    container_name: angular_cli
    build: ./angular-cli
    ports:
      - "4200:4200"
    networks:
      - zensorium_frontend
    working_dir: /home/node/webPortal
    volumes:
      - ./angular-cli/webPortal:/home/node/webPortal
      - /home/node/webPortal/node_modules
    restart: always
    command: npm start
  golang_dev:
    container_name: golang_dev
    build: .
    volumes:
      - ./dev:/go/src/dev
    working_dir: /go/src/dev
    ports:
      - 8090:8082
    env_file:
      - ./.api.env
    networks:
      - zensorium_backend
    command: realize start --run
    restart: always
networks:
  zensorium_backend:
    driver: bridge
  zensorium_frontend:
    driver: bridge
You can also use YAML's features to write repeated strings in a shorter way. For example:
---
version: '3.4'
x-command: &command bash -c "ls && sleep infinity"
services:
  service1:
    command: *command
  service2:
    command: *command
And one can use templates:
https://matthiasnoback.nl/2018/03/defining-multiple-similar-services-with-docker-compose/
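A hedged sketch of that idea applied to two of the services from the question (the x-go-service extension field and its contents are assumptions drawn from the question's own definitions; YAML merge keys like this require Compose file format 3.4 or later):

version: '3.4'
x-go-service: &go-service        # extension field holding the shared settings
  build: .
  command: realize start --run
  restart: always
  env_file: [./.api.env]
  networks: [zensorium_backend]
services:
  golang_oauth:
    <<: *go-service              # merge the shared settings into this service
    working_dir: /go/src/oauth
    ports: ["8080:8082"]
    volumes: ["./oauth:/go/src/oauth"]
  golang_account:
    <<: *go-service
    working_dir: /go/src/account
    ports: ["8081:8082"]
    volumes: ["./account:/go/src/account"]
networks:
  zensorium_backend:
    driver: bridge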
My recommendation is to keep docker-compose.yml as small as possible when you have a lot of services defined. For example, you could write the definitions inline, use an .env file so you don't always have to specify the variables, and remove container_name. working_dir can be moved to the Dockerfile. If you are using version 2, you can use service inheritance, but this feature is not supported in Compose file version 3 and newer (see the sketch after the example below)... You should also use named volumes.
Example of an inline docker-compose.yml:
version: '2'
services:
  mongo:
    image: mongo:3.4.7
    command: mongod --smallfiles
    restart: always
    ports: ["54321:27017"]
    networks: [zensorium_backend]
    volumes: ["my_mongo_data:/data/db"]
  golang_oauth:
    build: .
    command: realize start --run
    restart: always
    working_dir: /go/src/oauth # could be moved to the Dockerfile
    ports: ["8080:8082"]
    depends_on: [mongo]
    environment: ["VAR_1=${VAR_1}","VAR2=${VAR2}"] # using .env file
    networks: [zensorium_backend]
    volumes: ["my_oauth_data:/go/src/oauth"]
volumes:
  my_mongo_data:
  my_oauth_data:
networks:
  zensorium_backend:
    driver: bridge
  zensorium_frontend:
    driver: bridge
And the .env file in the same folder as the .yml:
VAR_1=mazel
VAR2=tov
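The version 2 service inheritance mentioned above could look roughly like this sketch; the file name common.yml and the go-base service name are hypothetical, not part of the original setup:

# common.yml (hypothetical file holding the shared definition)
version: '2'
services:
  go-base:
    build: .
    command: realize start --run
    restart: always
    env_file:
      - ./.api.env

# docker-compose.yml
version: '2'
services:
  golang_oauth:
    extends:
      file: common.yml
      service: go-base
    working_dir: /go/src/oauth
    ports:
      - 8080:8082

Note that extends does not carry over depends_on or links, so those still have to be declared per service.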
