docker stack deploy with mongo volume

I'm starting my docker stack with the command:
docker stack deploy --with-registry-auth -c docker-compose.yml app
My docker-compose.yml contains this entry for mongo:
mongodb:
  image: mongo:3.6
  volumes:
    - mongodb:/var/lib/mongodb
  ports:
    - 27017:27017
  networks:
    - backend
  environment:
    - AUTH=yes
  logging:
    driver: "json-file"
    options:
      max-size: "100m"
      max-file: "5"
  deploy:
    replicas: 1
    placement:
      constraints: [node.hostname == hostname]
networks:
  frontend:
  backend:
volumes:
  mongodb:
I'm stopping the stack with docker stack rm app. Why am I losing the Mongo data after starting it again with the same command, docker stack deploy --with-registry-auth -c docker-compose.yml app? How can I avoid this?
Thanks, smola

OK, I've found the answer. The mongo:3.6 Dockerfile already declares two volumes: VOLUME /data/db /data/configdb.
So in docker-compose.yml you need to mount host directories onto those paths:
mongodb:
  image: mongo:3.6
  volumes:
    - /sampledir/db:/data/db                # <-----
    - /sampledir/configdb:/data/configdb    # <-----
  ports:
    - 127.0.0.1:27017:27017
  networks:
    - backend
  environment:
    - AUTH=yes
  logging:
    driver: "json-file"
    options:
      max-size: "100m"
      max-file: "5"
  deploy:
    replicas: 1
    placement:
      constraints: [node.hostname == hostname]
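Side note: the named volume from the original file would also have worked, had it been mounted at the path Mongo actually writes to. With mongodb:/var/lib/mongodb, the /data/db directory declared by the image lands on an anonymous volume, and every replacement container gets a fresh one, so the data appears lost. A minimal sketch of that alternative (the volume name mongodb_config is invented here; named volumes are local to each node, which is fine given the node.hostname constraint):

mongodb:
  image: mongo:3.6
  volumes:
    - mongodb:/data/db              # mount the named volume at the image's data path
    - mongodb_config:/data/configdb
volumes:
  mongodb:
  mongodb_config: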

Related

Docker swarm with reverse proxy, run requests based on request uri path to certain node

I have the following nodes with hostnames docker-php-pos-web-1, docker-php-pos-web-2, docker-php-pos-web-3, and docker-php-pos-web-4 in a docker swarm cluster, with caddy proxy configured in distributed mode.
I want requests with cron anywhere in the URL path to run on docker-php-pos-web-4. An example request would be demo.phppointofsale.com/index.php/ecommerce/cron. If "cron" is not in the URL, it should route as normal.
I want to avoid having two copies of production_php_point_of_sale_app just for this.
I am already routing to docker-php-pos-web-4 from my load balancer when "cron" is in the request path, but in docker swarm the mesh network can still decide which node actually runs the task. I always want docker-php-pos-web-4 to run these tasks.
Below is my docker-compose.yml file
version: '3.9'
services:
  production_php_point_of_sale_app:
    logging:
      driver: "local"
    deploy:
      restart_policy:
        condition: any
      mode: global
      labels:
        caddy: "http://*.phppointofsale.com, http://*.phppos.com"
        caddy.reverse_proxy.trusted_proxies: "private_ranges"
        caddy.reverse_proxy: "{{upstreams}}"
    image: phppointofsale/production-app
    build:
      context: "production_php_point_of_sale_app"
    restart: always
    env_file:
      - production_php_point_of_sale_app/.env
      - .env
    networks:
      - app_network
      - mail
  caddy_server:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    ports:
      - 80:80
    networks:
      - caddy_controller
      - app_network
    environment:
      - CADDY_DOCKER_MODE=server
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    volumes:
      - caddy_data:/data
    deploy:
      restart_policy:
        condition: any
      mode: global
      labels:
        caddy_controlled_server:
  caddy_controller:
    image: lucaslorentz/caddy-docker-proxy:ci-alpine
    networks:
      - caddy_controller
      - app_network
    environment:
      - CADDY_DOCKER_MODE=controller
      - CADDY_CONTROLLER_NETWORK=10.200.200.0/24
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      restart_policy:
        condition: any
      placement:
        constraints: [node.role == manager]
networks:
  caddy_controller:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: "10.200.200.0/24"
  app_network:
    driver: overlay
  mail:
    driver: overlay
volumes:
  caddy_data: {}
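One possible approach, sketched here without having been tested: run a second service from the same image, pin it to docker-php-pos-web-4 with a placement constraint, and have caddy-docker-proxy route only cron paths to it via a named matcher label. The service name cron_worker and the matcher name @cron are invented for illustration, and the matcher label syntax should be checked against the caddy-docker-proxy docs; since it reuses the same image, this avoids a second copy of the codebase itself:

  cron_worker:
    image: phppointofsale/production-app   # same image as the main service
    env_file:
      - production_php_point_of_sale_app/.env
      - .env
    networks:
      - app_network
    deploy:
      replicas: 1
      placement:
        constraints: [node.hostname == docker-php-pos-web-4]
      labels:
        caddy: "http://*.phppointofsale.com, http://*.phppos.com"
        # named matcher: only paths containing "cron" hit this upstream
        caddy.@cron.path: "*cron*"
        caddy.reverse_proxy: "@cron {{upstreams}}"

Because the pinned service is the only upstream behind the @cron matcher, cron requests should always land on docker-php-pos-web-4 rather than wherever the mesh sends them.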

Deploying a docker stack to a swarm fails to start some containers

I'm trying to deploy a compose project to a swarm, but after I deploy it not all of the services start, and some of them keep restarting.
I have the following compose file
version: "3.3"
volumes:
jenkins_home:
external: false
driver: local
driver_opts:
type: none
o: 'bind'
device: '/var/jenkins_home'
docker_certs:
external: false
driver: local
driver_opts:
type: none
o: 'bind'
device: '/etc/certs'
services:
docker:
image: docker:dind
restart: unless-stopped
privileged: true
volumes:
- jenkins_home:/var/jenkins_home
- docker_certs:/certs/client
ports:
- "2376:2376"
environment:
DOCKER_TLS_CERTDIR: /certs
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role == manager]
jenkins:
image: git.example.com:8444/devops/docker-services/jenkins
build:
context: ./
dockerfile: services/jenkins.dockerfile
restart: unless-stopped
depends_on:
- "docker"
volumes:
- jenkins_home:/var/jenkins_home
- docker_certs:/certs/client
ports:
- "636:636"
- "8443:8443"
- "3268:3268"
- "50000:50000"
environment:
DOCKER_HOST: tcp://docker:2376
DOCKER_CERT_PATH: /certs/client
DOCKER_TLS_VERIFY: 1
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role == manager]
icecc-scheduler:
image: git.example.com:8444/devops/docker-services/icecc-scheduler
build:
context: ./
dockerfile: services/icecc-scheduler.dockerfile
restart: unless-stopped
ports:
- "8765:8765"
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role == manager]
icecc-daemon:
image: git.example.com:8444/devops/docker-services/icecc-daemon
build:
context: ./
dockerfile: services/icecc-daemon.dockerfile
restart: unless-stopped
ports:
- "8766:8766"
- "10245:10245"
depends_on:
- "icecc-scheduler"
deploy:
mode: global
and a swarm with two nodes (docker node ls):
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
i6edk9ny6z38krv6m5738uzwu st12873 Ready Active 20.10.12
phnvvy2139wft9innou0uermq * st12874 Ready Active Leader 20.10.12
I have all the images built and pushed to the docker registry.
When I run docker stack deploy -c docker-compose.yml build-farm it says it deploys successfully. But when I then list the services:
docker stack services build-farm
ID NAME MODE REPLICAS IMAGE PORTS
4z6w98jmswav build-farm_docker replicated 0/1 docker:dind *:2376->2376/tcp
r7xuq4vgc92i build-farm_icecc-daemon global 0/2 git.example.com:8444/devops/docker-services/icecc-daemon:latest *:8766->8766/tcp, *:10245->10245/tcp
20ukipii7wli build-farm_icecc-scheduler replicated 0/1 git.example.com:8444/devops/docker-services/icecc-scheduler:latest *:8765->8765/tcp
37r4pm7jgku5 build-farm_jenkins replicated 1/1 git.example.com:8444/devops/docker-services/jenkins:latest *:636->636/tcp, *:3268->3268/tcp, *:8443->8443/tcp, *:50000->50000/tcp
The icecc scheduler and daemon never start, and the docker:dind service keeps starting and stopping.
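In case it helps anyone hitting the same wall: docker stack deploy ignores several compose options that swarm services don't support, among them privileged, restart, and depends_on (the CLI typically prints an "Ignoring unsupported options" warning). That matches the symptoms here, since docker:dind cannot run unprivileged and will crash-loop without it. The per-task errors are usually visible with something like:

docker service ps --no-trunc build-farm_docker       # task states plus the error column
docker service logs build-farm_icecc-scheduler      # container output, if a task ever started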

Docker stack deploy doesn't start services or deploy correctly

I have this compose file
version: "3.3"
volumes:
jenkins_home:
external: false
driver: local
driver_opts:
type: none
o: 'bind'
device: '/var/jenkins_home'
certs:
external: false
driver: local
driver_opts:
type: none
o: 'bind'
device: '/etc/certs'
services:
docker:
image: docker:dind
restart: unless-stopped
privileged: true
volumes:
- jenkins_home:/var/jenkins_home
- certs:/certs/client
ports:
- "2376:2376"
environment:
DOCKER_TLS_CERTDIR: /certs
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role == manager]
jenkins:
image: git.example.com:8444/devops/docker-services/jenkins
build:
context: services/jenkins
args:
ssl_pass: changeit
restart: unless-stopped
depends_on:
- "docker"
volumes:
- jenkins_home:/var/jenkins_home
- certs:/certs/client
ports:
- "8080:8080"
- "8443:8443"
- "3268:3268"
- "50000:50000"
environment:
DOCKER_HOST: tcp://docker:2376
DOCKER_CERT_PATH: /certs/client
DOCKER_TLS_VERIFY: 1
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role == manager]
icecc-scheduler:
image: git.example.com:8444/devops/docker-services/icecc-scheduler
build: services/icecc-scheduler
restart: unless-stopped
network_mode: host
deploy:
mode: replicated
replicas: 1
placement:
constraints: [node.role == manager]
icecc-daemon:
image: git.example.com:8444/devops/docker-services/icecc-daemon
build: services/icecc-daemon
restart: unless-stopped
network_mode: host
deploy:
mode: global
When I run docker stack deploy --compose-file docker-compose.yml build_farm, it claims to start everything successfully. But running docker stack services build_farm I get:
ID NAME MODE REPLICAS IMAGE PORTS
tap0zlw086wm build_farm_docker replicated 0/1 docker:dind *:2376->2376/tcp
n13pcmy8zpip build_farm_icecc-daemon global 0/1 git.example.com:8444/devops/docker-services/icecc-daemon:latest
ofpsosrhrzoq build_farm_icecc-scheduler replicated 0/1 git.example.com:8444/devops/docker-services/icecc-scheduler:latest
b9llhoe97vwz build_farm_jenkins replicated 0/1 git.example.com:8444/devops/docker-services/jenkins:latest *:3268->3268/tcp, *:8080->8080/tcp, *:8443->8443/tcp, *:50000->50000/tcp
This seems to mean none of the services actually started; I can't access any of them, which seems to confirm it.
The second issue is that the icecc-daemon service only has one replica, despite being started in global mode on a swarm with two nodes:
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
rc6aajdnwnis4dvn4um7qcwk9 ex12873 Ready Active 20.10.12
phnvvy2139wft9innou0uermq * ex12874 Ready Active Leader 20.10.12
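A note that may be relevant here: network_mode: host is among the options docker stack deploy ignores, so the icecc services land on the stack's default network rather than the host network. If host networking is genuinely needed, the nearest swarm equivalent I'm aware of is publishing the ports in host mode via the long port syntax; a sketch, borrowing port 8765 from the earlier variant of this scheduler:

  icecc-scheduler:
    image: git.example.com:8444/devops/docker-services/icecc-scheduler
    ports:
      # long syntax; mode: host binds on the node itself
      # and bypasses the swarm routing mesh
      - target: 8765
        published: 8765
        protocol: tcp
        mode: host
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]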

How can I connect a container with another container on the same node via docker swarm

Hey, I'm trying to create a docker swarm with a compose file.
The service "timeservice" connects to ActiveMQ via tcp://localhost:61616.
Without docker swarm I got it running with the following compose file:
version: "3.3"
services:
ActiveMQ:
container_name: ActiveMQ
image: rmohr/activemq
restart: always
ports:
- "61616:61616"
- "8161:8161"
networks:
TutoNetz:
ipv4_address: 172.20.0.2
Postgres:
container_name: Postgres
image: postgres
restart: always
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
ports:
- "5432:5432"
networks:
TutoNetz:
ipv4_address: 172.20.0.3
HelloJexxa:
container_name: HelloJexxa
image: 77f9ab0ef7d2
restart: always
ports:
- "7000:7000"
networks:
TutoNetz:
ipv4_address: 172.20.0.4
TimeService:
container_name: TimeService
image: 93c0aebd3f31
restart: always
ports:
- "7001:7000"
networks:
TutoNetz:
ipv4_address: 172.20.0.5
networks:
TutoNetz:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
But how do I get this running in a docker swarm environment? How do I create such a network? Everything runs on the same node (master).
Here is the compose file for docker swarm:
version: "3.8"
services:
ActiveMQ:
container_name: ActiveMQ-Swarm
image: rmohr/activemq
deploy:
replicas: 1
ports:
- "62626:61616"
- "8262:8161"
HelloJexxa:
container_name: HelloJexxa-Swarm
image: ni920/hellojexxa:latest
deploy:
replicas: 3
ports:
- "8001:7001"
Timeservice:
container_name: Timeservice-Swarm
image: ni920/timeserviceplain:latest
deploy:
replicas: 3
ports:
- "7000:7000"
visualizer:
container_name: SwarmVisualizer
image: dockersamples/visualizer
deploy:
placement:
constraints: [node.role == manager]
ports:
- 5000:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
I hope you can help me.
You can deploy the stack from a compose file on the swarm using the command below:
docker stack deploy --compose-file docker-compose.yml [stack_name]
Refer to the docs.
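That covers deployment, but not the tcp://localhost:61616 part: inside a swarm task, localhost reaches only the container itself. Services attached to the same overlay network can reach one another by service name through swarm's built-in DNS, so the time service would connect to tcp://ActiveMQ:61616 instead. A sketch of just the relevant pieces (the network name app_net is invented here):

version: "3.8"
services:
  ActiveMQ:
    image: rmohr/activemq
    networks:
      - app_net
  Timeservice:
    image: ni920/timeserviceplain:latest
    # configure the app to use tcp://ActiveMQ:61616,
    # not tcp://localhost:61616
    networks:
      - app_net
networks:
  app_net:
    driver: overlay

Fixed addresses like the ipv4_address entries in the bridge setup aren't needed once name-based discovery is in place.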

docker stack: Redis not working on worker node

I just completed the docker documentation and created two instances on AWS (http://13.127.150.218, http://13.235.134.73). The first one is the manager and the second one is the worker. Following is the compose file I used to deploy:
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
ports:
- "6379:6379"
volumes:
- "/home/docker/data:/data"
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:
Here the redis service has a constraint that restricts it to run only on the manager node. Now my question is: how is the web service on the worker instance supposed to use the redis service?
You can set the hostname parameter on each container and use that value to access services on the manager from the worker (and vice versa).
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: username/repo:tag
hostname: "web"
deploy:
replicas: 5
restart_policy:
condition: on-failure
resources:
limits:
cpus: "0.1"
memory: 50M
ports:
- "80:80"
networks:
- webnet
visualizer:
image: dockersamples/visualizer:stable
hostname: "visualizer"
ports:
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
deploy:
placement:
constraints: [node.role == manager]
networks:
- webnet
redis:
image: redis
hostname: "redis"
ports:
- "6379:6379"
volumes:
- "/home/docker/data:/data"
deploy:
placement:
constraints: [node.role == manager]
command: redis-server --appendonly yes
networks:
- webnet
networks:
webnet:
Additionally, if you use Portainer instead of the visualizer you can control your Swarm stack with more options:
https://hub.docker.com/r/portainer/portainer
BR,
Carlos
Consider a stack file like the one above.
Regardless of whether a service is placed on the manager or a worker, all the services in the stack file that share a network can use the embedded DNS functionality, which resolves each service by its service name.
In this case the service web reaches the service redis by its service name.
For example, the ping command can resolve the service web from within the container associated with the redis service, as in the sketch below.
Read more about Swarm Native Service Discovery to understand this.
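The ping demonstration in the original post was a screenshot; a rough reconstruction of the idea (run on the node hosting the redis task, with <container> being the redis task's container ID from docker ps; ping may need to be installed first in slim images):

docker exec -it <container> ping -c 2 web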
