How to set the service mode when using docker compose?

I need to set the service mode to global while using compose files.
Is there any chance we can do this in a compose file?
I have a requirement where, for a given service, there should be exactly one container on every node/host.
This doesn't happen with swarm's "spread" strategy: if a node goes down and comes back up, swarm just evens out the number of containers on each host, irrespective of services.
https://github.com/docker/compose/issues/3743

We can do this easily now with compose file version 3, under the deploy section, using mode: global.
Prerequisites:
docker-compose version 1.10.0+
Docker Engine version 1.13.0+
Example compose file:
version: "3"
services:
  nginx:
    image: nexus3.example.com/prd-nginx-sm:v1
    ports:
      - "80:80"
    networks:
      - cheers
    volumes:
      - logs:/rest/out/
    deploy:
      mode: global
      labels:
        feature.description: "Frontend"
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: any
    command: "/usr/sbin/nginx"
networks:
  cheers:
volumes:
  logs:
  data:
Deploy the compose file:
$ docker stack deploy -c sm-deploy-compose.yml --with-registry-auth CHEERS
This will deploy an nginx container on all the nodes participating in the cluster.
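To confirm the deployment, you can check that the service is listed as global and that one task landed on each node. A quick sketch, assuming the stack name CHEERS from above (these commands need a running swarm, so they are illustrative only):

```shell
# The MODE column for the stack's services should read "global"
docker service ls --filter label=com.docker.stack.namespace=CHEERS

# One running task per node; each node's hostname should appear exactly once
docker stack ps CHEERS --filter desired-state=running
```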

Related

Docker: Additional property pull_policy is not allowed

Hi guys, and excuse my English. I'm using Docker Swarm. When I attempt to deploy my application with this command:
docker stack deploy -c docker-compose.yml -c docker-compose.prod.yml chatappapi
it shows the following error: services.chat-app-api Additional property pull_policy is not allowed
Why does this happen?
How do I solve it?
docker-compose.yml
version: "3.9"
services:
  nginx:
    image: nginx:stable-alpine
    ports:
      - "5000:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
  chat-app-api:
    build: .
    image: username/myapp
    pull_policy: always
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - PORT=5000
      - MAIL_USERNAME=${MAIL_USERNAME}
      - MAIL_PASSWORD=${MAIL_PASSWORD}
      - CLIENT_ID=${CLIENT_ID}
      - CLIENT_SECRET=${CLIENT_SECRET}
      - REDIRECT_URI=${REDIRECT_URI}
      - REFRESH_TOKEN=${REFRESH_TOKEN}
    depends_on:
      - mongo-db
  mongo-db:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: 'username'
      MONGO_INITDB_ROOT_PASSWORD: 'password'
    ports:
      - "27017:27017"
    volumes:
      - mongo-db:/data/db
volumes:
  mongo-db:
docker-compose.prod.yml
version: "3.9"
services:
  nginx:
    ports:
      - "80:80"
  chat-app-api:
    deploy:
      mode: replicated
      replicas: 8
      restart_policy:
        condition: any
      update_config:
        parallelism: 2
        delay: 15s
    build:
      context: .
      args:
        NODE_ENV: production
    environment:
      - NODE_ENV=production
      - MONGO_USER=${MONGO_USER}
      - MONGO_PASSWORD=${MONGO_PASSWORD}
      - MONGO_IP=${MONGO_IP}
    command: node index.js
  mongo-db:
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_USER}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_PASSWORD}
Information
docker-compose version 1.29.2
Docker version 20.10.8
Ubuntu 20.04.2 LTS
Thanks in advance.
Your problem line is in docker-compose.yml
chat-app-api:
  build: .
  image: username/myapp
  pull_policy: always   # <== this is the bad line, delete it
The docker-compose file reference (for the version your tooling supports) doesn't have pull_policy in its API, because:
If the image does not exist, Compose attempts to pull it, unless you have also specified build, in which case it builds it using the specified options and tags it with the specified tag.
I think pull_policy used to be a thing for compose? Keep the latest API documentation open to refer to/search through while you're developing (things can and do change fairly frequently with Compose).
If you want to ensure that the most recent version of an image is pulled onto all servers in a swarm, run docker compose -f ./docker-compose.yml pull on each server in turn (docker stack doesn't yet have functionality to run this over an entire swarm).
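That per-server pull can be scripted. A rough sketch, assuming passwordless ssh to each node; the host names here are hypothetical placeholders:

```shell
# Pull the latest images on every swarm node before redeploying the stack
for host in swarm-node-1 swarm-node-2 swarm-node-3; do
  ssh "$host" docker compose -f ./docker-compose.yml pull
done
```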
As an aside: I wouldn't combine two .yml files in a single docker stack command without a very good reason to do so.
You are mixing docker-compose and docker swarm ideas up in the same files.
It is probably worth breaking your project up into 3 files:
docker-compose.yml
This would contain just the basic service definitions common to both compose and swarm.
docker-compose.override.yml
Conveniently, docker-compose and docker compose both read this file automatically. This file should contain any "ports:", "depends_on:", "build:" directives, and any convenience volumes used for development.
stack.production.yml
The override file to be used in stack deployments should contain (a) everything understood by swarm and not compose, and (b) everything required for production.
Here you would use configs: or even secrets: rather than volume mappings of local folders to inject content into containers. Rather than relying on ports: directives, you would install an ingress router on the swarm, such as traefik, and so on.
With this arrangement, docker compose can be used to develop and build your compose stack locally, and docker stack deploy won't be exposed to compose syntax it doesn't understand.
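A minimal sketch of that three-file split, using a hypothetical service name app (everything here is illustrative, not taken from your project):

```yaml
# docker-compose.yml: basic definitions common to compose and swarm
services:
  app:
    image: username/myapp

# docker-compose.override.yml: read automatically by docker compose for local dev
services:
  app:
    build: .
    ports:
      - "5000:5000"

# stack.production.yml: swarm-only directives for docker stack deploy
services:
  app:
    deploy:
      replicas: 8
```

Locally, docker compose up merges the first two files automatically; in production, docker stack deploy -c docker-compose.yml -c stack.production.yml chatappapi merges the first and third.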
pull_policy is in the latest version of the Compose specification.
To upgrade your docker-compose, refer to:
https://docs.docker.com/compose/install/
See the spec for more info:
https://github.com/compose-spec/compose-spec/blob/master/spec.md#pull_policy
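Under the newer Compose specification linked above, pull_policy is valid alongside build. A sketch of how it would look once your tooling supports it:

```yaml
services:
  chat-app-api:
    build: .
    image: username/myapp
    # one of: always, never, missing, build (see the compose-spec link above)
    pull_policy: always
```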

Docker Swarm with Spring Boot app

I'm currently trying to deploy an application with docker swarm on 3 virtual machines. I'm using docker-compose to build the image; my files are the following:
Dockerfile:
FROM openjdk:8-jdk-alpine
WORKDIR /home
ARG JAR_FILE
ARG PORT
VOLUME /tmp
COPY ${JAR_FILE} /home/app.jar
EXPOSE ${PORT}
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/home/app.jar"]
and my docker-compose is:
version: '3'
services:
  service_colpensiones:
    build:
      context: ./colpensiones-servicio
      dockerfile: Dockerfile
      args:
        JAR_FILE: ColpensionesServicio.jar
        PORT: 8082
    volumes:
      - data:/home
    ports:
      - 8082:8082
volumes:
  data:
I'm using the command docker-compose up -d --build to build the image; it automatically creates a container, which is deleted later. For docker swarm I use the 3 machines, one manager and two workers, and I have another file to deploy the service with replicas:
version: '3'
services:
  service_colpensiones:
    image: deploy_lyra_colpensiones_service_colpensiones
    deploy:
      replicas: 5
      resources:
        limits:
          cpus: "0.1"
          memory: 50M
      restart_policy:
        condition: on-failure
    volumes:
      - data:/home
    ports:
      - 8082:8082
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
networks:
  webnet:
volumes:
  data:
So far I think everything is fine, because with the command docker service ls I see the services created, and the visualizer dockersamples/visualizer:stable shows me the nodes correctly on port 8080. But when I make a request to the service's URL:
curl -4 http://192.168.99.100:8082/colpensiones/msg
I get the error:
curl: (7) Failed to connect to 192.168.99.100 port 8082: Connection refused
I am following the docker tutorial, Get Started: https://docs.docker.com/get-started/part5/
Any help is appreciated, thanks.
I had the same issue, but it was fixed after changing the port mapping of the spring boot service to:
ports:
  - "8082:8080"
The actual issue: the Tomcat server listens on port 8080 by default, not the port mentioned in the compose file. I also increased the memory limit.
FYI: the internal port of the tasks/containers running in a service can be the same as in other services, so using 8080 as the internal port for both the spring boot container and the visualizer container is not a problem.
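Alternatively, you can make Spring Boot itself listen on 8082 so the original 8082:8082 mapping works. A sketch, assuming the standard server.port property (Spring Boot's relaxed binding maps the SERVER_PORT environment variable onto it):

```yaml
services:
  service_colpensiones:
    environment:
      - SERVER_PORT=8082   # Spring Boot reads this as server.port
    ports:
      - 8082:8082
```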
I also faced the same issue with my application. I rebuilt my app after removing the -Djava.security.egd=file:/dev/./urandom java command-line property from the Dockerfile, and it started working for me.
Please check docker service logs <containerid> (to see container ids, run docker stack ps <servicename>) for whichever task served your request at the time, and see if there is any error message.
PS: I recently started with docker, so this might not be expert advice. Just in case it helps.

Docker: Option for automatic redeployment in docker-compose.yml for new image pushed

docker-compose.yml
version: "3"
services:
  daggr:
    image: "docker.pvt.com/test/daggr:stable"
    hostname: '{{.Node.Hostname}}'
    deploy:
      mode: global
      resources:
        limits:
          cpus: "2"
          memory: 50M
      restart_policy:
        condition: on-failure
    ports:
      - "4000:80"
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - webnet
  redis:
    image: redis
    networks:
      - webnet
networks:
  webnet:
Right now, I am deploying docker containers with the command below:
docker stack deploy -c docker-compose.yml daggrstack
Is there a way to specify in the docker-compose.yml file that if the image docker.pvt.com/test/daggr:stable is updated (i.e. docker build, docker tag, and docker push :stable), then the running containers are automatically re-deployed with the updated image?
That way I wouldn't have to re-run docker stack deploy every time I push a new docker image.
Is there a way to specify in the docker-compose.yml file that if the image, docker.pvt.com/test/daggr:stable, is updated (i.e. docker build, docker tag, and docker push :stable), then the running containers are automatically re-deployed with the updated image?
The answer is no. Docker swarm does not auto-update a service when a new image is available. This should be handled as part of a continuous deployment system.
However, Docker does make it easy to update the images of already running services.
As described in Apply rolling updates to a service, you can update the image of a service via:
docker service update --image docker.pvt.com/test/daggr:stable daggr
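A continuous-deployment hook can chain the push and the rolling update. A rough sketch, assuming you can push to the registry and run commands on a manager node; the service name follows the stack's <stack>_<service> convention, so it is daggrstack_daggr here:

```shell
# build, tag, and push the new image
docker build -t docker.pvt.com/test/daggr:stable .
docker push docker.pvt.com/test/daggr:stable

# trigger a rolling update on the swarm manager;
# --force redeploys even though the tag name is unchanged
docker service update --with-registry-auth --force \
  --image docker.pvt.com/test/daggr:stable daggrstack_daggr
```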

privileged mode in docker compose in a swarm

I am using docker-compose.yml to deploy services in a docker swarm on a cluster of raspberry pis. My services require access to the raspberry pi GPIO and need privileged mode. I am using docker version 18.02 with compose file version 3.6. When I deploy the stack, I receive the following message and the services do not get deployed: "Ignoring unsupported options: privileged". Any tips? Below is my docker-compose.yml file:
version: '3.6'
networks:
  swarm_network:
    driver: overlay
services:
  service1:
    image: localrepo/img1:v0.1
    privileged: true
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == home-desktop
    ports:
      - published: 8000
        target: 8000
        mode: host
    networks:
      swarm_network:
  service2:
    image: localrepo/img1:v0.1
    privileged: true
    deploy:
      mode: replicated
      replicas: 1
    ports:
      - published: 7000
        target: 7000
        mode: host
    networks:
      swarm_network:
  nodeViewer:
    image: alexellis2/visualizer-arm:latest
    ports:
      - "8080:8080"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - swarm_network
That's because privileged is not supported in docker swarm. I had a similar docker-compose file running in privileged mode, but when I moved it to docker swarm I removed those options and it worked well.
It's not exactly an error. For example, if you use something like links or depends_on, you get a similar message. These are just warnings, not errors.
This is how you actually check the error logs, if there are any:
docker service ls (to check running services)
docker service logs servicename
The whole feature is implemented and works as far as I can see, so whoever wants to test it can do so by downloading the latest nightly build of the Docker engine (dockerd) from https://master.dockerproject.org and the custom build of the Docker CLI from https://github.com/olljanat/cli/releases/tag/beta1
You can also find usage examples for the CLI in docker/cli#2199 and for Stack in docker/cli#1940. If you find bugs in those, please leave a comment on the corresponding PR. Also note that the syntax might still change during review.
Source: https://github.com/moby/moby/issues/25885#issuecomment-557790402
I've personally tested it and it works like a charm. Thanks to the author.

Use docker-compose with docker swarm

I'm using docker 1.12.1
I have an easy docker-compose script.
version: '2'
services:
  jenkins-slave:
    build: ./slave
    image: jenkins-slave:1.0
    restart: always
    ports:
      - "22"
    environment:
      - "constraint:NODE==master1"
  jenkins-master:
    image: jenkins:2.7.1
    container_name: jenkins-master
    restart: always
    ports:
      - "8080:8080"
      - "50000"
    environment:
      - "constraint:NODE==node1"
I run this script with docker-compose -p jenkins up -d.
This creates my 2 containers, but only on my master (from where I execute the command). I would expect one to be created on the master and one on the node.
I also tried to add
networks:
  jenkins_swarm:
    driver: overlay
and
networks:
  - jenkins_swarm
after every service, but this fails with:
Cannot create container for service jenkins-master: network jenkins_jenkins_swarm not found
while the network does appear when I run docker network ls.
Can someone help me deploy 2 containers on my 2 nodes with docker-compose? Swarm is definitely working on my "cluster"; I followed this tutorial to verify.
Compose doesn't support Swarm Mode at the moment.
When you run docker compose up on the master node, Compose issues docker run commands for the services in the Compose file, rather than docker service create - which is why the containers all run on the master. See this answer for options.
On the second point, networks are scoped in 1.12. If you inspect your network you'll find it's been created at swarm-level, but Compose is running engine-level containers which can't see the swarm network.
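You can see this scoping directly. A quick check, assuming the network name jenkins_jenkins_swarm from the error above (these commands need the swarm from the question, so they are illustrative only):

```shell
# Swarm-created networks show "swarm" in the SCOPE column;
# containers started by plain `docker run` can only attach to "local" scope networks
docker network ls
docker network inspect jenkins_jenkins_swarm --format '{{.Scope}}'
```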
We can do this with compose file version 3 now.
https://docs.docker.com/engine/swarm/#feature-highlights
https://docs.docker.com/compose/compose-file/
You have to initialize the swarm cluster using the command:
$ docker swarm init
You can add more nodes as workers or managers:
https://docs.docker.com/engine/swarm/join-nodes/
Once you have both nodes added to the cluster, pass your compose v3 (i.e. deployment) file to create a stack. The compose file should only reference prebuilt images; you can't give a Dockerfile for deployment in Swarm mode.
$ docker stack deploy -c dev-compose-deploy.yml --with-registry-auth PL
View your stack services' status:
$ docker stack services PL
Use labels and placement constraints to put services on different nodes.
Example dev-compose-deploy.yml file for your reference:
version: "3"
services:
  nginx:
    image: nexus.example.com/pl/nginx-dev:latest
    extra_hosts:
      - "dev-pldocker-01:10.2.0.42"
      - "int-pldocker-01:10.2.100.62"
      - "prd-plwebassets-01:10.2.0.62"
    ports:
      - "80:8003"
      - "443:443"
    volumes:
      - logs:/app/out/
    networks:
      - pl
    deploy:
      replicas: 3
      labels:
        feature.description: "Frontend"
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: any
      placement:
        constraints: [node.role == worker]
    command: "/usr/sbin/nginx"
  viz:
    image: dockersamples/visualizer
    ports:
      - "8085:8080"
    networks:
      - pl
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      replicas: 1
      labels:
        feature.description: "Visualizer"
      restart_policy:
        condition: any
      placement:
        constraints: [node.role == manager]
networks:
  pl:
volumes:
  logs:
