Docker Swarm Deploy ignores placement constraints - docker

So I have an issue with compose version 3.6 and deploying swarm services. What I'm trying to achieve is to deploy only service A or service B, depending on the label value applied to the node.
What's happening is that both services get deployed (or one deploys and the second fails because the port is already in use).
serviceA:
  image: serviceA:v1
  deploy:
    mode: global
    restart_policy:
      condition: on-failure
    placement:
      constraints:
        - node.labels.faces == cpu
  networks:
    - mynet
  ports:
    - "8888:8888"
serviceB:
  image: serviceB:v1
  deploy:
    mode: global
    restart_policy:
      condition: on-failure
    placement:
      constraints:
        - node.labels.faces == gpu
  networks:
    - mynet
  ports:
    - "8888:8888"
On my single node I have defined a label as follows:
# docker node inspect swarm-manager --pretty
ID: 0cpco8658ap5xxvxxblpqggpq
Labels:
- faces=cpu
Hostname: swarm-manager
Is this configuration even possible? Only the service whose constraint matches the node's label should be instantiated.
I want to use global instead of replicated mode because we add additional nodes automatically without going through the manager, but I read in another forum that global mode and placement constraints might not be compatible.
However, if I create everything manually using the CLI, it works as expected:
docker node update --label-add faces=cpu swarm-manager
docker service create -d --name serviceA --constraint node.labels.faces==cpu -p 8888 --mode global serviceA:v1
docker service create -d --name serviceB --constraint node.labels.faces==gpu -p 8888 --mode global serviceB:v1
# docker service ls | grep service
c30y50ez605p   serviceA   global   1/1   serviceA:v1   *:30009->8888/tcp
uxjw41v42vzh   serviceB   global   0/0   serviceB:v1   *:30010->8888/tcp
You can see that the service created with the CPU constraint was scheduled, while the service with the GPU constraint was not instantiated (it stays in the pending state, shown as 0/0).
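For what it's worth, one difference between the CLI commands and the compose file above is the published port: docker service create -p 8888 publishes a randomly assigned ingress port per service (30009/30010 above), while the compose file publishes 8888:8888 for both services, and an ingress-published host port is claimed cluster-wide regardless of placement. A compose sketch of the CLI behaviour (an illustration, not a confirmed fix for the constraint issue) would publish only the container port:

```yaml
services:
  serviceA:
    image: serviceA:v1
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.faces == cpu
    ports:
      # Publish only the container port; swarm assigns the host port
      # from the ingress range, as "docker service create -p 8888" does.
      - "8888"
```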

Related

Disable external node service accessibility in docker swarm

I have a Docker swarm with 2 nodes, and each node runs 2 services in global mode, so each node has 2 services running inside it. My problem is how to force the ubuntu service on node1 to connect only to the mysql service on node1, rather than using the round-robin method to select a mysql service.
So when I connect to mysql from ubuntu on node1 with mysql -hmysql -uroot -p, it should select only the mysql on node1.
Here is the docker-compose file which describes my case:
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    networks:
      app-net: {}
    deploy:
      mode: global
  ubuntu:
    entrypoint: tail -f /dev/null
    deploy:
      mode: global
    image: ubuntu:20.04
    networks:
      app-net: {}
networks:
  app-net: {}
With this docker-compose file, when I try to connect to mysql from inside the ubuntu container, it selects the mysql service on both nodes with a round-robin algorithm.
What I am trying to achieve is to make each service visible only to the services inside the same node.
I can't think of an easy way to achieve what you want in swarm with an overlay network. However, you can use a unix socket instead of the network. Just create a volume, mount it into both MySQL and your application, then make MySQL put its socket on that volume. Docker will create the volume on each node, so your communication stays closed within the node.
If you insist on using network communication, you can mount the node's Docker socket into your app container and use it to find the name of the container running MySQL on that node. Once you have the name, you can use it to connect to that particular instance of the service. Not only is this hard to build, it is also an anti-pattern and a security threat, so I don't recommend implementing this idea.
Finally, there is also Kubernetes, where containers inside a pod can communicate with each other via localhost, but I don't think you'll go that far, will you?
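A minimal sketch of the socket-over-volume idea (the volume name and socket path are my assumptions, not from the original answer):

```yaml
version: '3.8'
services:
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    # Make mysqld place its unix socket on the shared volume.
    command: --socket=/var/run/mysqld-shared/mysqld.sock
    volumes:
      - mysql-sock:/var/run/mysqld-shared
    deploy:
      mode: global
  ubuntu:
    image: ubuntu:20.04
    entrypoint: tail -f /dev/null
    volumes:
      - mysql-sock:/var/run/mysqld-shared
    deploy:
      mode: global

volumes:
  # A local volume is created independently on each node, so the socket
  # is only shared between the containers on that same node.
  mysql-sock:
```

The client would then connect with something like mysql --socket=/var/run/mysqld-shared/mysqld.sock -uroot -p instead of a hostname.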
You should have a look at mode=host.
You can bypass the routing mesh, so that when you access the bound port on a given node, you are always accessing the instance of the service running on that node. This is referred to as host mode. There are a few things to keep in mind.
ports:
  - target: 80
    published: 8080
    protocol: tcp
    mode: host
Unless I'm missing something, I would say you should not use global deployment. Instead, declare 2 ubuntu services and 2 mysql services in the compose file, or deploy 2 separate stacks, and in both cases use placement constraints to pin containers to specific nodes.
An example of the first case would be something like this:
version: '3.8'
services:
  mysql1:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node1]
  mysql2:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
    deploy:
      placement:
        constraints: [node.hostname == node2]
  ubuntu1:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node1]
  ubuntu2:
    entrypoint: tail -f /dev/null
    image: ubuntu:20.04
    deploy:
      placement:
        constraints: [node.hostname == node2]

Docker swarm does not distribute the container in the cluster

I have two servers to use in a Docker Swarm cluster (test only); one is a manager and the other is a worker. But when I run the command docker stack deploy --compose-file docker-compose.yml teste2, all the services run on the manager and the worker does not receive any containers. For some reason Swarm is not distributing the services across the cluster and runs everything on the manager server.
Could my docker-compose.yml be causing the problem, or might it be a network issue?
Here are some settings:
Servers run CentOS 7 with Docker version 18.09.4;
I executed systemctl stop firewalld && systemctl disable firewalld to disable the firewall;
I executed the command docker swarm join --token ... on the worker;
Result of docker node ls:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
993dko0vu6vlxjc0pyecrjeh0 * name.server.manager Ready Active Leader 18.09.4
2fn36s94wjnu3nei75ymeuitr name.server.worker Ready Active 18.09.4
File docker-compose.yml:
version: "3"
services:
  web:
    image: testehello
    deploy:
      replicas: 5
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      # placement:
      #   constraints: [node.role == worker]
    ports:
      - 4000:80
    networks:
      - webnet
  visualizer:
    image: dockersamples/visualizer:stable
    ports:
      - 8080:8080
    stop_grace_period: 1m30s
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  webnet:
I executed the command docker stack deploy --compose-file docker-compose.yml teste2.
In the docker-compose.yml I commented out the placement and constraints parameters because with them the containers did not start on any server; without them, the containers all start on the manager. In the Visualizer, everything appears on the manager.
I think the images are not accessible from the worker node; that is why it does not receive containers. Try to use this guide from Docker: https://docs.docker.com/engine/swarm/stack-deploy/
P.S. I think you solved it already, but just in case.
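A sketch of that guide's approach (the registry port and image name here are assumptions): stand up a registry that every node can reach, push the locally built image, and reference the registry-qualified tag in the compose file so workers can pull it.

```shell
# Start a throwaway registry reachable from all nodes (assumes port 5000 is open).
docker service create --name registry --publish published=5000,target=5000 registry:2

# Tag and push the locally built image so worker nodes can pull it.
docker tag testehello 127.0.0.1:5000/testehello
docker push 127.0.0.1:5000/testehello

# Reference image 127.0.0.1:5000/testehello in docker-compose.yml, then deploy.
docker stack deploy --compose-file docker-compose.yml teste2
```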

How to use Docker Swarm Mode to share data between containers?

I have just started using Docker. I was able to create a docker-compose file which deploys the three components of my application, with the necessary number of replicas, on one host.
I want to replicate the same thing with multiple hosts now.
I have three processes: A [7 copies], B [1 copy], C [1 copy].
I followed the swarm creation tutorial on the Docker website, and managed to create a manager and attach two workers to it.
So now when I run my command
docker stack deploy --compose-file docker-compose.yml perf
it does spawn the required number of containers, but all of them on the manager itself.
I would ideally want it to spawn B and C on the manager, with the copies of A distributed between worker 1 and worker 2.
Here is my docker-compose file:
version: '3'
services:
  A:
    image: A:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/jmeter/log
      - AntOutLogFolder:/antout
      - ZipFilesLocationFolder:/zip
    deploy:
      replicas: 7
      placement:
        constraints: [node.role == worker]
    networks:
      - perfhost
  B:
    container_name: s1_perfSqlDB
    restart: always
    tty: true
    image: mysql:5.5
    environment:
      MYSQL_ROOT_PASSWORD: ''
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "3306:3306"
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - perfhost
  C:
    container_name: s1_scheduler
    image: C:host
    tty: true
    volumes:
      - LogFilesLocationFolder:/log
      - ZipFilesLocationFolder:/zip
      - AntOutLogFolder:/antout
    networks:
      - perfhost
    deploy:
      placement:
        constraints: [node.role == manager]
    ports:
      - "7000:7000"
networks:
  perfhost:
volumes:
  mysql:
  LogFilesLocationFolder:
  ZipFilesLocationFolder:
  AntOutLogFolder:
B) And if I do get this working, how do I use volumes to transfer data between the container for service A and the container for service B, given that they are on different host machines?
A few tips and answers:
For service names I don't recommend capital letters. Use valid DNS hostnames (lowercase, no special characters except -).
container_name isn't supported in swarm and shouldn't be needed. It looks like C: should be something like scheduler, etc. Make the service names simple so they are easy to use/remember on their virtual network.
All services in a single compose file are always on the same Docker network in swarm (and in docker-compose for local development), so there is no need for the network assignment or listing.
restart: always isn't needed in swarm. That setting isn't used there, and it is the default anyway. If you're using it with docker-compose, it's rarely needed, as you usually don't want apps in a respawn loop during errors, which tends to result in CPU churn. I recommend leaving it off.
Volumes use a "volume driver". The default is local, just like with normal docker commands. If you have shared storage, you can use a volume driver plugin from store.docker.com to ensure the shared storage is connected to the correct node.
If you're still having issues with worker/manager task assignment, post the output of docker node ls, and maybe docker service ls and docker node ps <managername>, so we can help troubleshoot.
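As a sketch of the shared-storage idea, the built-in local driver can also mount NFS, so each node resolves the same named volume to the same export (the server address and export path below are assumptions):

```yaml
volumes:
  shared-data:
    driver: local
    driver_opts:
      type: nfs
      # Hypothetical NFS server reachable from every swarm node.
      o: "addr=10.0.0.10,rw,nfsvers=4"
      device: ":/exports/shared-data"
```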
First you should run
docker node ls
and check that all of your nodes are available. If they are, check that the workers have the images they need to run the containers.
I would also try a constraint using the ID of each node instead; you can see the IDs with the previous command.
Run this before docker stack deploy:
mkdir /srv/service/public
docker run --rm -v /srv/service/public:/srv/service/public my-container-with-data cp -R /var/www/app/public /srv/service/public
Then use the directory /srv/service/public as a volume in the containers.

Redis cluster with docker swarm using docker compose

I'm just learning docker and all of its goodness like swarm and compose. My intention is to create a Redis cluster in docker swarm.
Here is my compose file -
version: '3'
services:
  redis:
    image: redis:alpine
    command: ["redis-server", "--appendonly yes", "--cluster-enabled yes", "--cluster-node-timeout 60000", "--cluster-require-full-coverage no"]
    deploy:
      replicas: 5
      restart_policy:
        condition: on-failure
    ports:
      - 6379:6379
      - 16379:16379
networks:
  host:
    external: true
If I add the host network, none of the containers start; if I remove it, the containers start, but when I try to connect it throws an error like CLUSTERDOWN Hash slot not served.
Specs -
Windows 10
Docker Swarm Nodes -
2 Virtual Box VMs running Alpine Linux 3.7.0 with two networks
VirtualBox VM Network -
eth0 - NAT
eth1 - VirtualBox Host-only network
Docker running inside the above VMs -
17.12.1-ce
This seems to work for me; the network config is from here:
version: '3.6'
services:
  redis:
    image: redis:5.0.3
    command:
      - "redis-server"
      - "--cluster-enabled yes"
      - "--cluster-config-file nodes.conf"
      - "--cluster-node-timeout 5000"
      - "--appendonly yes"
    deploy:
      mode: global
      restart_policy:
        condition: on-failure
    networks:
      hostnet: {}
networks:
  hostnet:
    external: true
    name: host
Then run, for example: echo yes | docker run -i --rm --entrypoint redis-cli redis:5.0.3 --cluster create 1.2.3.4{1,2,3}:6379 --cluster-replicas 0
Replace the IPs with your own, obviously.
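Once the create command has run, one way to check that slot assignment succeeded (this verification step is my suggestion, and 1.2.3.41 stands in for one of your node IPs) is to query any node's cluster state:

```shell
# Ask one Redis node for its view of the cluster.
docker run -i --rm redis:5.0.3 redis-cli -h 1.2.3.41 -p 6379 cluster info
# Look for "cluster_state:ok"; "cluster_state:fail" means slots are not
# fully assigned, which is what produces CLUSTERDOWN errors on connect.
```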
For anyone struggling with this: unfortunately, this can't be done via docker-compose.yml yet. Refer to this issue: Start Redis cluster #79. The only way to do this is by getting the IP addresses and ports of all the nodes that are running Redis and then running this command on any of the swarm nodes.
# Gives you all the command help
docker run --rm -it thesobercoder/redis-trib
# This creates all master nodes
docker run --rm -it thesobercoder/redis-trib create 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000
# This creates slave nodes. Note that this requires at least six running nodes
docker run --rm -it thesobercoder/redis-trib create --replicas 1 172.17.8.101:7000 172.17.8.102:7000 172.17.8.103:7000 172.17.8.104:7000 172.17.8.105:7000 172.17.8.106:7000
Here is a repo for a Redis cluster:
https://github.com/jay-johnson/docker-redis-cluster/blob/master/docker-compose.yml

Use docker-compose with docker swarm

I'm using Docker 1.12.1.
I have a simple docker-compose script.
version: '2'
services:
  jenkins-slave:
    build: ./slave
    image: jenkins-slave:1.0
    restart: always
    ports:
      - "22"
    environment:
      - "constraint:NODE==master1"
  jenkins-master:
    image: jenkins:2.7.1
    container_name: jenkins-master
    restart: always
    ports:
      - "8080:8080"
      - "50000"
    environment:
      - "constraint:NODE==node1"
I run this script with docker-compose -p jenkins up -d.
This creates my 2 containers, but only on my master (from where I execute the command). I would expect one to be created on the master and one on the node.
I also tried to add
networks:
  jenkins_swarm:
    driver: overlay
and
networks:
  - jenkins_swarm
after every service, but this fails with:
Cannot create container for service jenkins-master: network jenkins_jenkins_swarm not found
even though the network shows up when I run docker network ls.
Can someone help me deploy the 2 containers on my 2 nodes with docker-compose? Swarm is definitely working on my "cluster"; I followed this tutorial to verify.
Compose doesn't support Swarm Mode at the moment.
When you run docker-compose up on the master node, Compose issues docker run commands for the services in the Compose file, rather than docker service create - which is why the containers all run on the master. See this answer for options.
On the second point, networks are scoped in 1.12. If you inspect your network you'll find it's been created at swarm-level, but Compose is running engine-level containers which can't see the swarm network.
We can do this with docker compose v3 now.
https://docs.docker.com/engine/swarm/#feature-highlights
https://docs.docker.com/compose/compose-file/
You have to initialize the swarm cluster using command
$ docker swarm init
You can add more nodes as workers or managers -
https://docs.docker.com/engine/swarm/join-nodes/
Once you have both nodes added to the cluster, pass your compose v3 deployment file to create a stack. The compose file should contain only prebuilt images; you can't give a Dockerfile for deployment in swarm mode.
$ docker stack deploy -c dev-compose-deploy.yml --with-registry-auth PL
View your stack's service status -
$ docker stack services PL
Try to use labels & placement constraints to put services on different nodes.
Example "dev-compose-deploy.yml" file for your reference:
version: "3"
services:
  nginx:
    image: nexus.example.com/pl/nginx-dev:latest
    extra_hosts:
      - "dev-pldocker-01:10.2.0.42"
      - "int-pldocker-01:10.2.100.62"
      - "prd-plwebassets-01:10.2.0.62"
    ports:
      - "80:8003"
      - "443:443"
    volumes:
      - logs:/app/out/
    networks:
      - pl
    deploy:
      replicas: 3
      labels:
        feature.description: "Frontend"
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: any
      placement:
        constraints: [node.role == worker]
    command: "/usr/sbin/nginx"
  viz:
    image: dockersamples/visualizer
    ports:
      - "8085:8080"
    networks:
      - pl
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      replicas: 1
      labels:
        feature.description: "Visualizer"
      restart_policy:
        condition: any
      placement:
        constraints: [node.role == manager]
networks:
  pl:
volumes:
  logs:
