Not deploying containers on the master node in Docker Swarm

I am working on a project which uses Raspberry Pis as worker nodes and my laptop as the master node. I want to control the deployment of my containers from my laptop, but I want the containers to run on the worker nodes only (which means no containers on the master node). How can I do this with Docker Swarm?

I am going to presume you are using a stack.yml file to describe your deployment using desired-state, but docker service create does have flags for this too.
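For example, the equivalent on the CLI would look like this (a minimal sketch; the service and image names are illustrative):

docker service create --name web --constraint node.role==worker nginx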
There are a number of values that Docker defines which can be tested in a compose file under a service's deploy: placement: constraints: key:
version: "3.9"
services:
  worker:
    image: nginx
    deploy:
      placement:
        constraints:
          - node.role==worker
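Alternatively, if you want nothing scheduled on the manager at all, regardless of per-service constraints, you can drain it (the node name here is a placeholder). Drained nodes stay in the swarm and keep managing it, but the scheduler assigns no tasks to them:

docker node update --availability drain manager-node-name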

Related

Docker swarm run on multiple machines

I deployed a docker swarm on 5 nodes and I have 5 microservices. The swarm assigns the services to only one node. Is there any way to tell docker swarm which node to use for each service, so that one service runs on every node?
Yes, you can do this with the "deploy" configuration option in your compose file. For example:
deploy:
  placement:
    constraints:
      - "node.hostname == desired_machine_hostname"

Statically configure a microservice to run on a specific machine

I created 4 microservices using the Moleculer framework with docker-compose. How do I statically configure each microservice to run on a specific machine?
You may want to use docker swarm, which has a feature called constraints that allows you to deploy a container on a specific node.
Node: a docker node refers to a member of a swarm mode cluster. Every swarm node must be a docker host. Source: What is the difference between docker host and node?
Constraints can be treated as node tags; they are key/value pairs associated with a particular node.
Each node by default has the following constraints:
node.id
node.hostname
node.role
A service can be deployed as follows:
docker service create --name backendapp --constraint 'node.hostname == web.example.com' backendapp-image
Note that you can deploy to a swarm using a docker-compose.yml file; docker stack deploy supports compose file version 3.0 and above:
docker stack deploy --compose-file docker-compose.yml mystack
You can also set constraints in docker-compose, similar to the following example:
version: '3.3'
services:
  web:
    image: backendapp-image
    deploy:
      placement:
        constraints:
          - node.hostname == web.example.com
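After deploying, you can check where each task actually landed; the NODE column of docker service ps shows the host (the stack name mystack follows the deploy command above):

docker service ps mystack_web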
You can get started with docker swarm here.

Website available in standalone container, not in swarm

I have Docker CE running on Windows Server 2016 with 2 images.
When I run these in containers, everything is fine:
docker run --detach --name Website1 --publish 94:94 webimage1
docker run --detach --name Website2 --publish 95:95 webimage2
I can access them through a browser on other PCs:
http://host:94/page1.aspx
http://host:95/page1.aspx
Now I want to run them in swarm.
I've gone through the docker tutorial and have set up a docker-compose file with services and port mappings. The setup has Website1 with 1 replica and Website2 with 2 replicas.
In the output of docker stack services websites, the port numbers show up as follows:
Website1: *:94->94/tcp
Website2: *:95->95/tcp
but I can't access either of them with the following URLs:
http://host:94/page1.aspx
http://host:95/page1.aspx
I get "This site can't be reached".
If I go back to one of my running standalone containers, I see that the port number has a different format:
0.0.0.0:94->94/tcp (WORKING) vs *:94->94/tcp (NOT WORKING)
To initialize the swarm I used docker swarm init with the IP address of the host on port 2377.
Here is how I deployed the stack using the compose file:
docker stack deploy --compose-file docker-stack.yml websites
docker-stack.yml file for reference.
version: "3"
services:
  website1:
    image: website1:latest
    ports:
      - 94:94
    depends_on:
      - website2
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
  website2:
    image: website2:latest
    ports:
      - 95:95
    deploy:
      mode: replicated
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
Any guidance would be greatly appreciated, many thanks.
From the conversation in the comments, the problem is with the stack.
docker stack deploy only works with prebuilt images and schedules replicas across nodes. I believe your images are custom ones that are neither on Docker Hub nor in any private registry. So when the stack schedules a task on a worker node, that node cannot find the image locally and cannot pull it from a registry, so the service won't start there. It works perfectly on the manager node because the image already exists there.
So you have to either set up a local/private registry, push the images to a docker registry, or copy the images from the manager node to the worker node using docker save and docker load; then the stack deployment will work.
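A minimal sketch of the save/load route (the tar file name and worker hostname are illustrative):

# on the manager:
docker save -o website1.tar website1:latest
scp website1.tar worker-node:~
# on the worker:
docker load -i website1.tar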
Please note, when working with swarm and registries that require authentication, you have to pass --with-registry-auth when deploying, e.g. docker stack deploy -c composefile.yml test --with-registry-auth; otherwise the other nodes may not be able to authenticate with the registry, and image pulls will fail on nodes where the image is not found.
Also note that if you set up a local private registry without a certificate, or with a self-signed certificate, you may need to configure it as an insecure registry. I've included a reference for this below.
For testing purposes I recommend setting up a local registry without authentication or certificates, and accessing it by adding it as an insecure registry in the daemon.json file.
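A minimal daemon.json entry for that, assuming the registry is reachable at registry.local:5000 (hostname and port are illustrative); restart the Docker daemon on every node after editing:

{
  "insecure-registries": ["registry.local:5000"]
}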
Now, as per the last comment where you removed the swarm and tried running docker service using:
docker service create --replicas 2 --name contentlinksapi --publish mode=host,target=94,published=94,protocol=tcp contentlinksapi
It threw "port already in use" because it tried to create 2 replicas on the same machine with mode=host publishing: the first replica binds to host port 94, so the second replica fails with a "port already in use" error.
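A sketch of the usual fix: publish through the default ingress routing mesh instead of mode=host, so the host port is owned by the mesh and load-balanced across replicas rather than bound by each task directly:

docker service create --replicas 2 --name contentlinksapi --publish published=94,target=94 contentlinksapi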
For your reference.
Deploy a registry server
Test an insecure registry
Docker service mode (explains why two replicas can land on the same host with docker service create)
Docker save
Docker load

Will my Windows container work on a Linux worker node in swarm?

I'd appreciate expert advice. We have a Docker EE setup on a Red Hat Linux platform.
Given that we have set up Docker EE as:
2 manager nodes (Linux)
2 worker nodes (Linux)
2 worker nodes (Windows Server)
UCP
Docker Swarm
When I build a Windows container to run a .NET console service built on .NET 4.6.2, how does this container get allocated in the swarm?
Questions:
How will this be able to join the swarm?
Will my container be able to run on the worker nodes running a Linux host OS?
How does docker swarm manage fail-over of the nodes? Will the replicas only get distributed on the Windows worker nodes? Does this setup of ours make sense?
I have read that Windows containers only run on Windows hosts, but Linux containers can run on both Linux and Windows host nodes. I will be testing this this week, but it would be great to hear your experiences. //TIA
You join your Windows container hosts to the swarm the same way you join Linux ones (docker swarm join). You assign a label to those nodes to identify them as Windows nodes, and when you deploy a service you specify a constraint for the Windows containers (see the sketch after the link below).
It will work as you would expect with Linux services. A current limitation is that you can only deploy in global mode, i.e. one task per Windows node, since the swarm routing mesh is not yet fully supported on Windows.
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/swarm-mode
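A minimal sketch of that label-based approach (the label key, node name, and image are illustrative):

# on a manager, mark the Windows hosts:
docker node update --label-add os=windows win-worker-1
# then constrain the Windows service to those nodes:
docker service create --name dotnetsvc --constraint 'node.labels.os == windows' my-windows-image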
You no longer need to create OS labels for each node. Docker Swarm recognizes worker node OS automatically. Just specify the desired OS for each service in your compose file:
version: '3'
services:
  service_1:
    restart: on-failure
    image: 'service_1'
    deploy:
      placement:
        constraints:
          - node.platform.os == windows
  junittestsuite:
    restart: on-failure
    image: 'junit_test_suite:1.0'
    command: ant test ...
    deploy:
      placement:
        constraints:
          - node.platform.os == linux
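If you want to confirm which platform the scheduler sees for a given node, you can inspect its description (the node name is illustrative):

docker node inspect --format '{{ .Description.Platform.OS }}' win-worker-1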

How to fetch the IPs of a service in a docker swarm cluster?

I am running a docker swarm mode cluster with 2 nodes, and deploy 5 services: [mysql, mongo, app]. I wish to fill the db with an Ansible script from my manager node, but I cannot get the IP of the nodes to access the db services in the containers.
e.g:
mysql -h {{ mysql_service_host }} ....
How do I get the container IP or the service IP from a node?
Is it possible to use host mode in docker swarm?
For services (containers) that are part of the same network you can simply use the service name; Docker includes a DNS resolver that handles IP resolution. You will need to make your services part of an overlay network, which can span more than one node.
Eg:
services:
  myapp:
    image: myimage:1.0
    deploy:
      replicas: 1
    networks:
      - privnet
  maindb:
    image: mysql
    deploy:
      replicas: 1
    networks:
      - privnet
networks:
  privnet:
    driver: overlay
This creates an overlay network with two services. The corresponding containers could be created on any node. It doesn't matter where. They will all be able to communicate to each other since they're part of the same overlay network.
Within myapp, you can use maindb as a DNS name for the mysql service. It will be resolved by Docker to the proper IP within the privnet network.
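Applied to the question, the Ansible variable can simply be set to the service name (a sketch; the variable name follows the question):

mysql_service_host: maindb
# so the command from the question becomes:
mysql -h maindb ....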
By the way, a swarm cluster with 2 nodes doesn't buy you much: the Raft consensus protocol used by the managers needs a majority to elect a leader, so you want an odd number of managers, and at least 3 to tolerate the failure of one. https://raft.github.io
