Statically configure a microservice to run on a specific machine - docker

I created 4 microservices using the Moleculer framework with docker-compose. How do I statically configure each microservice to run on a specific machine?

You may want to use Docker Swarm, which has a feature called constraints that lets you deploy a container on a specific node.
Node: a Docker node refers to a member of a swarm-mode cluster. Every swarm node must be a Docker host. Source: What is the difference between docker host and node?
Constraints can be treated as node tags: they are key/value pairs associated with a particular node.
Each node by default has the following constraints:
node.id
node.hostname
node.role
A service can be deployed as follows:
docker service create --name backendapp --constraint 'node.hostname == web.example.com' backendapp-image
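Besides the built-in values, you can attach your own labels to a node and constrain on those; a minimal sketch, where the label key/value and node name are placeholders:
docker node update --label-add datacenter=east worker-1
docker service create --name backendapp \
  --constraint 'node.labels.datacenter == east' \
  backendapp-image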
Note that you can deploy to a swarm using a docker-compose.yml file; the deploy key requires compose file version 3.0 and above:
docker stack deploy --compose-file docker-compose.yml mystack
You can also set constraints in docker-compose.yml, as in the following example:
version: '3.3'
services:
  web:
    image: backendapp-image
    deploy:
      placement:
        constraints:
          - node.hostname == web.example.com
You can get started with Docker Swarm here.

Related

Docker swarm run on multiple machines

I deployed a docker swarm on 5 nodes and I have 5 microservices. The docker swarm assigns the services to only one node. Is there any way to tell docker swarm which node to use for each service, so that one service is assigned to every node?
Yes, you can do this with the deploy configuration option in your compose file. For example:
deploy:
  placement:
    constraints:
      - "node.hostname == desired_machine_hostname"

Don't deploy containers on the master node in Docker Swarm

I am working on a project which uses Raspberry Pis as worker nodes and my laptop as the master node. I want to control the deployment of my containers from my laptop, but I want the containers to run on the worker nodes only (which means no containers on the master node). How can I do this with Docker Swarm?
I am going to presume you are using a stack.yml file to describe your deployment using desired state, but docker service create has flags for this too (see the example after the snippet below).
There are a number of values that docker defines that can be tested under a placement constraints node:
version: "3.9"
service:
worker:
image: nginx
deploy:
placement:
constraints:
- node.role==worker
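The equivalent with docker service create flags is sketched below (the service name is a placeholder); you can also drain the manager so no tasks are ever scheduled on it:
docker service create \
  --name worker-only-service \
  --constraint 'node.role == worker' \
  nginx

# Optional: keep the manager free of tasks entirely
docker node update --availability drain <manager-hostname>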

Website available in standalone container, not in swarm

I have Docker CE running on Windows Server 2016 with 2 images.
When I run these in containers, everything is fine:
docker run --detach --name Website1 --publish 94:94 webimage1
docker run --detach --name Website2 --publish 95:95 webimage2
I can access them through a browser on other PCs:
http://host:94/page1.aspx
http://host:95/page1.aspx
Now I want to run them in swarm.
I've gone through the docker tutorial and I've set up a docker-compose file with services and port mapping. The setup has Website1 with 1 replica and Website2 with 2 replicas.
In the output of docker stack services websites, the port numbers show up as follows:
Website1: *:94->94/tcp
Website2: *:95->95/tcp
But I can't access either of them with the following URLs:
http://host:94/page1.aspx
http://host:95/page1.aspx
I get - This site can't be reached
If I go back to one of my running containers, I see that the port number has a different format.
0.0.0.0:94->94/tcp (WORKING) VS *:94->94/tcp (NOT WORKING)
To initialize docker swarm I used docker swarm init with the IP address of the host on port 2377.
Here is how I deployed the docker stack using the compose file:
docker stack deploy --compose-file docker-stack.yml websites
docker-stack.yml file for reference.
version: "3"
services:
website1:
image: website1:latest
ports:
- 94:94
depends_on:
- website2
deploy:
replicas: 1
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 120s
website2:
image: website2:latest
ports:
- 95:95
deploy:
mode: replicated
replicas: 2
update_config:
parallelism: 2
delay: 10s
restart_policy:
condition: on-failure
delay: 10s
max_attempts: 3
window: 120s
Any guidance would be greatly appreciated, many thanks.
From the conversation in the comments, the problem is with the stack.
Docker stack only supports deploying from pre-built images; it does not build them. I believe your images are custom ones which are neither on Docker Hub nor in any private repo. So when you try to deploy the stack, the service that gets scheduled on the worker node can't find the image and is unable to download it from a repo, so it won't start on the worker node. It works perfectly on the manager node because the image already exists there.
So you have to either set up a local/private registry, push the images to a docker registry, or copy the images from the manager node to the worker node using docker save and docker load (a sketch follows); then deploy the stack again and it will work.
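A minimal sketch of the save/load route, using the image names from your stack file and a placeholder worker host:
# On the manager node: export the images to tar archives
docker save -o website1.tar website1:latest
docker save -o website2.tar website2:latest

# Copy the archives to the worker node by whatever means you prefer (user/host are placeholders)
scp website1.tar website2.tar user@worker-node:/tmp/

# On the worker node: load the images into the local image store
docker load -i /tmp/website1.tar
docker load -i /tmp/website2.tar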
Please note that when working with swarm and registries, if you use authentication for the registry you have to pass --with-registry-auth when deploying the stack, e.g. docker stack deploy -c composefile.yml test --with-registry-auth; otherwise the other nodes may not be able to authenticate with the registry, which results in a failure to pull images that are not found locally.
Also note that if you set up a local private registry, either with no certificate or with a self-signed certificate, you may need to configure it as an insecure registry. I've included a reference for this below.
For testing purposes I recommend setting up a local registry without any authentication or certificates and accessing it by adding it as an insecure registry in the daemon.json file.
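As a sketch, the entry in daemon.json would look like this (the registry address is a placeholder; the file is usually /etc/docker/daemon.json on Linux and C:\ProgramData\docker\config\daemon.json on Windows Server, and the Docker daemon must be restarted afterwards):
{
  "insecure-registries": ["registry.local:5000"]
}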
Now, as per the last comment, where you removed swarm and tried running a docker service with:
docker service create --replicas 2 --name contentlinksapi --publish mode=host,target=94,published=94,protocol=tcp contentlinksapi
It threw "port already in use" because it tries to create 2 replicas on the same machine: the first replica binds to port 94, so the second replica fails with a "port already in use" error.
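If the replicas don't have to bind the host port directly, dropping mode=host and publishing through the default ingress mode avoids the collision, because the routing mesh load-balances the published port across replicas; a sketch:
docker service create --replicas 2 --name contentlinksapi \
  --publish published=94,target=94,protocol=tcp contentlinksapi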
For your reference.
Deploy a registry server
Test an insecure registry
Docker service mode (to understand why two replicas of a service were deployed on the same host with docker service create)
Docker save
Docker load

Can a docker swarm stack distribute services evenly across all nodes?

I have:
three nodes: 1 swarm manager and 2 swarm worker nodes
an application cluster whose services are connected to each other
docker-compose.yml
services:
  service1:
    ports:
      - 8888:8888
    environment:
      - ADDITIONAL_NODES=service2:8889,service3:8890
  service2:
    ports:
      - 8889:8889
    environment:
      - ADDITIONAL_NODES=service1:8888,service3:8890
  service3:
    ports:
      - 8890:8890
    environment:
      - ADDITIONAL_NODES=service1:8888,service2:8889
If I run docker stack deploy -c docker-compose.yml server:
1. swarm manager (service1), swarm node1 (service2), swarm node2 (service3)
2. swarm manager (service1, service2, service3), swarm node1 (service1, service2, service3), swarm node2 (service1, service2, service3)
Which one will be the result?
If it is 2, how can I deploy like 1 using docker swarm? I need to use docker swarm because I'm also using a docker overlay network.
If it is 1, then how are my services distributed? Are they distributed "evenly"? If so, from what perspective is the distribution "even"?
Docker swarm has some logic which it uses to decide which services run on which nodes. It might not be 100% what you expect, but there are smart people working on this and they may consider things that you don't (such as CPU load, available RAM, ...).
The goal is to spread the load evenly, as in your example 1. If some services cannot start on a node for some reason (for example, you use a private registry but didn't specify --with-registry-auth in stack deploy), then those services will all start on the nodes that can run them, after failing on the other nodes.
From personal experience I can tell you that it spreads tasks nicely across the swarm, but there's no guarantee of where each service ends up.
If you want to force where services run, use constraints, as sketched below.
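For example, to force the layout from your option 1, a sketch with placement constraints (the hostnames are placeholders for your actual node names):
services:
  service1:
    deploy:
      placement:
        constraints:
          - node.role == manager
  service2:
    deploy:
      placement:
        constraints:
          - node.hostname == node1
  service3:
    deploy:
      placement:
        constraints:
          - node.hostname == node2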

How do I run docker-compose up on a docker swarm?

I'm new to Docker and trying to get started by deploying a hello-world Flask app locally on Docker Swarm.
So far I have my Flask app, a Dockerfile, and a docker-compose.yml file.
version: "3"
services:
webapp:
build: .
ports:
- "5000:5000"
docker-compose up works fine and deploys my Flask app.
I have started a Docker Swarm with docker swarm init, which I understand created a swarm with a single node:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
efcs0tef4eny6472eiffiugqp * moby Ready Active Leader
Now, I don't want workers or anything else, just a single node (the manager node created by default), and to deploy my image there.
Looking at these instructions https://docs.docker.com/get-started/part4/#create-a-cluster it seems like I have to create a VM driver, then scp my files there, and ssh to run docker-compose up. Is that the normal way of working? Why do I need a VM? Can't I just run docker-compose up on the swarm manager? I didn't find a way to do so, so I'm guessing I'm missing something.
Running docker-compose up will create individual containers directly on the host.
With swarm mode, all the commands to manage containers have shifted to docker stack and docker service which manage containers across multiple hosts. The docker stack deploy command accepts a compose file with the -c arg, so you would run the following on a manager node:
docker stack deploy -c docker-compose.yml stack_name
to create a stack named "stack_name" based on the version 3 yml file. This command works the same regardless of whether you have one node or a large cluster managed by your swarm.
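One caveat with the compose file you posted: docker stack deploy ignores the build key, so the image has to be built and available before deploying. A minimal sketch (the image tag is a placeholder):
# Build the image locally first (single-node swarm, so the image is already on the manager)
docker build -t flask-webapp:latest .

Then reference the image in docker-compose.yml instead of build:

version: "3"
services:
  webapp:
    image: flask-webapp:latest
    ports:
      - "5000:5000"

and deploy it with docker stack deploy -c docker-compose.yml stack_name as above.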
