Docker Swarm: deploy containers on Worker node

I am trying to deploy the application on multiple instances.
On the master node, I used this set of commands:
docker swarm init
docker network create --attachable --driver overlay fabric
docker stack deploy --compose-file docker-compose-org2.yaml fabric
And the service was deployed on the master node and is running properly.
Now I have another compose file, docker-compose-orderer.yaml, which I want to deploy on another AWS instance.
I used the following command on worker node:
docker swarm join --token SWMTKN-1-29jg0j594eluoy8g86dniy3opax0jphhe3a4w3hjuvglekzt1b-525ene2t4297pgpxp5h5ayf89 <IP>:2377
docker stack deploy --compose-file docker-compose-org1.yaml fabric
The command docker stack deploy --compose-file docker-compose-org1.yaml fabric fails with: this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again
Does anyone know how to deploy a compose file on a worker node?
Any help/suggestion would be appreciated.
Update 1:
The worker node joined the swarm successfully:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
qz9y7p1ba3prp23xtuv3uo2dk ip-172-31-18-206 Ready Active 18.06.1-ce
no97mrg6f7eftbbeu86xg88d9 * ip-172-31-40-235 Ready Active Leader 18.06.1-ce

You must run all docker service and docker stack commands on a manager node; swarm will automatically schedule the containers on the least-used nodes. When you want to explicitly deploy a container on a specific node, you must label that node and work with constraints, as in the sketch below.
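A minimal sketch of the label-and-constraint approach (the label, service name, and image are placeholders, not from the question; the network is the attachable overlay created above):
# on a manager: attach a label to the worker node from the listing above
docker node update --label-add org=org1 ip-172-31-18-206
# create a service that may only be scheduled on nodes carrying that label
docker service create --name org1-peer \
  --constraint 'node.labels.org==org1' \
  --network fabric \
  hyperledger/fabric-peer:latest
A real Fabric peer would also need its usual environment variables and volumes; the point here is only the --constraint flag.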

Related

How to deploy a compose file in docker swarm which is present in Worker node

In system1 (i.e. the hostname of the master node), the swarm is initialized using
docker swarm init
And later the compose files available in system1 (*.yml) are deployed using
docker stack deploy --compose-file file_1.yml system1
docker stack deploy --compose-file file_2.yml system1
docker stack deploy --compose-file file_3.yml system1
Next, in system2 (i.e. the hostname of the worker node), I joined the manager node (system1) using the join token. I ran the command below on the manager, then copied its output:
docker swarm join-token worker
Once I ran that output on system2, it successfully joined the manager node.
I also cross-verified with
docker node ls
and I could see both the manager node and the worker node in the Ready and Active state.
In my case I'm using the worker node (system2) for failover.
I have similar compose files (*.yml) in system2.
How do I get those deployed in docker swarm?
Since system2 is a worker node, I cannot deploy from system2.
First, I'm not sure what you mean by
In my case I'm using the worker node (system2) for failover.
We are running Docker Swarm in production, and the only way you can achieve failover with managers is to use more of them. Because Swarm's managers keep state in a Raft store that needs a quorum, go with an odd number of managers: 1, 3, 5, ... (3 managers tolerate the loss of 1, 5 tolerate the loss of 2).
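A minimal sketch of that (run on the current manager; system2 is the worker from the question): promoting the worker makes it a manager that can take over, though with only two managers losing either one breaks quorum, hence the 1, 3, 5 rule.
docker node promote system2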
As for deployments from non-manager nodes: it is not possible in Docker Swarm unless you use a management service that exposes a proxy for the Docker socket. Such a service runs on a manager, and since it all lives inside Docker Swarm you can then invoke the calls from the worker.
But there is no way to directly deploy to or administer the swarm from a worker node.
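A minimal sketch of that pattern, assuming tecnativa/docker-socket-proxy as the proxy image (the service name, published port, and the exact permission environment variables are assumptions to check against that image's documentation):
# on a manager: publish the manager's Docker socket through a filtering proxy
docker service create --name socket-proxy \
  --constraint node.role==manager \
  --publish 2375:2375 \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  -e SERVICES=1 -e TASKS=1 -e NODES=1 -e NETWORKS=1 -e POST=1 \
  tecnativa/docker-socket-proxy
# on the worker: talk to the swarm through the proxy, which the ingress
# routing mesh makes reachable on every node; note this exposes cluster
# control to anything that can reach the port
DOCKER_HOST=tcp://localhost:2375 docker service ls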
Some things:
First:
Docker contexts are used to communicate with a swarm manager remotely so that you do not have to be on the manager when executing docker commands.
i.e. to deploy remotely to a swarm you could create and then use a context like this:
docker context create swarm1 --docker "host=ssh://user@node1"
docker --context swarm1 stack deploy --compose-file stack.yml stack1
Second:
Once the swarm is set up, you always communicate with a manager node, and it orchestrates the deployment of services to the available worker nodes. If worker nodes are added after services are deployed, docker will not move tasks to them until new deployments are performed, as it prefers not to interrupt running tasks; the goal is eventual balance. If you want to force docker to rebalance and consider the new worker node immediately, just redeploy the stack, or run:
docker service update --force some-service
Third:
To control which worker nodes services run tasks on, you can use placement constraints and node labels.
docker service create --constraint node.role==worker ... would only deploy onto nodes that have the worker role (are not managers)
or
docker service update --constraint-add "node.labels.is-nvidia-enabled==1" some-service would only deploy tasks to the node where you have explicitly labeled the node with the corresponding label and value.
e.g. docker node update --label-add is-nvidia-enabled=1 node1 and likewise for node3 (the command takes one node per invocation).
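For anyone deploying with docker stack deploy instead of docker service create, the same constraint can be expressed in the stack file (the service and image names here are placeholders):
version: "3.8"
services:
  gpu-app:
    image: example/gpu-app:latest   # placeholder image
    deploy:
      placement:
        constraints:
          - node.labels.is-nvidia-enabled==1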

How to set up docker swarm across distributed servers?

I have set up a single-host docker deployment using docker-compose. But now I have 4 server instances running on Vultr, each running different services.
For example,
Server 1: mongodb
Server 2: node/express
Server 3: redux
Server 4: load balancer
How can I connect all these services using docker swarm?
You should create a swarm of nodes using docker swarm init and docker swarm join. Each node is a Docker engine installed on a different host. If you have just 4 hosts, you can decide that all nodes will be managers.
Then you should deploy a docker stack, which will deploy your docker services (mongodb, etc.) from the docker-compose.yml file:
docker stack deploy --compose-file docker-compose.yml <stack-name>
Docker services will run on all nodes according to the number of replicas you specify when you create each service.
If you want each service to run on a specific node, assign labels to each node and add service constraints, as sketched below.
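A minimal sketch of that last step (the hostnames and label values are placeholders for your four Vultr hosts):
# on a manager: give each node a role label
docker node update --label-add role=db server1
docker node update --label-add role=api server2
docker node update --label-add role=frontend server3
docker node update --label-add role=lb server4
Each service in docker-compose.yml then gets a matching deploy.placement constraint, e.g. node.labels.role==db for mongodb.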

How to balance containers on newly added node with same elastic IP?

I need help distributing already-running containers onto a newly added docker swarm worker node.
I am running docker swarm mode on Docker version 18.09.5. I am using AWS autoscaling to create 3 managers and 4 workers. For high availability, if one of the workers goes down, all the containers from that worker node are rebalanced onto the other workers. When autoscaling brings a new node up, I add that worker node to the current swarm setup using some automation. But swarm is not balancing containers onto that worker node. I even tried deploying the docker stack again, but swarm still does not balance the containers. Is it because of a different node ID? How can I customize this? I am deploying the stack with a compose file:
docker stack deploy -c dockerstack.yml NAME
The only (current) way to force re-balancing is to force-update the services. See https://docs.docker.com/engine/swarm/admin_guide/#force-the-swarm-to-rebalance for more information.
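A minimal sketch, using the stack name NAME from the question: force-updating each of the stack's services makes the scheduler re-place their tasks, now taking the new worker into account.
# list the IDs of the stack's services and force-update each one
for svc in $(docker stack services -q NAME); do
  docker service update --force "$svc"
done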

Docker swarm cluster: how to add manager nodes as Reachable

I am using Docker with VirtualBox on a Windows 7 machine.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
default * virtualbox Running tcp://1.2.3.101:2376 v17.04.0-ce
manager1 - virtualbox Running tcp://1.2.3.106:2376 v17.04.0-ce
manager2 - virtualbox Running tcp://1.2.3.105:2376 v17.04.0-ce
worker1 - virtualbox Running tcp://1.2.3.102:2376 v17.04.0-ce
worker2 - virtualbox Running tcp://1.2.3.104:2376 v17.04.0-ce
worker3 - virtualbox Running tcp://1.2.3.103:2376 v17.04.0-ce
$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
e8kum3w0xqd4g02cx1tfps9ni manager1 Down Active
aibbgvqtiv9bhzbs8l20lbx2m * default Ready Active Leader
sbt75u8ayvf7lqj7y3zppjwvk worker1 Ready Active
ny2j5556w4tyflf3tjfqzjrte worker2 Ready Active
veipdd0qs2gjnogftxvr1kfhq worker3 Ready Active
Now I am planning to set up a docker swarm cluster with three manager nodes (default, manager1, manager2) and three worker nodes (worker1, worker2, worker3).
Using the default node, I initialized the swarm with the advertise address:
$ docker swarm init --advertise-addr 1.2.3.101:2376
output starting
Swarm initialized: current node (acbbgvqtiv6bhzbs8l20lbx1e) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-1ie1b420bhs452ubt4iy01brfc97801q0ya608spbt0fnuzkp0-1h2a86acczxe4qta164np487r 1.2.3.101:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
output ending
Using this output I easily added the worker nodes. Now my question is how to add the other managers (manager1, manager2) so they are in the Reachable state. Note that the default node still acts as Leader.
Could anyone please help with this?
Thanks
Sorry for the late answer.
On the existing manager host, get the manager token:
docker swarm join-token manager
Then execute the output of that command on the prospective manager host.
Run this command on the manager node:
docker swarm join-token manager
to get the token for adding other nodes as managers; it should look similar to the worker token you got above.
You need to ssh to the other machine that you want to add as a manager node to the swarm.
Once there, run that command.
For the manager you can also provide the --advertise-addr and --listen-addr flags; they take host:port as a parameter.
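For example, a sketch using the addresses from the docker-machine listing above (the manager token is elided; use the one printed by docker swarm join-token manager):
# run on manager2 (1.2.3.105) to join the swarm led by default (1.2.3.101)
docker swarm join --token <manager-token> \
  --advertise-addr 1.2.3.105:2377 \
  --listen-addr 1.2.3.105:2377 \
  1.2.3.101:2377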
Hope this helps

What is the difference between docker Swarm and Swarm mode?

I was wondering if anyone could differentiate between these two. Both of them have similar naming.
Docker Swarm is a separate product which you can use to cluster multiple Docker hosts. Prior to Docker version 1.12 it was the only native Docker option for clustering hosts, and it needed a lot of additional setup for distributed state, service discovery and security.
With Docker 1.12, Swarm Mode is built into Docker Engine. To run a cluster you just need to install Docker on multiple machines, run docker swarm init to switch to Swarm Mode and docker swarm join to add more nodes to the cluster. State, discovery and security are all included with zero setup.
Swarm Mode is optional, but if you want to run several Docker hosts it's the preferred way. You get reliability, load-balancing, scaling, and rolling service upgrades in 1.12, and it's likely that the bulk of new features will go into Swarm Mode. The original Docker Swarm product will probably only have maintenance updates in the future (although Swarm is open source, just like Docker Engine).
Docker Swarm (also Swarm classic) is fundamentally different from Swarm Mode. Native Swarm functionality will continue to be supported in the Docker 1.12 release; this is done to preserve backward compatibility.
Docker Swarm (classic):
Separate from Docker Engine and can run as Container
Needs external KV store like Consul, etcd, Zookeeper
Usage example:
docker run swarm manage <consul-ip>
docker -H <worker-ip> run swarm join --advertise=<worker-ip> <consul-ip>
Swarm Mode (new, preferable):
Integrated inside Docker engine
No need of separate external KV store
Usage example:
docker swarm init --advertise-addr <manager-ip>
docker -H <worker-ip> swarm join --token <worker-token> <manager-ip>:2377
Docker Swarm:
Docker Swarm is a service which allows users to create and manage a cluster of docker nodes and schedule containers. Each node in a docker swarm is a docker daemon, and the daemons interact using the Docker API.
Swarm Mode:
When we create a cluster of one or more Docker Engines, it's called swarm mode. Swarm mode was introduced in Docker Engine 1.12. A swarm consists of one or more nodes: physical or virtual machines running Docker Engine.
