Able to scale a container with global mode in Docker?

I have three swarm nodes.
I deployed a containerized service in "global" mode through Docker swarm.
Later, I added one more swarm node, bringing the total to four nodes.
How can I deploy the container service to the newly added node?
The docker service scale command can only be used with "replicated" mode.

I would recommend running the following command if you have a lot of services running. This command will rebalance all services evenly across all the docker nodes.
docker service ls -q > docker_services && for i in `cat docker_services`; do docker service update "$i" --detach=false --force ; done
If you have only one service, then use this command (the --force flag makes the service reschedule even though nothing in its definition changed):
docker service update --force <service_name>
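As a hedged side note on the original question: a global-mode service is scheduled automatically on every eligible node, including ones that join the swarm later, so after adding the fourth node you can simply verify placement (the service name below is hypothetical):
docker node ls
docker service ps my-global-service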

Related

Docker container cannot reach other services for a few seconds

I have a docker swarm node running a set of docker services connected by an overlay network. When needed, I dynamically add another docker node via Terraform. It is a separate EC2 instance, set up and connected as a worker node to the existing swarm.
I then run a container from my manager, and that running container needs to talk to the existing services on the manager node, e.g. connecting to the Postgres service and running a few queries.
docker -H <node ip> run --network <overlay network where services are running> <some image> <command>
The script running in the container fails with a "Name or service not known" error. I tried to ping manually by bashing into the container, and the ping succeeds after some 4 or 5 seconds. I tried this hundreds of times and I always get the same issue. Also, it doesn't matter when the node joined the swarm; every time I run the above command, I face the same issue.
Also, I don't have control over what script is run in the container, so I cannot add retries.
One more thing: sometimes some services can be reached immediately. For example, Postgres will fail but another service exposing REST endpoints can be reached. But it's not always the case.
I was able to reproduce this issue with a bunch of test services:
Steps to reproduce the issue:
Create a docker swarm and add another machine as a worker node to the swarm.
Create an overlay network on node 1:
docker network create -d overlay --attachable platform
Create services on node 1:
for i in {1..25}; do
  docker service create --network platform -p :80 --name "service-${i}" dockerbogo/docker-nginx-hello-world
done
Create a task from node 1 to be run on node 2:
docker -H 10.128.0.3:2376 run --rm --network platform centos ping service-1
Docker daemon logs: https://pastebin.com/65Lihp8v
Any help?

Docker: Difference between `docker run` and `docker service`

I am very new to Docker and have just started venturing into this. From reading online, I came to know of two Docker commands: docker run and docker service. As I understand it, with docker run we spin up a new container. However, I am not clear on what docker service does. Does it spin up containers in a swarm?
Can anyone help me understand this in simple terms?
The docker run command creates and starts a container on the local docker host.
A docker "service" is one or more containers with the same configuration running under docker's swarm mode. It's similar to docker run in that you spin up a container. The difference is that you now have orchestration. That orchestration restarts your container if it stops, finds the appropriate node to run the container on based on your constraints, scale your service up or down, allows you to use the mesh networking and a VIP to discover your service, and perform rolling updates to minimize the risk of an outage during a change to your running application.
Docker Run vs Docker service
docker run:
we can create any number of containers, each potentially from a different image, one container per command.
docker service:
we can create any number of containers from the same image with a single command.
SYNTAX:
docker service create --name service-name --network network-name --replicas number-of-containers image-name
EXAMPLE:
docker service create --name service1 --network swarm-net --replicas 5 redis
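As a hedged follow-up to the example above, you can check where swarm placed the five replicas:
docker service ls
docker service ps service1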

What is the difference between Docker Service and Docker Container?

When do we use a docker service create command and when do we use a docker run command?
In short: docker service is mostly used when you have configured a manager node with Docker swarm, so that containers run in a distributed environment and can be easily managed.
Docker run: The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
That is, docker run is equivalent to the API /containers/create then /containers/(id)/start
source: https://docs.docker.com/engine/reference/commandline/run/#parent-command
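A hedged illustration of that equivalence against the Engine API on the local Unix socket (image name is arbitrary; endpoint paths per the linked docs):
curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' \
  -d '{"Image": "nginx:latest"}' -X POST http://localhost/containers/create
# returns {"Id": "<container id>", ...}; then start it
# (assumes the image is already pulled locally; unlike docker run, the API create call does not pull automatically)
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/<container id>/start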
Docker service:
A Docker service is the image for a microservice within the context of some larger application. Examples of services might include an HTTP server, a database, or any other type of executable program that you wish to run in a distributed environment.
When you create a service, you specify which container image to use and which commands to execute inside running containers. You also define options for the service, including the following (a CLI sketch follows below):
the port where the swarm will make the service available outside the swarm
an overlay network for the service to connect to other services in the swarm
CPU and memory limits and reservations
a rolling update policy
the number of replicas of the image to run in the swarm
source: https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/#services-tasks-and-containers
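A hedged sketch mapping those options onto docker service create flags (names and values are hypothetical):
docker service create \
  --name api \
  --publish published=8080,target=80 \
  --network my-overlay \
  --limit-cpu 0.5 \
  --limit-memory 256M \
  --reserve-memory 128M \
  --update-delay 10s \
  --replicas 4 \
  nginx:latest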
The docker run command is used to create a standalone container.
The docker service create command is used to create instances (called tasks) of a service running in a cluster (called a swarm) of computers (called nodes). Those tasks are containers, of course, but not standalone containers. In a sense, a service acts as a template for instantiating tasks.
For example
docker service create --name MY_SERVICE_NAME --replicas 3 IMAGE:TAG
creates 3 tasks of the MY_SERVICE_NAME service, which is based on the IMAGE:TAG image.
More information can be found here
Docker run will start a single container.
With docker service you manage a group of containers (from the same image). You can scale them (start multiple containers) or update them.
You may want to read "docker service is the new docker run"
According to these slides, "docker service create" is like an "evolved" docker run. You need to create a "service" if you want to deploy a container to Docker Swarm.
Docker services are like "blueprints" for containers. You can e.g. define a simple worker as a service, and then scale that service to 20 containers to go through a queue really quickly. Afterwards you scale that service down to 3 containers again. Also, via Swarm these containers could be deployed to different nodes of your swarm.
But yeah, I also recommend reading the documentation, just like @Tristan suggested.
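A hedged sketch of the worker/queue example above (service and image names are hypothetical):
docker service create --name queue-worker --replicas 3 myregistry/queue-worker:latest
# burst to 20 containers to drain the queue quickly, then back down to 3
docker service scale queue-worker=20
docker service scale queue-worker=3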
You can use Docker in two ways.
Standalone mode
When you are using standalone mode, you have the Docker daemon installed on only one machine. Here you have the ability to create/destroy/run a single container or multiple containers on that single machine.
So when you run docker run, the docker CLI sends an API request to the dockerd daemon to run the specified container.
So what you do with the docker run command only affects the single node/machine/host where you are running the command. If you add a volume or network with the container, then those resources are only available on the single node where you ran the docker run command.
Swarm mode (or cluster mode)
When you want or need to utilize the advantages of cluster computing, like high availability, fault tolerance, and horizontal scalability, you can use swarm mode. With swarm mode, you can have multiple nodes/machines/hosts in your cluster and distribute your workload throughout the cluster. You can even initiate swarm mode in a single-node cluster and add more nodes later.
Example
You can recreate the scenario for free here.
Suppose at this moment we have only one node, called node-01.dc.local, where we have run the following commands:
####### Initiating swarm mode ########
$ docker swarm init --advertise-addr eth0
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
####### create a standalone container #######
[node1] (local) root@192.168.0.8 ~
$ docker run -d --name app1 nginx
####### creating a service #######
[node1] (local) root@192.168.0.8 ~
$ docker service create --name app2 nginx
After a while, when you feel that you need to scale your workload, you add another machine named node-02.dc.local, and you want to scale and distribute your service to the newly added node.
So we run the following command on the node-02.dc.local node:
####### Join the second machine/node/host in the cluster #######
[node2] (local) root@192.168.0.7 ~
$ docker swarm join --token SWMTKN-1-21mxdqipe5lvzyiunpbrjk1mnzaxrlksnu0scw7l5xvri4rtjn-590dyij6z342uyxthletg7fu6 192.168.0.8:2377
This node joined a swarm as a worker.
Now, from the first node, I run the following to scale up the service.
####### Listing services #######
[node1] (local) root@192.168.0.8 ~
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
syn9jo2t4jcn app2 replicated 1/1 nginx:latest
####### Scaling app2 from a single container to 10 containers #######
[node1] (local) root@192.168.0.8 ~
$ docker service update --replicas 10 app2
app2
overall progress: 10 out of 10 tasks
1/10: running [==================================================>]
2/10: running [==================================================>]
3/10: running [==================================================>]
4/10: running [==================================================>]
5/10: running [==================================================>]
6/10: running [==================================================>]
7/10: running [==================================================>]
8/10: running [==================================================>]
9/10: running [==================================================>]
10/10: running [==================================================>]
verify: Service converged
[node1] (local) root@192.168.0.8 ~
####### Verifying that app2's workload is distributed to both of the nodes #######
$ docker service ps app2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
z12bzz5sop6i app2.1 nginx:latest node1 Running Running 15 minutes ago
8a78pqxg38cb app2.2 nginx:latest node2 Running Running 15 seconds ago
rcc0l0x09li0 app2.3 nginx:latest node2 Running Running 15 seconds ago
os19nddrn05m app2.4 nginx:latest node1 Running Running 22 seconds ago
d30cyg5vznhz app2.5 nginx:latest node1 Running Running 22 seconds ago
o7sb1v63pny6 app2.6 nginx:latest node2 Running Running 15 seconds ago
iblxdrleaxry app2.7 nginx:latest node1 Running Running 22 seconds ago
7kg6esguyt4h app2.8 nginx:latest node2 Running Running 15 seconds ago
k2fbxhh4wwym app2.9 nginx:latest node1 Running Running 22 seconds ago
2dncdz2fypgz app2.10 nginx:latest node2 Running Running 15 seconds ago
But if you need to scale your app1, you can't, because you created that container in standalone mode.
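If you did want app1 to be scalable too, a hedged option (not part of the original walkthrough) is to recreate it as a service instead of a standalone container:
docker rm -f app1
docker service create --name app1 --replicas 3 nginx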

Difference between docker restart & docker-compose restart

I'm using docker-compose.yml to set up docker containers, and I have started the services using docker-compose up -d.
Now every time I deploy the application to the server I need to restart one of the services.
Previously I used to run the container without docker-compose using just the docker run command like this: docker run --name test-mvn -v "$(pwd)":/usr/src/app test/mvn-spring-boot -d.
And to restart the container I used to do docker restart test-mvn.
But now there are two options out there docker-compose restart and docker restart. I'm not sure which one I should prefer.
I want to know what is the difference between these two options and which one I should use in my case.
With docker-compose you manage services, each of which may consist of multiple containers, while docker manages individual containers. Thus docker-compose restart will restart all the containers of a service, while docker restart restarts only the given containers.
Assuming "one of the services" in your question refers to an individual container I would suggest docker restart.

How to setup multi-host networking with docker swarm on multiple remote machines

Before asking this question I read quite a few articles and Stack Overflow questions, but I couldn't get the right answer for my setup (perhaps it is already answered). Here is the architecture I have been struggling to get to work.
I have three physical machines and I would like to setup the Docker swarm with multi-host networking so that I can run docker-compose.
For example:
Machine 1 (Docker Swarm Manager, contains Consul) (192.168.5.11)
Machine 2 (Docker Swarm Node) (192.168.5.12)
Machine 3 (Docker Swarm Node) (192.168.5.13)
And I need to run docker-compose from any other separate machine.
I have tried the Docker article, but there everything is set up on the same physical machine using docker-machine and VirtualBox. How can I achieve the above on three remote machines? Any help appreciated.
The latest version of Docker has Swarm Mode built in, so you don't need Consul.
To set up on your boxes, make sure they all have a Docker version of 1.12 or higher, and then you just need to initialise the swarm and join it.
On Machine 1 run:
docker swarm init --advertise-addr 192.168.5.11
The output from that will tell you the command to run on Machine 2 and 3 to join them to the swarm. You'll have a unique swarm token, and the command is something like:
docker swarm join \
--token SWMTKN-1-49nj1... \
192.168.5.11:2377
Now you have a 3-node swarm. Back on Machine 1 you can create a multi-host overlay network:
docker network create -d overlay my-app
And then you run workloads in the network by deploying services. If you want to use Compose with Swarm Mode, you need to use distributed application bundles - which are currently only in the experimental build of Docker.
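For example, a hedged sketch of deploying a service onto that overlay network (service name and image are hypothetical):
docker service create --name web --network my-app --replicas 3 nginx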
I figured this needs an update, as Docker Compose files are now supported in Docker swarm mode.
Initialize the swarm on Machine 1 using
docker swarm init --advertise-addr 192.168.5.11
Join the swarm from Machine 2 & 3 using
docker swarm join \
--token <swarm token from previous step> 192.168.5.11:2377 \
--advertise-addr eth0
eth0 is the network interface on machines 2 & 3, & could be different
based on your config. I found that without the --advertise-addr
option, containers couldn't talk to each other across hosts.
To list all the nodes in the swarm & see their status
docker node ls
After this, deploy the stack (group of services or containers) from a compose file
docker stack deploy -c <compose-file> my-app
This will create all the containers across multiple hosts
To list services (containers) on the swarm run docker service ls
See docker docs Getting started with swarm mode
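As a hedged follow-up, you can also inspect the deployment at the stack level (stack name from the example above):
docker stack services my-app
docker stack ps my-app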
